diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 00000000..e3ee594e --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,25 @@ +--- +name: Bug report +about: Create a report to help us improve + +--- + +**Describe the bug** +A clear and concise description of what the bug is. + +**To Reproduce** +Steps to reproduce the behavior. + +**Expected behavior** +A clear and concise description of what you expected to happen. + +**Screenshots** +If applicable, add screenshots to help explain your problem. + +**Environment (please complete the following information):** + - Application Platform: [e.g. Pivotal Cloud Foundry, Kubernetes, OpenShift] + - Application Platform Version: [e.g. k8s 1.11.0] + - Broker Version [e.g. 0.1.25] + +**Additional context** +Add any other context about the problem here. diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md new file mode 100644 index 00000000..066b2d92 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -0,0 +1,17 @@ +--- +name: Feature request +about: Suggest an idea for this project + +--- + +**Is your feature request related to a problem? Please describe.** +A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] + +**Describe the solution you'd like** +A clear and concise description of what you want to happen. + +**Describe alternatives you've considered** +A clear and concise description of any alternative solutions or features you've considered. + +**Additional context** +Add any other context or screenshots about the feature request here. diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index ab40d21d..842f2726 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,6 +1,30 @@ -*Issue #, if available:* + -*Description of changes:* +## Overview +Brief description of what this PR does, and why it is needed (use case)? + +## Related Issues + +**Which issue(s) this PR fixes** *(optional, in `fixes #(, fixes #, ...)` format, will close the issue(s) when PR gets merged)*: +Fixes # + +## Testing + +How did you validate the changes in this PR? If there are unit tests included describe what they test + +### Notes + +Optional. Caveats, Alternatives, Other relevant information. + +## Testing Instructions + + How to test this PR Start after checking out this branch (bulleted) + * Include test case, and expected output + +## License By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. 
diff --git a/.gitignore b/.gitignore index 4a3087af..682e92c5 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,13 @@ +/servicebroker +/servicebroker-linux +/functional-testing/aws-servicebroker +.idea/ +/packaging/cloudfoundry/product/ +/packaging/cloudfoundry/release/ +/packaging/cloudfoundry/resources/cfnsb +/packaging/helm/aws-servicebroker-*.tgz +/packaging/helm/index.yaml + # General ignores .DS_Store *.zip @@ -106,4 +116,5 @@ venv.bak/ /site # mypy -.mypy_cache/ \ No newline at end of file +.mypy_cache/ + diff --git a/CODEOWNERS b/CODEOWNERS new file mode 100644 index 00000000..6791194b --- /dev/null +++ b/CODEOWNERS @@ -0,0 +1,3 @@ +# Maintainers +* @jaymccon @vsomayaji + diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 00000000..3b644668 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,4 @@ +## Code of Conduct +This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). +For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact +opensource-codeofconduct@amazon.com with any additional questions or comments. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000..71f693c1 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,61 @@ +# Contributing Guidelines + +Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional +documentation, we greatly value feedback and contributions from our community. + +Please read through this document before submitting any issues or pull requests to ensure we have all the necessary +information to effectively respond to your bug report or contribution. + + +## Reporting Bugs/Feature Requests + +We welcome you to use the GitHub issue tracker to report bugs or suggest features. + +When filing an issue, please check [existing open](https://github.com/awslabs/aws-servicebroker/issues), or [recently closed](https://github.com/awslabs/aws-servicebroker/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already +reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: + +* A reproducible test case or series of steps +* The version of our code being used +* Any modifications you've made relevant to the bug +* Anything unusual about your environment or deployment + + +## Contributing via Pull Requests +Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: + +1. You are working against the latest source on the *master* branch. +2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. +3. You open an issue to discuss any significant work - we would hate for your time to be wasted. + +To send us a pull request, please: + +1. Fork the repository. +2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. +3. Ensure local tests pass. +4. Commit to your fork using clear commit messages. +5. Send us a pull request, answering any default questions in the pull request interface. +6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 
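+
+As a rough sketch of that workflow (the fork URL and branch name below are placeholders; `make test` is the repository's test target), the steps might look like:
+
+```bash
+# Clone your fork and create a topic branch for the change
+git clone https://github.com/<your-username>/aws-servicebroker.git
+cd aws-servicebroker
+git checkout -b my-fix
+
+# Make the change, then run the local tests
+make test
+
+# Commit with a clear message and push the branch to your fork
+git commit -am "Describe the change"
+git push origin my-fix
+```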
+ +GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and +[creating a pull request](https://help.github.com/articles/creating-a-pull-request/). + + +## Finding contributions to work on +Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels ((enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/aws-servicebroker/labels/help%20wanted) issues is a great place to start. + + +## Code of Conduct +This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). +For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact +opensource-codeofconduct@amazon.com with any additional questions or comments. + + +## Security issue notifications +If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. + + +## Licensing + +See the [LICENSE](https://github.com/awslabs/aws-servicebroker/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. + +We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 00000000..b6878ffa --- /dev/null +++ b/Dockerfile @@ -0,0 +1,20 @@ +FROM deis/go-dev as builder + +ENV PROJECT_DIR=/go/src/github.com/awslabs/aws-service-broker +RUN mkdir -p $PROJECT_DIR +WORKDIR $PROJECT_DIR +ARG SOURCE_DIR="./" + +COPY $SOURCE_DIR . + +RUN dep ensure && make test && make linux + +FROM alpine:latest + +RUN apk add --no-cache ca-certificates bash + +COPY --from=builder /go/src/github.com/awslabs/aws-service-broker/servicebroker-linux /usr/local/bin/aws-servicebroker +COPY --from=builder /go/src/github.com/awslabs/aws-service-broker/scripts/start_broker.sh /usr/local/bin/ +RUN chmod +x /usr/local/bin/start_broker.sh + +CMD ["start_broker.sh"] diff --git a/Gopkg.lock b/Gopkg.lock new file mode 100644 index 00000000..277a2437 --- /dev/null +++ b/Gopkg.lock @@ -0,0 +1,265 @@ +# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. 
+ + +[[projects]] + name = "github.com/abbot/go-http-auth" + packages = ["."] + revision = "0ddd408d5d60ea76e320503cc7dd091992dee608" + version = "v0.4.0" + +[[projects]] + name = "github.com/aws/aws-sdk-go" + packages = [ + "aws", + "aws/awserr", + "aws/awsutil", + "aws/client", + "aws/client/metadata", + "aws/corehandlers", + "aws/credentials", + "aws/credentials/ec2rolecreds", + "aws/credentials/endpointcreds", + "aws/credentials/stscreds", + "aws/defaults", + "aws/ec2metadata", + "aws/endpoints", + "aws/request", + "aws/session", + "aws/signer/v4", + "awstesting/mock", + "internal/sdkio", + "internal/sdkrand", + "internal/shareddefaults", + "private/protocol", + "private/protocol/json/jsonutil", + "private/protocol/jsonrpc", + "private/protocol/query", + "private/protocol/query/queryutil", + "private/protocol/rest", + "private/protocol/restxml", + "private/protocol/xml/xmlutil", + "service/cloudformation", + "service/cloudformation/cloudformationiface", + "service/dynamodb", + "service/dynamodb/dynamodbattribute", + "service/iam", + "service/s3", + "service/s3/s3iface", + "service/ssm", + "service/ssm/ssmiface", + "service/sts", + "service/sts/stsiface" + ] + revision = "827e7eac8c2680d5bdea7bc3ef29c596eabe1eae" + version = "v1.13.59" + +[[projects]] + branch = "master" + name = "github.com/beorn7/perks" + packages = ["quantile"] + revision = "3a771d992973f24aa725d07868b467d1ddfceafb" + +[[projects]] + name = "github.com/davecgh/go-spew" + packages = ["spew"] + revision = "346938d642f2ec3594ed81d874461961cd0faa76" + version = "v1.1.0" + +[[projects]] + name = "github.com/go-errors/errors" + packages = ["."] + revision = "a6af135bd4e28680facf08a3d206b454abc877a4" + version = "v1.0.1" + +[[projects]] + name = "github.com/go-ini/ini" + packages = ["."] + revision = "06f5f3d67269ccec1fe5fe4134ba6e982984f7f5" + version = "v1.37.0" + +[[projects]] + branch = "master" + name = "github.com/golang/glog" + packages = ["."] + revision = "23def4e6c14b4da8ac2ed8007337bc5eb5007998" + +[[projects]] + name = "github.com/golang/protobuf" + packages = ["proto"] + revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265" + version = "v1.1.0" + +[[projects]] + name = "github.com/gorilla/context" + packages = ["."] + revision = "08b5f424b9271eedf6f9f0ce86cb9396ed337a42" + version = "v1.1.1" + +[[projects]] + name = "github.com/gorilla/mux" + packages = ["."] + revision = "e3702bed27f0d39777b0b37b664b6280e8ef8fbf" + version = "v1.6.2" + +[[projects]] + branch = "basic_auth" + name = "github.com/jaymccon/osb-broker-lib" + packages = ["pkg/server"] + revision = "18f3aa144f1880846501710964f645fdbcc534b5" + +[[projects]] + name = "github.com/jmespath/go-jmespath" + packages = ["."] + revision = "0b12d6b5" + +[[projects]] + branch = "master" + name = "github.com/koding/cache" + packages = ["."] + revision = "e8a81b0b3f20f895153311abde1062894b5912d6" + +[[projects]] + name = "github.com/matttproud/golang_protobuf_extensions" + packages = ["pbutil"] + revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c" + version = "v1.0.1" + +[[projects]] + name = "github.com/pmezard/go-difflib" + packages = ["difflib"] + revision = "792786c7400a136282c1664665ae0a8db921c6c2" + version = "v1.0.0" + +[[projects]] + name = "github.com/pmorie/go-open-service-broker-client" + packages = ["v2"] + revision = "dca737037ce636eb282e84e3a1c7479c9692e884" + version = "0.0.10" + +[[projects]] + branch = "master" + name = "github.com/pmorie/osb-broker-lib" + packages = [ + "pkg/broker", + "pkg/metrics", + "pkg/rest" + ] + revision = 
"87d71cfbf3427836e5623f1e3843a466348b7be6" + +[[projects]] + name = "github.com/prometheus/client_golang" + packages = [ + "prometheus", + "prometheus/promhttp" + ] + revision = "c5b7fccd204277076155f10851dad72b76a49317" + version = "v0.8.0" + +[[projects]] + branch = "master" + name = "github.com/prometheus/client_model" + packages = ["go"] + revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c" + +[[projects]] + branch = "master" + name = "github.com/prometheus/common" + packages = [ + "expfmt", + "internal/bitbucket.org/ww/goautoneg", + "model" + ] + revision = "7600349dcfe1abd18d72d3a1770870d9800a7801" + +[[projects]] + branch = "master" + name = "github.com/prometheus/procfs" + packages = [ + ".", + "internal/util", + "nfs", + "xfs" + ] + revision = "61aaa706c6d4fda9365b6273c96839eb7e27f6e4" + +[[projects]] + name = "github.com/satori/go.uuid" + packages = ["."] + revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3" + version = "v1.2.0" + +[[projects]] + name = "github.com/stretchr/testify" + packages = ["assert"] + revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686" + version = "v1.2.2" + +[[projects]] + branch = "master" + name = "golang.org/x/crypto" + packages = [ + "bcrypt", + "blowfish" + ] + revision = "de0752318171da717af4ce24d0a2e8626afaeb11" + +[[projects]] + branch = "master" + name = "golang.org/x/net" + packages = ["context"] + revision = "c39426892332e1bb5ec0a434a079bf82f5d30c54" + +[[projects]] + branch = "v2" + name = "gopkg.in/mgo.v2" + packages = [ + ".", + "bson", + "internal/json", + "internal/sasl", + "internal/scram" + ] + revision = "3f83fa5005286a7fe593b055f0d7771a7dce4655" + +[[projects]] + name = "gopkg.in/yaml.v2" + packages = ["."] + revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183" + version = "v2.2.1" + +[solve-meta] + analyzer-name = "dep" + analyzer-version = 1 + input-imports = [ + "github.com/aws/aws-sdk-go/aws", + "github.com/aws/aws-sdk-go/aws/awserr", + "github.com/aws/aws-sdk-go/aws/credentials", + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds", + "github.com/aws/aws-sdk-go/aws/credentials/stscreds", + "github.com/aws/aws-sdk-go/aws/ec2metadata", + "github.com/aws/aws-sdk-go/aws/request", + "github.com/aws/aws-sdk-go/aws/session", + "github.com/aws/aws-sdk-go/awstesting/mock", + "github.com/aws/aws-sdk-go/service/cloudformation", + "github.com/aws/aws-sdk-go/service/dynamodb", + "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute", + "github.com/aws/aws-sdk-go/service/s3", + "github.com/aws/aws-sdk-go/service/s3/s3iface", + "github.com/aws/aws-sdk-go/service/ssm", + "github.com/aws/aws-sdk-go/service/sts", + "github.com/aws/aws-sdk-go/service/sts/stsiface", + "github.com/go-errors/errors", + "github.com/golang/glog", + "github.com/jaymccon/osb-broker-lib/pkg/server", + "github.com/koding/cache", + "github.com/pmorie/go-open-service-broker-client/v2", + "github.com/pmorie/osb-broker-lib/pkg/broker", + "github.com/pmorie/osb-broker-lib/pkg/metrics", + "github.com/pmorie/osb-broker-lib/pkg/rest", + "github.com/prometheus/client_golang/prometheus", + "github.com/satori/go.uuid", + "github.com/stretchr/testify/assert", + "gopkg.in/yaml.v2", + ] + solver-name = "gps-cdcl" + solver-version = 1 diff --git a/Gopkg.toml b/Gopkg.toml new file mode 100644 index 00000000..35d49643 --- /dev/null +++ b/Gopkg.toml @@ -0,0 +1,24 @@ +[prune] + go-tests = true + non-go = true + unused-packages = true + +[[constraint]] + branch = "master" + name = "github.com/golang/glog" + +[[constraint]] + name = 
"github.com/pmorie/go-open-service-broker-client" + version = "0.0.10" + +[[constraint]] + name = "github.com/aws/aws-sdk-go" + version = "1.13.34" + +[[constraint]] + branch = "master" + name = "github.com/pmorie/osb-broker-lib" + +[[constraint]] + branch = "basic_auth" + name = "github.com/jaymccon/osb-broker-lib" \ No newline at end of file diff --git a/LICENSE b/LICENSE new file mode 100644 index 00000000..4b36afc8 --- /dev/null +++ b/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2018 Amazon Web Services, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/Makefile b/Makefile new file mode 100644 index 00000000..5a7ce946 --- /dev/null +++ b/Makefile @@ -0,0 +1,68 @@ +IMAGE ?= my-docker-org/aws-servicebroker:latest +BUCKET_NAME ?= my-helm-repo-bucket +BUCKET_PREFIX ?= /charts +HELM_URL ?= https://$(BUCKET_NAME).s3.amazonaws.com$(BUCKET_PREFIX) +S3URI ?= $(shell echo $(HELM_URL)/ | sed 's/https:/s3:/' | sed 's/.s3.amazonaws.com//') +ACL ?= private +PROFILE_NAME ?= "" +PROFILE ?= $(shell if [ "${PROFILE_NAME}" != "" ] ; then echo "--profile ${PROFILE_NAME}" ; fi) +VERSION ?= $(shell cat ./version) + +build: ## Builds the starter pack + go build -i github.com/awslabs/aws-service-broker/cmd/servicebroker + +test: ## Runs the tests + go test -v $(shell go list ./... | grep -v /vendor/ | grep -v /test/) + +functional-test: ## Builds and execs a minikube image for functional testing + GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \ + go build -o functional-testing/aws-servicebroker --ldflags="-s" github.com/awslabs/aws-service-broker/cmd/servicebroker && \ + cd functional-testing ; \ + docker build -t aws-sb:functest . && \ + docker run --privileged -it --rm aws-sb:functest /start.sh ; \ + cd ../ + +linux: ## Builds a Linux executable + GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \ + go build -o servicebroker-linux --ldflags="-s" github.com/awslabs/aws-service-broker/cmd/servicebroker + +cf: ## Builds a PCF tile and bosh release + GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \ + go build -o packaging/cloudfoundry/resources/cfnsb --ldflags="-s" github.com/awslabs/aws-service-broker/cmd/servicebroker && \ + cd packaging/cloudfoundry/ ; \ + tile build $(VERSION); \ + cd ../../ + +image: ## Builds docker image + docker build . -t $(IMAGE) + +clean: ## Cleans up build artifacts + rm -f servicebroker + rm -f servicebroker-linux + rm -f functional-testing/aws-servicebroker + rm -rf packaging/cloudfoundry/product + rm -rf packaging/cloudfoundry/release + rm -f packaging/helm/index.yaml + rm -f packaging/helm/aws-servicebroker-*.tgz + +helm: ## Creates helm release and repository index file + cd packaging/helm/ ; \ + helm package aws-servicebroker --version $(VERSION) && \ + helm repo index . --url $(HELM_URL) ; \ + cd ../../ + +deploy-chart: ## Deploys helm chart and index file to S3 path specified by HELM_URL + make helm && \ + aws s3 cp packaging/helm/aws-servicebroker-*.tgz $(S3URI) --acl $(ACL) $(PROFILE) && \ + aws s3 cp packaging/helm/index.yaml $(S3URI) --acl $(ACL) $(PROFILE) + +help: ## Shows the help + @echo 'Usage: make ... ' + @echo '' + @echo 'Available targets are:' + @echo '' + @grep -E '^[ a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \ + awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' + @echo '' + +.PHONY: build test functional-test linux cf image helm deploy-chart clean help diff --git a/NOTICE b/NOTICE new file mode 100644 index 00000000..a2581ab8 --- /dev/null +++ b/NOTICE @@ -0,0 +1,2 @@ +AWS Service Broker +Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. diff --git a/README.md b/README.md new file mode 100644 index 00000000..add97c2e --- /dev/null +++ b/README.md @@ -0,0 +1,9 @@ +

The AWS Service Broker exposes native AWS services through application platforms that implement the [Open Service Broker API](https://github.com/openservicebrokerapi/servicebroker/), providing simple integration of AWS services directly within the application platform.



+ +### [Homepage](https://aws.amazon.com/partners/servicebroker/) + +### [Documentation](/docs/) + +## License + +This library is licensed under the Apache 2.0 License. \ No newline at end of file diff --git a/bosh-release/aws-servicebroker-boshrelease b/bosh-release/aws-servicebroker-boshrelease new file mode 160000 index 00000000..582e74a8 --- /dev/null +++ b/bosh-release/aws-servicebroker-boshrelease @@ -0,0 +1 @@ +Subproject commit 582e74a8f874ba90c3311e3005d0bb50a7c4509b diff --git a/buildspec.yml b/buildspec.yml new file mode 100644 index 00000000..4b104423 --- /dev/null +++ b/buildspec.yml @@ -0,0 +1,26 @@ +version: 0.2 + +env: + variables: + LATEST_TAG: latest + parameter-store: + DOCKER_REPO: "/k8s/aws-servicebroker/docker-repo" + DOCKER_REGISTRY: "/k8s/aws-servicebroker/docker-registry" + DOCKER_LOGIN: "/k8s/aws-servicebroker/docker-login" + DOCKER_USER: "/k8s/aws-servicebroker/docker-user" + VERSION_TAG: "/k8s/aws-servicebroker/image-version" + +phases: + build: + commands: + - echo Entered the build phase... + - DOCKER_PATH="$DOCKER_REGISTRY/$DOCKER_REPO" + - echo "DOCKER_PATH=${DOCKER_PATH}" + - echo "DOCKER_USER=${DOCKER_USER}" + - echo "TAG=${VERSION_TAG}" + - TAG="$VERSION_TAG" + - docker build -t $DOCKER_PATH:$LATEST_TAG ./ + - echo $DOCKER_LOGIN | docker login $DOCKER_REGISTRY -u $DOCKER_USER --password-stdin + - docker push $DOCKER_PATH:$LATEST_TAG + - docker tag $DOCKER_PATH:$LATEST_TAG $DOCKER_PATH:$TAG + - docker push $DOCKER_PATH:$TAG diff --git a/cmd/servicebroker/main.go b/cmd/servicebroker/main.go new file mode 100644 index 00000000..c061e06b --- /dev/null +++ b/cmd/servicebroker/main.go @@ -0,0 +1,149 @@ +package main + +import ( + "context" + "flag" + "fmt" + "os" + "os/signal" + "path" + "regexp" + "strconv" + "syscall" + + "github.com/golang/glog" + prom "github.com/prometheus/client_golang/prometheus" + + "github.com/awslabs/aws-service-broker/pkg/broker" + "github.com/jaymccon/osb-broker-lib/pkg/server" + "github.com/pmorie/osb-broker-lib/pkg/metrics" + "github.com/pmorie/osb-broker-lib/pkg/rest" +) + +var options struct { + broker.Options + + Port int + Insecure bool + TLSCert string + TLSKey string + TLSCertFile string + TLSKeyFile string + EnableBasicAuth bool + BasicAuthUser string + BasicAuthPassword string +} + +func init() { + flag.IntVar(&options.Port, "port", 8443, "use '--port' option to specify the port for broker to listen on") + flag.BoolVar(&options.Insecure, "insecure", false, "use --insecure to use HTTP vs HTTPS.") + flag.StringVar(&options.TLSCertFile, "tls-cert-file", "", "File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert).") + flag.StringVar(&options.TLSKeyFile, "tls-private-key-file", "", "File containing the default x509 private key matching --tls-cert-file.") + flag.StringVar(&options.TLSCert, "tlsCert", "", "base-64 encoded PEM block to use as the certificate for TLS. 
If '--tlsCert' is used, then '--tlsKey' must also be used.") + flag.StringVar(&options.TLSKey, "tlsKey", "", "base-64 encoded PEM block to use as the private key matching the TLS certificate.") + flag.BoolVar(&options.EnableBasicAuth, "enableBasicAuth", false, "Enable HTTP Basic Authentication") + flag.StringVar(&options.BasicAuthUser, "basicAuthUser", "", "HTTP Basic Authentication user") + flag.StringVar(&options.BasicAuthPassword, "basicAuthPass", "", "HTTP Basic Authentication password") + broker.AddFlags(&options.Options) + flag.Parse() +} + +func main() { + if err := run(); err != nil && err != context.Canceled && err != context.DeadlineExceeded { + glog.Fatalln(err) + } +} + +func run() error { + ctx, cancelFunc := context.WithCancel(context.Background()) + defer cancelFunc() + go cancelOnInterrupt(ctx, cancelFunc) + + return runWithContext(ctx) +} + +func runWithContext(ctx context.Context) error { + if flag.Arg(0) == "version" { + fmt.Printf("%s/%s\n", path.Base(os.Args[0]), "0.1.0") + return nil + } + if (options.TLSCert != "" || options.TLSKey != "") && + (options.TLSCert == "" || options.TLSKey == "") { + fmt.Println("To use TLS with specified cert or key data, both --tlsCert and --tlsKey must be used") + return nil + } + + matched, _ := regexp.MatchString("^[[:alnum:]]*$", options.BrokerID) + if !matched { + glog.Fatalln("brokerId can only contain letters and numbers") + } + + addr := ":" + strconv.Itoa(options.Port) + + clients := broker.AwsClients{ + NewCfn: broker.AwsCfnClientGetter, + NewS3: broker.AwsS3ClientGetter, + NewSsm: broker.AwsSsmClientGetter, + NewSts: broker.AwsStsClientGetter, + NewDdb: broker.AwsDdbClientGetter, + NewIam: broker.AwsIamClientGetter, + } + + awsBroker, err := broker.NewAWSBroker(options.Options, broker.AwsSessionGetter, clients, broker.GetCallerId, broker.UpdateCatalog, broker.PollUpdate) + if err != nil { + glog.Fatalln(err) + } + + // Prom. metrics + reg := prom.NewRegistry() + osbMetrics := metrics.New() + reg.MustRegister(osbMetrics) + + api, err := rest.NewAPISurface(awsBroker, osbMetrics) + if err != nil { + return err + } + if options.BasicAuthUser == "" { + options.BasicAuthUser = os.Getenv("SECURITY_USER_NAME") + } + if options.BasicAuthPassword == "" { + options.BasicAuthPassword = os.Getenv("SECURITY_USER_PASSWORD") + } + auth := server.BasicAuth{User: options.BasicAuthUser, Pass: options.BasicAuthPassword} + s := server.New(api, reg, options.EnableBasicAuth, auth.Secret) + + glog.Infof("Starting broker!") + + if options.Insecure { + err = s.Run(ctx, addr) + } else { + if options.TLSCert != "" && options.TLSKey != "" { + glog.V(4).Infof("Starting secure broker with TLS cert and key data") + err = s.RunTLS(ctx, addr, options.TLSCert, options.TLSKey) + } else { + if options.TLSCertFile == "" || options.TLSKeyFile == "" { + glog.Error("unable to run securely without TLS Certificate and Key. 
Please review options and if running with TLS, specify --tls-cert-file and --tls-private-key-file or --tlsCert and --tlsKey.") + return nil + } + glog.V(4).Infof("Starting secure broker with file based TLS cert and key") + err = s.RunTLSWithTLSFiles(ctx, addr, options.TLSCertFile, options.TLSKeyFile) + } + } + return err +} + +func cancelOnInterrupt(ctx context.Context, f context.CancelFunc) { + term := make(chan os.Signal) + signal.Notify(term, os.Interrupt, syscall.SIGTERM) + + for { + select { + case <-term: + glog.Infof("Received SIGTERM, exiting gracefully...") + f() + os.Exit(0) + case <-ctx.Done(): + os.Exit(0) + } + } +} diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 00000000..40330778 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,201 @@ +AWS Service Broker Documentation +================================ + +![Architecture](/docs/images/architecture.png) +*Illustrates how application platforms can use the broker to provision and bind to AWS services.* + +## Installation + +* [Prerequisites](/docs/install_prereqs.md) +* [Installation on OpenShift](/docs/getting-started-openshift.md) +* [Installation on Pivotal Cloud Foundry](/docs/getting-started-pcf.md) +* [Installation on Kubernetes](/docs/getting-started-k8s.md) + +## Provisioning and binding services + +Documentation for all of the available plans, their parameters and binding outputs are available in the +[AWS Service Broker GitHub repository](https://github.com/awslabs/aws-servicebroker/tree/master/templates) + +## Configuration tasks + +### Passing In AWS credentials via parameters + +The **aws_access_key**, **aws_secret_key** can be passed in as parameters to the provision request. + +If provided, they will be used in place of the aws service catalog process role. + +These parameters will be stored in the DynamoDB backend. Currentently STS generated credentials +are not supported as there is no way to update them upon expiration via the +open service broker spec. + +For example + +``` +# svcat provision my-instance-name \ + -n my-app \ + --class my-instance-class \ + --plan prd \ + -p VpcId=vpc-123451234512341234,aws_access_key=bacdbcadbcadbcad,aws_secret_key=abcdabcdabcdabcdabcdabcdabcdabcd + Name: my-instance-name + Namespace: my-app + Status: + Class: my-instance-class + Plan: prd + +Parameters: + Name: my-ingress-sg-1535425552 + VpcId: vpc-123451234512341234 + aws_access_key: bacdbcadbcadbcad + aws_secret_key: abcdabcdabcdabcdabcdabcdabcdabcd +``` + +### Managing Resources Via Assumed Role + +The aws-service-broker has the ability to assume a role for all resources it manages. + +This role can be in the same account, or a separate target account. + +To setup the role, assume admin credentials in the account where the role will reside +and create the role for the aws-service-broker to assume. + +``` +service_broker_account_id=123456654321 # role where the service broker will run, will be the same as the target if in single account + +aws cloudformation create-stack \ + --stack-name AwsServiceBrokerWorkerRole \ + --template-body file://setup/aws-service-broker-worker.json \ + --capabilities CAPABILITY_NAMED_IAM \ + --parameters ParameterKey=ServiceBrokerAccountId,ParameterValue=$service_broker_account_id +``` + +To do you this you must ensure that the role the **aws-service-broker** is running allows it to assume the target role. 
+ +Get the ARN: + +``` +aws cloudformation describe-stacks \ + --stack-name AwsServiceBrokerWorkerRole | jq -r .Stacks[0].Outputs[0].OutputValue +``` + +Ensure the service broker role has the below permissions: + +```json +{ + "Action": "sts:AssumeRole", + "Resource": "arn:aws:iam::123456654321:role/aws-service-broker-worker", + "Effect": "Allow" +} +``` + +Provide **target_account_id** and **target_role_name** as parameters to the provision command +to tell the service broker to assume the role in another account to provision. + +``` +svcat provision my-ingress-api-gw \ + -n my-app \ + --class my-class \ + --plan prd \ + -p VpcId=vpc-1234567887654321,target_account_id=123456654321,target_role_name=aws-service-broker-worker +```` + +### Overriding the default AWS region + +The **region** can be passed in as a parameter to the provision request. + +If provided, it will be used in place of the aws service catalog process region. + +### Parameter Overrides + +The broker can override parameter values using override records in the metadata DynamoDB table. +The broker provides a hierarchy of parameter overrides to prescribe values for common parameters like AWS credentials, region, +VPC ID or any other parameter in a service plan. + +An override can be broker wide, or only apply to a particular org/cluster, space/namespace, or ServiceClass. + +The structure of an override record is: + +```json +{ + "id": "", + "userid": "", + "parameter_name": "", + "parameter_value": "", + "service_class": "", + "org_guid": "", + "space_guid": "", + "cluster_id": "", + "namespace": "" +} +``` + +> Notes: +> * `id`, `userid`, `parameter_name` and `parameter_value` are required. +> * `org_guid` and `space_guid` are Cloud Foundry specific, and cannot be combined with `cluster_id` and `namespace` (Kubernetes specific) +> * If a parameter is overridden globally (none of the optional fields are provided) and the `-prescribeOverrides` flag is passed, it will be removed from the available parameters presented by the application platform's UI +> * cluster_id for kubernetes is [generated by the service catalog](https://github.com/kubernetes-incubator/service-catalog/blob/acf976260e505bedb10b7c8f18efc69833714ecc/pkg/controller/controller.go#L1317), and will change if the service catalog is removed and reinstalled. + +The order of precedence for parameter values is: + +1. Plan default +2. User provided +3. Global Overrides +4. ServiceClass overrides +5. Org/Cluster overrides +6. Org/Cluster + ServiceClass overrides +7. Space/Namespace overrides +8. Space/Namespace + ServiceClass overrides +9. Org/Cluster + Space/Namespace overrides +10. 
Org/Cluster + Space/Namespace + ServiceClass overrides + +#### Examples + +> Note: You need the ossp-uuid and aws-cli command line tools to run these examples + +**Set a global override to provision into us-west-2 region:** + +```bash +ACCOUNT_ID=123456789012 # Account ID for the AWS account that the broker user/role is in +BROKER_ID=aws-service-broker # brokerId provided as an argument when launching the broker, if not specified it defaults to aws-service-broker +DYNAMODB_TABLE=awssb # name of broker metadata table +DYNAMODB_REGION=us-east-1 # region that the dynamo table is in +cat < "./override.json" +{ + "id": { "S": "$(uuid)" }, + "userid": { "S": "$(uuid -v 5 00000000-0000-0000-0000-000000000000 ${ACCOUNT_ID}${BROKER_ID})" }, + "parameter_name": { "S": "region" }, + "parameter_value": { "S": "us-west-2" } +} +EOF +aws dynamodb put-item --table-name ${DYNAMODB_TABLE} --region ${DYNAMODB_REGION} --item file://override.json +``` + +**Set `myns` namespace to provision into us-west-2 region:** + +```bash +ACCOUNT_ID=123456789012 # Account ID for the AWS account that the broker user/role is in +BROKER_ID=aws-service-broker # brokerId provided as an argument when launching the broker, if not specified it defaults to aws-service-broker +DYNAMODB_TABLE=awssb # name of broker metadata table +DYNAMODB_REGION=us-east-1 # region that the dynamo table is in +CLUSTER_ID=$(kubectl get cm cluster-info -n catalog -o jsonpath='{$.data.id}') # Ensure your kubectl is set to the desired cluster +NAMESPACE=myns +cat < "./override.json" +{ + "id": { "S": "$(uuid)" }, + "userid": { "S": "$(uuid -v 5 00000000-0000-0000-0000-000000000000 ${ACCOUNT_ID}${BROKER_ID})" }, + "parameter_name": { "S": "region" }, + "parameter_value": { "S": "us-west-2" }, + "cluster_id": { "S": "${CLUSTER_ID}" }, + "namespace": { "S": "${NAMESPACE}" } +} +EOF +aws dynamodb put-item --table-name ${DYNAMODB_TABLE} --region ${DYNAMODB_REGION} --item file://override.json +``` + +### Custom Catalog + +You can configure the broker to point to your own S3 bucket (which can be private or public) containing +CloudFormation templates and ServiceClass specs. The bucket, prefix and AWS region that the broker scans for ServiceClasses is configured using the +`-s3Bucket`, `-s3Key` and `-s3Region` commandline switches. 
+ +* [Example -spec.yaml file](/examples/example-spec.yaml) + diff --git a/docs/getting-started-k8s.md b/docs/getting-started-k8s.md new file mode 100644 index 00000000..72452f29 --- /dev/null +++ b/docs/getting-started-k8s.md @@ -0,0 +1,33 @@ +# Getting Started Guide - Kubernetes + +This guide uses helm, for documentation on installing the helm client see [https://docs.helm.sh/using_helm/#install-helm](https://docs.helm.sh/using_helm/#install-helm) + + +### Installing Kubernetes Service Catalog + +```bash +# Install helm and tiller into the cluster +helm init +# Wait until tiller is ready before moving onuntil kubectl get pods -n kube-system -l name=tiller | grep 1/1; do sleep 1; done + +kubectl create clusterrolebinding tiller-cluster-admin \ + --clusterrole=cluster-admin \ + --serviceaccount=kube-system:default +# Adds the chart repository for the service catalog +helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com +# Installs the service catalog +helm install svc-cat/catalog --name catalog --namespace catalog +``` + +### Installing the AWS Service Broker + +```bash +# Add the service broker chart repository +helm repo add aws-sb https://awsservicebroker.s3.amazonaws.com/charts + +# Show the available variables for the chart +helm inspect aws-sb/aws-servicebroker + +# Minimal broker install, assuming defaults above. Add flags to set credentials, region, etc +helm install aws-sb/aws-servicebroker -n aws-servicebroker -ns aws-sb +``` diff --git a/docs/getting-started-openshift.md b/docs/getting-started-openshift.md new file mode 100644 index 00000000..5f54d08d --- /dev/null +++ b/docs/getting-started-openshift.md @@ -0,0 +1,645 @@ +# Getting Started Guide - OpenShift + +The AWS Service Broker is now Integrated into the Openshift AWS Quickstart, which provides an easy to deploy, production ready, Openshift Container Platform deployment. see [here](https://aws.amazon.com/quickstart/architecture/openshift/) for more details. + +## Overview + +This guide describes how to configure an OpenShift cluster with the capability to deploy AWS services. + +### Terminology + +The following terms and abbreviations are used throughout the document. + +* **Service Catalog** is the component which finds and presents the users the list of services which the user has access to. It also gives the user the ability to provision new instances of those services and provide a way to to bind the provisioned services to existing applications. +* **Service Brokers** are the components that manages a set of capabilities in the cloud infrastructure, and provides the service catalog with the list of services, via implementing the Open Service Broker API +* **AWS Broker** is the Red Hat's OpenShift Org's implementation of the service broker for Amazon Services. +* **Ansible Playbook Bundle (APB)** is a application definition (meta-container) used to define and deploy applications. + +### AWS Services available +After completing the steps in this guide, the following AWS services will be available from the OpenShift Service Catalog as APBs. + +* Simple Queue Service (SQS) +* Simple Notification Service (SNS) +* Route 53 +* Relational Database Service (RDS) +* Elastic MapReduce (EMR) +* Simple Cloud Storage Service (S3) +* ElastiCache +* Redshift +* DynamoDB +* Athena + + +## Requirements + +The following are required to provision AWS services from the OpenShift Service Catalog. 
+ +* OpenShift Container Platform (OCP) or Origin v3.7 +* Docker +* Service Catalog +* AWS Service Broker configured with an appropriate registry (e.g. [docker.io/awsservicebroker](https://hub.docker.com/u/awsservicebroker/)) +* APB Prerequisites (if applicable) + +Instructions below will guide you in deploying these components in production and development environments. + +## Before Deploying the AWS Broker + +### Create an AWS Access Key for the AWS Broker to use + +The AWS Broker requires an AWS Access Key to provision AWS services. See the [AWS IAM documentation](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for information on creating an Access Key. + +Keep track of the created key for later in the deployment process. + +### Create an IAM Role for the AWS Broker to assume + +When the AWS Broker is provisioning services, it assumes an IAM Role to access CloudFormation. The AWS Access Key you've just created needs access to this CloudFormation IAM role so that the AWS Broker can assume the role for service provisioning tasks. + +Follow these instructions to manually create a compatible IAM role. + +1. Login to the AWS Management Web Console +1. Click "Services → IAM" +1. Click "Roles" in the Left column +1. Click "Create Role" +1. On the "Select type of trust entity" screen, select "CloudFormation" +1. Then click "Next: Permissions" to continue +1. Select an appropriate permission level (select "AdministratorAccess" to give the broker full permissions) +1. Click "Next: Review" to continue +1. Enter the desired IAM Role Name (e.g. "aws-broker-cloudformation"), and click "Create Role" + +Once you have completed creating the role, you can get its ARN by going back to the "Services → IAM" and clicking on "Roles", then selecting your newly created Role. + +The role ARN will have the following format: + +``` +arn:aws:iam::375558675309:role/my-role-name +``` + +Keep track of role ARN for later during the deployment process. + + +## Choosing a Deployment Type + +### Production + +[Jump to "Production" deployment instructions.](#production-deployment-instructions) + + * For production workloads + * Uses an [OpenShift template](https://s3.amazonaws.com/awsservicebroker/scripts/deploy-awsservicebroker.template.yaml) to deploy the AWS Broker into an existing OpenShift cluster + * Can be quickly deployed on an existing OpenShift cluster + * Only way to run with on-premises multi-node OpenShift cluster + * Requires manually running the OpenShift installer + * May require additional knowledge of OpenShift + + +### Development + + [Jump to "Development" deployment instructions.](#development-deployment-instructions) + + * For development and testing + * Uses [CatASB](https://github.com/fusor/catasb) to deploy OpenShift, Service Catalog, and AWS Broker + * Relaxed security settings by default + * Not for production workloads + * Provides a quick way to reset environment (OpenShift, Service Catalog, AWS Broker) to latest available. + * Supports deploying OpenShift cluster onto AWS EC2 + + + +## Production Deployment Instructions +### Step 1: Deploy an OpenShift cluster configured to run the Service Catalog +Refer to OpenShift documentation for instructions on deploying an OpenShift 3.7 cluster. 
+ + * [OpenShift Origin Documentation](https://docs.openshift.org/) + * [OpenShift Container Platform Documentation](https://access.redhat.com/documentation/en-us/openshift_container_platform/) + +Before proceeding to Step 2, set up the following: + * OpenShift 3.7 cluster configured to run the Service Catalog + * OpenShift Persistent Volume (PV) configured and available for use by AWS Broker (1 GiB recommended) + +### Step 2: Add the AWS Broker to an OpenShift Cluster +#### The AWS Broker Deployment Template + +The simplest way to load the AWS Broker onto an existing OpenShift cluster is with [deploy-awsservicebroker.template.yaml](https://s3.amazonaws.com/awsservicebroker/scripts/deploy-awsservicebroker.template.yaml), an OpenShift template describing the components of an AWS Broker deployment. + +The AWS Broker template [deploy-awsservicebroker.template.yaml](https://s3.amazonaws.com/awsservicebroker/scripts/deploy-awsservicebroker.template.yaml) has many configurable parameters, and requires several SSL certificates. + +Use the [helper script](https://s3.amazonaws.com/awsservicebroker/scripts/deploy_aws_broker.sh) described in the next section to quickly fill out the recommended values. Important template parameters are described below: + + * `DOCKERHUB_ORG` - Organization from which AWS service APB images will be loaded. Set to`"awsservicebroker"`. + * `ENABLE_BASIC_AUTH` - Changes authentication from bearer-token auth to basic auth. Set to `"false"`. + * `NAMESPACE` - Namespace to deploy the broker in. Set to `"aws-service-broker"`. + * `ETCD_TRUSTED_CA_FILE` - File path of CA certificate for AWS Broker etcd store. + * `BROKER_CLIENT_CERT_PATH` - File path of AWS Broker client certificate. + * `BROKER_CLIENT_KEY_PATH` - File path of AWS Broker client key. + +#### Using the Helper Script to Process the AWS Broker Deployment Template +The easiest way to deploy the contents of the AWS Broker deployment template is to run the [helper script](https://s3.amazonaws.com/awsservicebroker/scripts/deploy_aws_broker.sh) which will generate required SSL certificates and provide required parameters to the template. + + +First, create a directory containing the deployment template and helper script. +```bash +mkdir -p ~/aws_broker_install +cd ~/aws_broker_install +wget https://s3.amazonaws.com/awsservicebroker/scripts/deploy-awsservicebroker.template.yaml +wget https://s3.amazonaws.com/awsservicebroker/scripts/deploy_aws_broker.sh +``` + +Before running the helper script, verify that the variables near the top of the file are set correctly. +```bash +vi deploy_aws_broker.sh +``` + +```bash +CLUSTER_ADMIN_USER="system:admin" # OpenShift user with Cluster Administrator role. +TEMPLATE_FILE="./deploy-awsservicebroker.template.yaml" # Path to AWS Broker deploy template +DOCKERHUB_ORG=${DOCKERHUB_ORG:-"awsservicebroker"} # Dockerhub organization where AWS APBs reside. +``` + +Finally, run the script to deploy the AWS Broker. +```bash +chmod +x deploy_aws_broker.sh +./deploy_aws_broker.sh +``` + +Once the AWS Broker is deployed, it should be visible from the OpenShift namespace `"aws-service-broker"`. You should also see AWS Services appear in the OpenShift Service Catalog. + +## Development Deployment Instructions + +### CatASB - Introduction +[CatASB](https://github.com/fusor/catasb) is a collection of Ansible playbooks which will automate the creation of an OpenShift environment containing the Service Catalog and the AWS Broker. 
_Unlike_ the production deployment steps, these steps will automatically handle creation of the OpenShift cluster. + +To deploy this way, you will first edit a configuration YAML file to customize the automation to your needs. + +First, clone the `catasb` git repository + + +```bash +git clone https://github.com/fusor/catasb.git +cd catasb +``` + + +Copy the '`my_vars.yml.example`' to '`my_vars.yml`', and edit the file. '`my_vars.yml`' is your custom configuration file. Any variable defined in this file will overwrite its definition anywhere else. + + +```bash +cd config +cp my_vars.yml.example my_vars.yml +``` + + +**Review**, **uncomment**, and **modify** variables that you wish to customize. + + +```bash +vi my_vars.yml +``` + + +Below are some of the variables that you may wish to override: + + +```yaml +dockerhub_org: awsservicebroker + +origin_image_tag: latest +openshift_client_version: latest + +deploy_asb: False +deploy_awsservicebroker: True + +aws_role_arn_name: "aws-broker-cloudformation" +``` + +`origin_image_tag` - version of the origin to be used ([click here for the list of valid tags](https://hub.docker.com/r/openshift/origin/tags/)) + +`openshift_client_version` - default is "latest", should match `origin_image_tag` version. + +`awsservicebroker_broker_template_dir` - location of AWS Broker Config file + +`deploy_asb` - deploy Ansible Service Broker, defaults to "True" + +`deploy_awsservicebroker` - deploy AWS Broker, defaults to "False" + +`aws_role_arn_name` - IAM Role Name for CloudFormation + +### CatASB - Associating an AWS Access and Secret key pair for all APBs + +If you wish to use only one set of AWS Access and Secret key pair for for all AWS Service APBs, you can set a few environment variables **_BEFORE_** running the CatASB scripts, and the secrets will be **_automatically_** created for the APBs to consume. + +In the terminal, export the values for the AWS Access and Secret keys as shown below + + +```bash +$ export AWS_ACCESS_KEY_ID="" +$ export AWS_SECRET_ACCESS_KEY="" +``` + +With these exported, the APBs will no longer require the user to input the Access and Secret key parameters during the APB provisioning step, since those parameter fields will not be visible. + +If you wish to remove the automatically created secret later on, login to the OpenShift Console, visit ` aws-service-broker→ resources → secrets` → `aws-custom-access-key-pair` secret and select `Actions → Delete`. + +**Note**: After deleting the secret, manual entry of the AWS access and secret key parameters will be required during the AWS service provisioning step. + + + +### CatASB - Deploying to the local machine + +This will do an '`oc cluster up`', and install/configure the Service Catalog with the AWS broker. + +Navigate into the `catasb/local/` folder + + +```bash +cd catasb/local/linux # for Linux OS + +or + +cd catasb/local/mac # for Mac OS +``` + + +If you are running in the Mac OS, review/edit `catasbconfig/mac_vars.yml` + + +To facilitate automatic creation of secrets for the AWS Access and AWS Secret Key parameters for all your APBs, do the following. + + +```bash +export AWS_ACCESS_KEY_ID= +export AWS_SECRET_ACCESS_KEY= +``` + + +For more information on creating secrets for your APB parameters [Click Here] + +Run the `setup` script + + +```bash +./run_setup_local.sh # for Linux OS + +or + +./run_mac_local.sh # for Mac OS +``` + + +If the CatASB is successful, the script will eventually output the details of the OpenShift Cluster. 
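+
+As a quick sanity check (assuming the broker was deployed into the `aws-service-broker` namespace, as in the production instructions above), you can verify that the broker pods came up:
+
+```bash
+# All pods in the broker namespace should reach the Running state
+oc get pods -n aws-service-broker
+```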
+
+
+#### Troubleshooting
+
+
+When visiting the cluster URL (e.g. [https://172.17.0.1:8443/console/](https://172.17.0.1:8443/console/)), you may find that you are not able to connect. Check your firewall rules to make sure all of the required OpenShift ports are permitted. [Click here to see the list of ports](https://docs.openshift.com/container-platform/latest/install_config/install/prerequisites.html#required-ports).
+
+
+Try disabling your firewall, reset your environment, and see if you can reach the cluster URL:
+
+
+```bash
+sudo iptables -F
+./reset_environment.sh
+```
+
+
+
+### CatASB - Deploying to EC2 (single-node)
+
+This environment uses "`oc cluster up`" on a single EC2 instance, and installs the OpenShift components from RPMs.
+
+Navigate into the `catasb/ec2` folder:
+
+
+```bash
+cd catasb/ec2
+```
+
+
+Define the following environment variables for your AWS account (see the table below):
+
+| Environment Variable | Default Values |
+| --- | --- |
+| `AWS_ACCESS_KEY_ID` | No Default |
+| `AWS_SECRET_ACCESS_KEY` | No Default |
+| `AWS_SSH_KEY_NAME` | splice |
+| `TARGET_DNS_ZONE` | ec2.dog8code.com |
+| `OWNERS_NAME` | `whoami` |
+| `TARGET_SUBDOMAIN` | `${OWNERS_NAME}` |
+| `AWS_SSH_PRIV_KEY_PATH` | No Default |
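+
+For example, the variables from the table above can be exported in your shell before running the scripts that follow. This is only an illustrative sketch; every value is a placeholder that must be replaced with your own key pair, Route 53 zone, and SSH key details:
+
+```bash
+export AWS_ACCESS_KEY_ID="AKIA..."                            # your IAM access key
+export AWS_SECRET_ACCESS_KEY="..."                            # your IAM secret key
+export AWS_SSH_KEY_NAME="my-ec2-keypair"                      # name of an existing EC2 key pair
+export AWS_SSH_PRIV_KEY_PATH="$HOME/.ssh/my-ec2-keypair.pem"
+export TARGET_DNS_ZONE="example.com"                          # a Route 53 hosted zone you control
+export OWNERS_NAME="$(whoami)"
+export TARGET_SUBDOMAIN="${OWNERS_NAME}"
+```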
+ + +Setup the AWS network, and the EC2 instance: + + +```bash +./run_create_infrastructure.sh +``` + + +The script will output the details of the AWS environment + +Next, install and configure OpenShift, service catalog, and the broker + + +```bash +./run_setup_environment.sh +``` + + +To terminate the EC2 instance and to remove/clean-up the AWS network, run the following + + +```bash +./terminate_instances.sh +``` + + + +All of the scripts above will output the details of the OpenShift Cluster. However, if you wish to review those details at any time, you can run the following: + + +```bash +./display_information.sh +``` + + + +### CatASB - OpenShift Web Console Login + +When you visit the cluster URL ([https://172.17.0.1:8443/console/](https://172.17.0.1:8443/console/) is default for local CatASB) you should see a login screen as shown below. The default login for CatASB is `admin` username with `admin` password. + +![OpenShift Login](/docs/images/openshift-login.png) + +After login, you will be greeted with the following main screen. + +![OpenShift Service Catalog](/docs/images/service-catalog.png) + + +## Using Secrets to Hide Parameters from Service Catalog Users + +Many of the AWS Service APBs share a common set of required parameters (e.g. `AWS Access Key`, `AWS Secret Key`, `CloudFormation Role ARN`, `Region`, `VPC ID`) which a cluster administrator may want to hide from the user for security or simplicity purposes. Using AWS Broker secrets, cluster administrators can hide chosen parameters, and instead opt to manually designate preset values per-service or in general. + +To hide selected AWS Service parameters from Service Catalog users, a cluster administrator must create secret(s) in the 'aws-service-broker' namespace. Once secrets containing parameter presets are created and associated with an APB, those parameters will NOT appear during the normal APB's launching process. This means that the user will not even "see" that parameter option to enter the value for, since they have already been set and created as a secret. Those parameter values will automatically be filled with values created in the secret. + +Follow the steps below to manually create and configure secrets for your APBs. When deploying with CatASB, secrets containing the `AWS Access Key` and `AWS Secret Key` will be created automatically if appropriate environment variables are set before running. + +### Manually Creating Secrets to Autofill AWS Service Parameters + +Let's consider a scenario in which you haven't yet set `AWS_ACCESS_KEY_ID` or `AWS_SECRET_ACCESS_KEY` as AWS Broker secrets, and that you wish to create an appropriate secret now so that users provisioning services won't have to know these details. + +Start by creating a secrets file. The following snippet shows example contents of a secret-containing YAML file, `aws-secret.yml`. + + +```yaml +--- +apiVersion: v1 +kind: Secret +metadata: + name: aws-secret +stringData: + aws_access_key: "changeme" + aws_secret_key: "changeme" +``` + + +**Note**: The named values (`aws_access_key` and `aws_secret_key`) in the snippet's `stringData` section **MUST** be equal to the parameter names inside of the AWS Service APB that you wish to receive the secret value. If the names do not match exactly, the parameter values will NOT receive the secret. + +Next, create the secret in the "`aws-service-broker`" namespace + +```bash +oc create -f aws-secret.yml -n aws-service-broker +``` + +You may create as many AWS Broker secrets as you like. 
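+
+For example, a second secret could preset other commonly shared parameters, such as the region or VPC. The secret name and the parameter key names below are purely illustrative; as noted above, the keys must exactly match the parameter names defined in the APBs you want them applied to:
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: aws-network-defaults
+stringData:
+  # illustrative keys - replace with the actual APB parameter names
+  region: "us-east-1"
+  vpc_id: "vpc-0123456789abcdef0"
+```
+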
+Simply repeat these steps for each secret.
+
+You can verify that the secrets were created in the OpenShift Web Console by visiting the `resources → secrets` section in the `aws-service-broker` namespace.
+
+Now we want to configure the broker to _use_ the "`aws-secret`" secret that we just created and have its values consumed by our APBs.
+
+To do so, edit the broker's `configmap` by issuing the following command:
+
+
+```bash
+oc edit configmap -n aws-service-broker
+```
+
+
+Search for the following section of the `configmap`:
+
+
+```yaml
+  broker:
+    dev_broker: True
+    bootstrap_on_startup: true
+    refresh_interval: "24h"
+    launch_apb_on_bind: False
+    output_request: False
+    recovery: True
+    ssl_cert_key: /etc/tls/private/tls.key
+    ssl_cert: /etc/tls/private/tls.crt
+    auto_escalate: True
+    auth:
+      - type: basic
+        enabled: False
+```
+
+
+Then add a **secrets** section that follows this syntax:
+
+
+```yaml
+  secrets:
+    - {apb_name: dh-myAPB, secret: aws-secret, title: aws-secret}
+```
+
+
+The `apb_name` follows the pattern above:
+
+* "`dh`" means the APB image comes from Dockerhub
+* `secret`/`title` is the name of your secret
+
+The modified configmap will look as follows:
+
+
+```yaml
+  broker:
+    dev_broker: True
+    bootstrap_on_startup: true
+    refresh_interval: "24h"
+    launch_apb_on_bind: False
+    output_request: False
+    recovery: True
+    ssl_cert_key: /etc/tls/private/tls.key
+    ssl_cert: /etc/tls/private/tls.crt
+    auto_escalate: True
+    auth:
+      - type: basic
+        enabled: False
+  secrets:
+    - {apb_name: dh-sqs, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-sns, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-r53, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-rdsmariadb, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-rdspostresql, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-rdsmysql, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-emr, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-redshift, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-elasticache, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-dynamodb, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-s3, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-athena, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-kinesis, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-kms, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-lex, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-polly, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-rekognition, secret: aws-secret, title: aws-secret}
+    - {apb_name: dh-translate, secret: aws-secret, title: aws-secret}
+```
+
+To make our edits take effect, **restart** the broker's `asb` pod:
+
+
+```bash
+oc rollout latest aws-asb -n aws-service-broker
+```
+
+
+Change the default `broker-relist-interval` value of the service catalog's `controller-manager` pod by editing its deployment:
+
+
+```bash
+oc edit deployment controller-manager -n kube-service-catalog
+```
+
+
+Search for the following section:
+
+
+```yaml
+    spec:
+      containers:
+      - args:
+        - -v
+        - "5"
+        - --leader-election-namespace
+        - kube-service-catalog
+        - --broker-relist-interval
+        - 5m
+```
+
+
+Then edit the `broker-relist-interval` value to `1m` as shown below:
+
+
+```yaml
+    spec:
+      containers:
+      - args:
+        - -v
+        - "5"
+        - --leader-election-namespace
+        - kube-service-catalog
+        - --broker-relist-interval
+        - 1m
+```
+
+
+The `controller-manager` pod will _automatically_ restart once you save
and exit the deployment edit screen.
+
+Review the `asb` pod's _logs_ in the `aws-service-broker` namespace. The logs should show "`filtering secrets`" entries for the APBs that you have configured secrets for.
+
+
+```bash
+[DEBUG] Filtering secrets from spec dh-sqs-apb
+```
+
+
+## General APB Tips
+
+Create a new project (namespace) to provision each of the APBs into, unless it makes sense to do otherwise.
+
+All AWS APBs require the `aws_access_key` and `aws_secret_key` parameters. These two parameters are therefore great candidates for secrets: configure the APBs to use them by defining the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables as described earlier.
+
+Most APB parameters have default values and are descriptive enough to make an educated guess about what the values should be. Many parameters are selectable from a set of valid choices. However, if any of the parameters do not make sense to you, do not provision; click "view documentation" and review the AWS service documentation until you are certain what the values should be.
+
+
+### Binding
+
+#### Provision First, Bind Later
+
+If you simply want to provision the APB and bind it to an application at a later time, do the following:
+
+* Provision the APB, but select "do not create a binding"
+* Provision other apps or APBs, but again select "do not create a binding"
+* Once all of the APBs/apps are provisioned in your namespace, select an app and create the binding from the app to the APB
+* Redeploy your app if it does not automatically redeploy after binding. Some `source-to-image` apps may need to be manually redeployed
+
+
+#### Bind During the Provisioning Step
+
+To bind an application to an APB during the provisioning step, you must already have an app or an APB that was successfully provisioned. Once you have an APB to bind to, do the following:
+
+* Provision the APB, but select "create a binding to be used later"
+* Provision apps, but do not bind the apps to the APB
+* Go to "resources → secrets", then find and click on the `binding secret`
+* Click "`Add to application`" at the top right, and select your application
+* Redeploy your app if it does not automatically redeploy
+
+
+## General Troubleshooting
+
+### Debugging connectivity issues from external traffic to VPC
+
+We've run into some cases with RDS where the connection to the RDS instance was not accessible. When this happens, look at the VPC of the RDS instance and trace it to the associated route table. Verify that the route table has a route to the internet gateway; if you don't see a reference to the IGW like the one below, add it so external traffic is allowed.
+
+![AWS VPC Dashboard](/docs/images/vpc-dashboard.png)
diff --git a/docs/getting-started-pcf.md b/docs/getting-started-pcf.md
new file mode 100644
index 00000000..9e114928
--- /dev/null
+++ b/docs/getting-started-pcf.md
@@ -0,0 +1,127 @@
+# Getting Started Guide - Pivotal Cloud Foundry
+
+*Note:* The use of the AWS Service Broker in Cloud Foundry is at an alpha stage; bugs and breaking changes between
+updates may manifest. Use of the AWS Service Broker in Pivotal Cloud Foundry is not recommended for production at this
+time.
+
+### Prerequisites
+
+#### PCF 2.1+
+
+Testing on v2.1 was done using the [Pivotal Cloud Foundry on the AWS Cloud Quick Start](https://aws.amazon.com/quickstart/architecture/pivotal-cloud-foundry/).
+Though not tested, older PCF versions may work.
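+
+If you are unsure what your foundation is running, the PCF version is shown in Ops Manager; as a rough sketch (assuming the `cf` CLI is installed and targeted at the foundation), you can also query the platform info endpoint:
+
+```bash
+cf curl /v2/info   # returns JSON that includes the Cloud Controller API version
+```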
+ +#### IAM Roles/Users + +The AWS Service Broker packages all services into CloudFormation templates that are executed by the broker. The broker +can use a role if the broker is installed into an EC2 instance with access to the ec2 metadata endpoint +(169.254.169.254). Alternatively, an IAM user and static keypair can be created for the broker to use. The IAM user/role +requires the following IAM policy: + +***Service User/Role*** +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": [ + "cloudformation:*", + "ssm:*", + "dynamodb:*", + "s3:*" + ], + "Resource": [ + "*" + ], + "Effect": "Allow" + }, + { + "Action": [ + "iam:PassRole" + ], + "Resource": [ + "arn:aws:iam::*:role/AWSServiceBrokerCFNRole" + ], + "Effect": "Allow" + } + ] +} +``` + +A [CloudFormation service role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html) +is also required, an example of a broad policy to enable all current service plans is included below, this can be scoped +down if only specific services are required: + +***CloudFormation Role*** +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": [ + "cloudformation:*", + "iam:*", + "kms:*", + "ssm:*", + "ec2:*", + "lambda:*", + "athena:*", + "dynamodb:*", + "elasticache:*", + "elasticmapreduce:*", + "rds:*", + "redshift:*", + "route53:*", + "s3:*", + "sns:*", + "sqs:*", + "polly:*", + "lex:*", + "translate:*", + "rekognition:*", + "kinesis:*" + ], + "Resource": [ + "*" + ], + "Effect": "Allow" + } + ] +} +``` + +#### DynamoDB Table + +The broker uses a DynamoDB table as a persistent store for service instances and as a distributed cache/lock. To create +the table the following command can be run using the AWS CLI: + +```bash +aws dynamodb create-table --attribute-definitions \ +AttributeName=id,AttributeType=S AttributeName=userid,AttributeType=S \ +AttributeName=type,AttributeType=S --key-schema AttributeName=id,KeyType=HASH \ +AttributeName=userid,KeyType=RANGE --global-secondary-indexes \ +'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \ +--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ +--region us-east-1 --table-name awssb +``` + +*Note:* the Service User/Role policy expects the CloudFormation role to be named AWSServiceBrokerCFNRole, if you name it +something else you will also need to update this policy to reflect the name. + +### Installation + +* Download the [AWS Service Broker Tile](https://awsservicebrokeralpha.s3.amazonaws.com/pcf/aws-service-broker-latest.pivotal) +* Login to Ops Manager and import the tile +* Complete configuration in the `AWS Service Broker Configuration` section. Take note of the following fields: + * `AWS Access Key ID` and `AWS Secret Access` - if you are using an ec2 instance role attached to the broker hosts, + specify "use-role" as the value for both fields, otherwise specify the credentials for the user created in the + prerequisites section of this guide. + * `AWS Region ` - this is the default region for the broker to deploy services into, and must match the region that the + DynamoDB table created in the prerequisisites section of this guide was created in (this will be decoupled in an upcoming update). 
+ * `Amazon S3 Bucket` - specify `awsservicebroker` + * `Amazon S3 Key Prefix` - specify `templates/latest/` + * `Amazon S3 Region` - specify `us-east-1` + * `Amazon S3 Key Suffix` - specify `-main.yaml` + * `Amazon DynamoDB table name` - specify the name of the table created in the prerequisites section of this guide, default is `awssb` + + diff --git a/docs/images/architecture.png b/docs/images/architecture.png new file mode 100644 index 00000000..376c7f14 Binary files /dev/null and b/docs/images/architecture.png differ diff --git a/docs/images/aws_servicebroker_logo.png b/docs/images/aws_servicebroker_logo.png new file mode 100644 index 00000000..db006228 Binary files /dev/null and b/docs/images/aws_servicebroker_logo.png differ diff --git a/docs/install_prereqs.md b/docs/install_prereqs.md new file mode 100644 index 00000000..26b59194 --- /dev/null +++ b/docs/install_prereqs.md @@ -0,0 +1,121 @@ +### DynamoDB Table + +Table can be created with the following aws cli command: + +```bash +aws dynamodb create-table --attribute-definitions \ +AttributeName=id,AttributeType=S AttributeName=userid,AttributeType=S \ +AttributeName=type,AttributeType=S --key-schema AttributeName=id,KeyType=HASH \ +AttributeName=userid,KeyType=RANGE --global-secondary-indexes \ +'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \ +--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ +--region us-east-1 --table-name awssb +``` + +You can customize the table name as needed and pass in your table name using –tableName + +### IAM + +By default the broker will use the same credentials for provisioning ServiceInstances and for broker operations like +fetching the catalog and reading/writing metadata to DynamoDB. + +The user or role that the broker runs as requires the following policy: +(will scope this down further before public release) + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": [ + "s3:GetObject", + "s3:ListBucket" + ], + "Resource": [ + "arn:aws:s3:::awsservicebroker/templates/*", + "arn:aws:s3:::awsservicebroker" + ], + "Effect": "Allow" + }, + { + "Action": [ + "dynamodb:PutItem", + "dynamodb:GetItem" + ], + "Resource": "arn:aws:dynamodb:::table/", + "Effect": "Allow" + }, + { + "Action": [ + "ssm:GetParameter", + "ssm:GetParameters" + ], + "Resource": "arn:aws:ssm:::parameter/asb-*", + "Effect": "Allow" + } + ] +} +``` + +The role/user used for provisioning requires additional permissions for provisioning, binding and deprovisioning ServiceInstances. By default this is the same user/role as the broker role, so can be added to that. 
+ +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "SsmForSecretBindings", + "Action": "ssm:PutParameter", + "Resource": "arn:aws:ssm:::parameter/asb-*", + "Effect": "Allow" + }, + { + "Sid": "AllowCfnToGetTemplates", + "Action": "s3:GetObject", + "Resource": "arn:aws:s3:::awsservicebroker/templates/*", + "Effect": "Allow" + }, + { + "Sid": "CloudFormation", + "Action": [ + "cloudformation:CreateStack", + "cloudformation:DeleteStack", + "cloudformation:DescribeStacks", + "cloudformation:UpdateStack", + "cloudformation:CancelUpdateStack" + ], + "Resource": [ + "arn:aws:cloudformation:::stack/aws-service-broker-*/*" + ], + "Effect": "Allow" + }, + { + "Sid": "ServiceClassPermissions", + "Action": [ + "athena:*", + "dynamodb:*", + "kms:*", + "elasticache:*", + "elasticmapreduce:*", + "kinesis:*", + "rds:*", + "redshift:*", + "route53:*", + "s3:*", + "sns:*", + "sns:*", + "sqs:*", + "ec2:*", + "iam:*", + "lambda:*" + ], + "Resource": [ + "*" + ], + "Effect": "Allow" + } + ] +} +``` + +If a custom catalog is published, this policy may need to be adapted. diff --git a/functional-testing/Dockerfile b/functional-testing/Dockerfile new file mode 100644 index 00000000..ef5c0572 --- /dev/null +++ b/functional-testing/Dockerfile @@ -0,0 +1,70 @@ +FROM debian:stretch + +RUN \ + DEBIAN_FRONTEND=noninteractive apt-get update -y && \ + DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \ + iptables \ + ebtables \ + ethtool \ + ca-certificates \ + conntrack \ + socat \ + git \ + nfs-common \ + glusterfs-client \ + cifs-utils \ + apt-transport-https \ + ca-certificates \ + curl \ + gnupg2 \ + software-properties-common \ + bridge-utils \ + ipcalc \ + aufs-tools \ + sudo \ + wget \ + openssh-client \ + vim \ + net-tools \ + && DEBIAN_FRONTEND=noninteractive apt-get clean && \ + rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* + +RUN \ + curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \ + apt-key export "9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88" | gpg - && \ + echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" >> \ + /etc/apt/sources.list.d/docker.list && \ + DEBIAN_FRONTEND=noninteractive apt-get update && \ + DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \ + docker-ce \ + && DEBIAN_FRONTEND=noninteractive apt-get clean && \ + rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* + +RUN curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64 && chmod +x minikube +ENV MINIKUBE_WANTUPDATENOTIFICATION=false +ENV MINIKUBE_WANTREPORTERRORPROMPT=false +ENV CHANGE_MINIKUBE_NONE_USER=true + +COPY fake-systemctl.sh /usr/local/bin/systemctl + +RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.1/bin/linux/amd64/kubectl && \ + chmod a+x kubectl && \ + mv kubectl /usr/local/bin + +RUN curl -sLO https://download.svcat.sh/cli/latest/linux/amd64/svcat && \ + chmod +x ./svcat && \ + mv ./svcat /usr/local/bin/ && \ + svcat install plugin + +RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh && \ + chmod +x get_helm.sh && \ + ./get_helm.sh --version v2.8.2 + +COPY start.sh /start.sh +COPY ca.yml /ca.yml +COPY aws-servicebroker /usr/bin/aws-servicebroker + +RUN chmod a+x /start.sh +RUN chmod +x /usr/bin/aws-servicebroker + +VOLUME /var/lib/docker diff --git a/functional-testing/ca.yml b/functional-testing/ca.yml new file mode 100644 index 00000000..8cdcce24 --- /dev/null +++ 
b/functional-testing/ca.yml @@ -0,0 +1,18 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + creationTimestamp: null + name: cluster-admin + annotations: + rbac.authorization.kubernetes.io/autoupdate: "true" +rules: +- apiGroups: + - '*' + resources: + - '*' + verbs: + - '*' +- nonResourceURLs: + - '*' + verbs: + - '*' diff --git a/functional-testing/fake-systemctl.sh b/functional-testing/fake-systemctl.sh new file mode 100644 index 00000000..20b5327b --- /dev/null +++ b/functional-testing/fake-systemctl.sh @@ -0,0 +1,5 @@ +#!/bin/bash +if [[ "$@" == "is-active kubelet localkube" ]]; then + exit 1 +fi +exit 0 \ No newline at end of file diff --git a/functional-testing/start.sh b/functional-testing/start.sh new file mode 100644 index 00000000..47237736 --- /dev/null +++ b/functional-testing/start.sh @@ -0,0 +1,61 @@ +#!/bin/bash +echo "______________________________________________________________________________" +echo "" +echo " [*] starting minikube..." +echo "______________________________________________________________________________" +echo "" +export CNI_BRIDGE_NETWORK_OFFSET="0.0.1.0" +dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 &> /var/log/docker.log 2>&1 < /dev/null & +/minikube start --vm-driver=none --extra-config=apiserver.Admission.PluginNames=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,GenericAdmissionWebhook,ResourceQuota < /dev/null + +c=0 +while [ $(cat /var/log/docker.log | grep -c 'docker-containerd-shim started') -lt 9 ] ; do + if [ $c -gt 60 ]; then + echo "ERROR: failed waiting for minikube to come up..." + exit 1 + fi + sleep 10 + c=$((c+1)) +done +echo "______________________________________________________________________________" +echo "" +echo " [*] starting tiller..." +echo "______________________________________________________________________________" +echo "" +kubectl config view --merge=true --flatten=true > /kubeconfig +kubectl create serviceaccount --namespace kube-system cluster-admin +kubectl create clusterrolebinding cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:cluster-admin +kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default +kubectl --namespace kube-system apply -f /ca.yml +helm init --wait +helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com + +c=0 +while [ "$(kubectl get pods -n kube-system | grep tiller | grep -c 1/1)" != "1" ] ; do + if [ $c -gt 60 ]; then + echo "ERROR: failed waiting for tiller to come up..." + exit 1 + fi + sleep 10 + c=$((c+1)) +done +echo "______________________________________________________________________________" +echo "" +echo " [*] starting service catalog..." +echo "______________________________________________________________________________" +echo "" +helm install svc-cat/catalog --name catalog --namespace catalog --wait --timeout 1200 --version 0.1.13 + +c=0 +while [ "$(kubectl get pods -n catalog | grep -c '2/2\|1/1')" != "2" ] ; do + if [ $c -gt 60 ]; then + echo "ERROR: failed waiting for service catalog to come up..." 
+ exit 1 + fi + sleep 10 + c=$((c+1)) +done + +cd / + +/bin/bash \ No newline at end of file diff --git a/packaging/cloudfoundry/resources/sb.png b/packaging/cloudfoundry/resources/sb.png new file mode 100644 index 00000000..5e40c26c Binary files /dev/null and b/packaging/cloudfoundry/resources/sb.png differ diff --git a/packaging/cloudfoundry/tile-history.yml b/packaging/cloudfoundry/tile-history.yml new file mode 100644 index 00000000..e6a434f1 --- /dev/null +++ b/packaging/cloudfoundry/tile-history.yml @@ -0,0 +1,28 @@ +--- +history: +- 0.0.1 +- 0.0.2 +- 0.0.3 +- 0.0.4 +- 0.0.5 +- 0.0.6 +- 0.0.7 +- 0.0.8 +- 0.0.9 +- 0.0.10 +- 0.0.11 +- 0.0.12 +- 0.0.13 +- 0.0.14 +- 0.0.15 +- 0.0.16 +- 0.0.17 +- 0.0.18 +- 0.0.19 +- 0.0.20 +- 0.0.21 +- 0.0.22 +- 0.0.23 +- 0.0.24 +- 0.0.25 +version: 0.1.0 diff --git a/packaging/cloudfoundry/tile.yml b/packaging/cloudfoundry/tile.yml new file mode 100644 index 00000000..d4e4bd20 --- /dev/null +++ b/packaging/cloudfoundry/tile.yml @@ -0,0 +1,106 @@ +--- +name: aws-service-broker +icon_file: resources/sb.png +label: AWS Service Broker +description: The AWS Service Broker is an open source project which allows native AWS services to be exposed directly through Cloud Foundry, and provides simple integration of AWS Services directly within the application platform. +packages: +- name: aws_sb + type: app-broker + label: AWS Service Broker + manifest: + path: resources/cfnsb + buildpack: binary_buildpack + command: > + export PARAM_OVERRIDE_${BROKER_ID}_all_all_all_aws_access_key=${AWS_ACCESS_KEY_ID} ; + export PARAM_OVERRIDE_${BROKER_ID}_all_all_all_aws_secret_key=${AWS_SECRET_ACCESS_KEY} ; + export PARAM_OVERRIDE_${BROKER_ID}_all_all_all_region=${AWS_DEFAULT_REGION} ; + ./cfnsb + -logtostderr + -brokerId="${BROKER_ID}" + -enableBasicAuth=true + -insecure=${INSECURE} + -port=${PORT} + -region=${AWS_DEFAULT_REGION} + -s3Bucket=${S3_BUCKET} + -s3Key="${S3_KEY}" + -s3Region=${S3_REGION} + -tableName=${TABLE_NAME} + -templateFilter="${TEMPLATE_FILTER}" + -tlsCert="${TLS_CERT}" + -tlsKey="${TLS_KEY}" + -prescribeOverrides ${PRESCRIBE} + -v="${VERBOSITY}" + memory: 1024M +forms: +- name: aws_sb_properties + label: AWS Service Broker Configuration + description: Required configuration to run the AWS service broker + properties: + - name: BROKER_ID + type: string + default: "awsservicebroker" + label: Broker ID + description: An ID to use for partitioning broker data in DynamoDb. if multiple brokers are used in the same AWS account, this value must be unique per broker + - name: AWS_ACCESS_KEY_ID + type: string + label: AWS Access Key ID + description: AWS IAM User Key ID to use, if left blank will attempt to use a role, if defined secret-key must also be defined + - name: AWS_SECRET_ACCESS_KEY + type: secret + label: AWS Secret Access Key + description: AWS IAM User Secret Key to use, if left blank will attempt to use a role, if defined key-id must also be defined + - name: AWS_DEFAULT_REGION + type: string + label: AWS Region + default: us-east-1 + description: AWS Region to deploy services into + - name: S3_BUCKET + type: string + label: Amazon S3 Bucket + default: awsservicebroker + description: S3 bucket containing service definititions + - name: S3_KEY + type: string + label: Amazon S3 Key Prefix + default: templates/latest + description: S3 key prefix to use when scanning for service definitions + - name: TEMPLATE_FILTER + type: string + label: Amazon S3 Key Suffix + default: -main.yaml + description: only process templates with the defined suffix. 
+ - name: PRESCRIBE + type: boolean + label: Prescribe Global Overrides + default: true + description: parameters that are overridden globally will not be available in service plans + - name: S3_REGION + type: string + label: Amazon S3 Region + default: us-east-1 + description: Region that S3 bucket resides in, if different from region to deploy resources into + - name: TABLE_NAME + type: string + label: Amazon DynamoDB table name + default: awssb + description: DynamoDB table name where broker state is stored. Multiple brokers can use the same table, but must use distinct Broker ID's to prevent them from sharing state + - name: TLS_CERT + type: string + label: TLS Certificate + description: base-64 encoded PEM block to use as the certificate for TLS. + optional: true + - name: TLS_KEY + type: string + label: TLS Key + optional: true + description: base-64 encoded PEM block to use as the private key matching the TLS certificate + - name: INSECURE + type: boolean + label: Disable SSL + default: false + description: use HTTP vs HTTPS + - name: VERBOSITY + type: integer + label: Log Verbosity + default: 5 + description: log level for V logs diff --git a/packaging/helm/aws-servicebroker/Chart.yaml b/packaging/helm/aws-servicebroker/Chart.yaml new file mode 100644 index 00000000..c18efaba --- /dev/null +++ b/packaging/helm/aws-servicebroker/Chart.yaml @@ -0,0 +1,3 @@ +name: aws-servicebroker +description: Deploys the AWS Service Broker +version: 0.0.20 diff --git a/packaging/helm/aws-servicebroker/templates/NOTES.txt b/packaging/helm/aws-servicebroker/templates/NOTES.txt new file mode 100644 index 00000000..80aba27d --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/NOTES.txt @@ -0,0 +1 @@ +For more information on usage, see https://github.com/awslabs/aws-servicebroker/docs/ diff --git a/packaging/helm/aws-servicebroker/templates/_helpers.tpl b/packaging/helm/aws-servicebroker/templates/_helpers.tpl new file mode 100644 index 00000000..cbe7fe8a --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/_helpers.tpl @@ -0,0 +1,9 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "fullname" -}} +{{- printf "%s" .Chart.Name | trunc 63 | trimSuffix "-" -}} +{{- end -}} diff --git a/packaging/helm/aws-servicebroker/templates/broker-credentials.yaml b/packaging/helm/aws-servicebroker/templates/broker-credentials.yaml new file mode 100644 index 00000000..9a857694 --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/broker-credentials.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "fullname" . }}-credentials + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +type: Opaque +data: + accesskeyid: {{ b64enc .Values.aws.accesskeyid }} + secretkey: {{ b64enc .Values.aws.secretkey }} diff --git a/packaging/helm/aws-servicebroker/templates/broker-deployment.yaml b/packaging/helm/aws-servicebroker/templates/broker-deployment.yaml new file mode 100644 index 00000000..f7897477 --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/broker-deployment.yaml @@ -0,0 +1,110 @@ +kind: Deployment +apiVersion: extensions/v1beta1 +metadata: + name: {{ template "fullname" . }} + labels: + app: {{ template "fullname" . 
}} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +spec: + replicas: 1 + selector: + matchLabels: + app: {{ template "fullname" . }} + template: + metadata: + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" + spec: + serviceAccount: {{ template "fullname" . }}-service + containers: + - name: awssb + image: {{ .Values.image }} + imagePullPolicy: {{ .Values.imagePullPolicy }} + command: + - /usr/local/bin/aws-servicebroker + args: + - --port + - "3199" + {{- if .Values.tls.cert}} + - --tlsCert + - "{{ .Values.tls.cert }}" + {{- end}} + {{- if .Values.tls.key}} + - --tlsKey + - "{{ .Values.tls.key }}" + {{- end}} + - -v + - "{{ .Values.brokerconfig.verbosity }}" + - -logtostderr + - --tls-cert-file + - "/var/run/awssb/awssb.crt" + - --tls-private-key-file + - "/var/run/awssb/awssb.key" + - --s3Bucket + - "{{ .Values.aws.bucket }}" + - --s3Key + - "{{ .Values.aws.key }}" + - --s3Region + - "{{ .Values.aws.s3region }}" + - --tableName + - "{{ .Values.aws.tablename }}" + - --brokerId + - "{{ .Values.brokerconfig.brokerid }}" + - --prescribeOverrides + - "{{ .Values.brokerconfig.prescribeoverrides }}" + ports: + - containerPort: 3199 + readinessProbe: + tcpSocket: + port: 3199 + failureThreshold: 1 + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 2 + volumeMounts: + - mountPath: /var/run/awssb + name: awssb-ssl + readOnly: true + env: + - name: AWS_ACCESS_KEY_ID + valueFrom: + secretKeyRef: + name: {{ template "fullname" . }}-credentials + key: accesskeyid + - name: AWS_SECRET_ACCESS_KEY + valueFrom: + secretKeyRef: + name: {{ template "fullname" . }}-credentials + key: secretkey + - name: AWS_DEFAULT_REGION + value: {{ .Values.aws.region }} + - name: PARAM_OVERRIDE_{{ .Values.brokerconfig.brokerid }}_all_all_all_aws_access_key + valueFrom: + secretKeyRef: + name: {{ template "fullname" . }}-credentials + key: accesskeyid + - name: PARAM_OVERRIDE_{{ .Values.brokerconfig.brokerid }}_all_all_all_aws_secret_key + valueFrom: + secretKeyRef: + name: {{ template "fullname" . }}-credentials + key: secretkey + - name: PARAM_OVERRIDE_{{ .Values.brokerconfig.brokerid }}_all_all_all_region + value: {{ .Values.aws.region }} + - name: PARAM_OVERRIDE_{{ .Values.brokerconfig.brokerid }}_all_all_all_VpcId + value: {{ .Values.aws.vpcid }} + volumes: + - name: awssb-ssl + secret: + defaultMode: 420 + secretName: {{ template "fullname" . }}-cert + items: + - key: tls.crt + path: awssb.crt + - key: tls.key + path: awssb.key diff --git a/packaging/helm/aws-servicebroker/templates/broker-service-account.yaml b/packaging/helm/aws-servicebroker/templates/broker-service-account.yaml new file mode 100644 index 00000000..6548ffdf --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/broker-service-account.yaml @@ -0,0 +1,105 @@ +--- +# Service account for the broker to run as. +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "fullname" . }}-service + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}--{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +{{- if .Values.authenticate}} +--- +# Service account for the client, in most cases the service catalog. +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "fullname" . }}-client + labels: + app: {{ template "fullname" . 
}} + chart: "{{ .Chart.Name }}--{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +--- +# Cluster role to grant service account that the broker is running as +# to have the rights it needs. +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: + name: {{ template "fullname" . }} + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}--{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +rules: +- apiGroups: ["authentication.k8s.io"] + resources: ["tokenreviews"] + verbs: ["create"] +- apiGroups: ["authorization.k8s.io"] + resources: ["subjectaccessreviews"] + verbs: ["create"] + +--- +# Cluster role to grant the client service account the rights +# to call the /v2/* URLs that the broker serves +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: + name: access-{{ template "fullname" . }} + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}--{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +rules: +- nonResourceURLs: ["/v2", "/v2/*"] + verbs: ["GET", "POST", "PUT", "PATCH", "DELETE"] + +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRoleBinding +metadata: + name: {{ template "fullname" . }}-client + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}--{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +subjects: + - kind: ServiceAccount + name: {{ template "fullname" . }}-client + namespace: {{ .Release.Name }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: access-{{ template "fullname" . }} + +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRoleBinding +metadata: + name: {{ template "fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "fullname" . }}-service + namespace: {{ .Release.Name }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "fullname" . }} +--- +# This secret needs to be a post install hook because otherwise it is skipped +# This causes the service catalog's cluster serverice broker to be unable to +# contact the broker. +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "fullname" . }} + annotations: + kubernetes.io/service-account.name: {{ template "fullname" . }}-client + "helm.sh/hook": post-install + "helm.sh/hook-weight": "-5" +type: kubernetes.io/service-account-token +{{- end }} diff --git a/packaging/helm/aws-servicebroker/templates/broker-service.yaml b/packaging/helm/aws-servicebroker/templates/broker-service.yaml new file mode 100644 index 00000000..4aeadca8 --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/broker-service.yaml @@ -0,0 +1,16 @@ +kind: Service +apiVersion: v1 +metadata: + name: {{ template "fullname" . }} + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +spec: + selector: + app: {{ template "fullname" . 
}} + ports: + - protocol: TCP + port: 443 + targetPort: 3199 diff --git a/packaging/helm/aws-servicebroker/templates/broker.yaml b/packaging/helm/aws-servicebroker/templates/broker.yaml new file mode 100644 index 00000000..a395026c --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/broker.yaml @@ -0,0 +1,21 @@ +{{- if .Values.deployClusterServiceBroker }} +apiVersion: servicecatalog.k8s.io/v1beta1 +kind: ClusterServiceBroker +metadata: + name: aws-servicebroker + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}--{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +spec: + url: https://{{ template "fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local + insecureSkipTLSVerify: true +{{- if .Values.authenticate}} + authInfo: + bearer: + secretRef: + namespace: {{.Release.Namespace}} + name: {{ template "fullname" . }} +{{- end }} +{{- end }} diff --git a/packaging/helm/aws-servicebroker/templates/ssl-certs.yaml b/packaging/helm/aws-servicebroker/templates/ssl-certs.yaml new file mode 100644 index 00000000..136452ef --- /dev/null +++ b/packaging/helm/aws-servicebroker/templates/ssl-certs.yaml @@ -0,0 +1,19 @@ +{{- $ca := genCA "svc-cat-ca" 3650 }} +{{- $cn := printf "%s" .Release.Name }} +{{- $altName1 := printf "%s.%s" .Release.Name .Release.Namespace }} +{{- $altName2 := printf "%s.%s.svc" .Release.Name .Release.Namespace }} +{{- $cert := genSignedCert $cn nil (list $altName1 $altName2) 3650 $ca }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "fullname" . }}-cert + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +type: Opaque +data: + tls.crt: {{ b64enc $cert.Cert }} + tls.key: {{ b64enc $cert.Key }} + \ No newline at end of file diff --git a/packaging/helm/aws-servicebroker/values.yaml b/packaging/helm/aws-servicebroker/values.yaml new file mode 100644 index 00000000..ec7b5978 --- /dev/null +++ b/packaging/helm/aws-servicebroker/values.yaml @@ -0,0 +1,20 @@ +image: awsservicebroker/aws-servicebroker:beta +imagePullPolicy: Always +authenticate: true +tls: + cert: + key: +deployClusterServiceBroker: true +aws: + region: us-east-1 + bucket: awsservicebroker + key: templates/latest + s3region: us-east-1 + tablename: awssb + accesskeyid: "" + secretkey: "" + vpcid: "" +brokerconfig: + verbosity: 10 + brokerid: awsservicebroker + prescribeoverrides: true \ No newline at end of file diff --git a/pkg/broker/api.go b/pkg/broker/api.go new file mode 100644 index 00000000..8693a17e --- /dev/null +++ b/pkg/broker/api.go @@ -0,0 +1,538 @@ +package broker + +import ( + "fmt" + "net/http" + "strings" + + "github.com/awslabs/aws-service-broker/pkg/serviceinstance" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/golang/glog" + osb "github.com/pmorie/go-open-service-broker-client/v2" + "github.com/pmorie/osb-broker-lib/pkg/broker" +) + +// GetCatalog is executed on a /v2/catalog/ osb api call +// https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#catalog-management +func (b *AwsBroker) GetCatalog(c *broker.RequestContext) (*broker.CatalogResponse, error) { + response := &broker.CatalogResponse{} + + var services []osb.Service + l, _ := b.listingcache.Get("__LISTINGS__") + glog.V(10).Infoln(l) + for _, s := range l.([]ServiceNeedsUpdate) { + sd, err 
:= b.catalogcache.Get(s.Name) + if err != nil { + if err.Error() == "not found" { + glog.Errorf("Failed to fetch %q from the cache, item not found", s.Name) + } else { + glog.Errorln(err) + } + } else { + services = append(services, sd.(osb.Service)) + glog.Infof("ServiceClass: %q %q", sd.(osb.Service).Name, sd.(osb.Service).ID) + for _, plan := range sd.(osb.Service).Plans { + glog.Infof(" ServicePlan %q %q", plan.Name, plan.ID) + } + } + } + osbResponse := &osb.CatalogResponse{Services: prescribeOverrides(*b, services)} + + //glog.Infof("catalog response: %#+v", osbResponse) + + response.CatalogResponse = *osbResponse + + return response, nil +} + +// Provision is executed when the OSB API receives `PUT /v2/service_instances/:instance_id` +// (https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#provisioning). +func (b *AwsBroker) Provision(request *osb.ProvisionRequest, c *broker.RequestContext) (*broker.ProvisionResponse, error) { + glog.V(10).Infof("request=%+v", *request) + + if !request.AcceptsIncomplete { + return nil, newAsyncError() + } + + // Get the context + cluster := getCluster(request.Context) + namespace := getNamespace(request.Context) + + // Get the service + service, err := b.db.DataStorePort.GetServiceDefinition(request.ServiceID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service %s: %v", request.ServiceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if service == nil { + desc := fmt.Sprintf("The service %s was not found.", request.ServiceID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Get the plan + plan := getPlan(service, request.PlanID) + if plan == nil { + desc := fmt.Sprintf("The service plan %s was not found.", request.PlanID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Get the parameters and verify that all required parameters are set + params := getPlanDefaults(plan) + availableParams := getAvailableParams(plan) + for k, v := range getOverrides(b.brokerid, availableParams, namespace, service.Name, cluster) { + params[k] = v + } + for k, v := range request.Parameters { + if !stringInSlice(k, availableParams) { + desc := fmt.Sprintf("The parameter %s is not available.", k) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + params[k] = paramValue(v) + } + for _, p := range getRequiredParams(plan) { + if _, ok := params[p]; !ok { + desc := fmt.Sprintf("The parameter %s is required.", p) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + } + glog.V(10).Infof("params=%v", params) + + instance := &serviceinstance.ServiceInstance{ + ID: request.InstanceID, + ServiceID: request.ServiceID, + Params: params, + PlanID: request.PlanID, + } + + // Verify that the instance doesn't already exist + i, err := b.db.DataStorePort.GetServiceInstance(instance.ID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service instance %s: %v", instance.ID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if i != nil { + // TODO: This logic could use some love. The docs state that 200 OK MUST be + // returned if the service instance already exists, is fully provisioned, + // and the requested parameters are identical to the existing service + // instance. Right now, this doesn't check whether the instance is fully + // provisioned, and the reflect.DeepEqual check in Match will return false + // if the parameter order is different. 
+ if i.Match(instance) { + glog.Infof("Service instance %s already exists.", instance.ID) + response := broker.ProvisionResponse{} + response.Exists = true + return &response, nil + } + glog.V(10).Infof("i=%+v instance=%+v", *i, *instance) + desc := fmt.Sprintf("Service instance %s already exists but with different attributes.", instance.ID) + return nil, newHTTPStatusCodeError(http.StatusConflict, "", desc) + } + + // Create the CFN stack + cfnSvc := b.Clients.NewCfn(b.GetSession(b.keyid, b.secretkey, b.region, b.accountId, b.profile, params)) + resp, err := cfnSvc.Client.CreateStack(&cloudformation.CreateStackInput{ + Capabilities: aws.StringSlice([]string{cloudformation.CapabilityCapabilityNamedIam}), + Parameters: toCFNParams(params), + StackName: aws.String(getStackName(service.Name, instance.ID)), + Tags: []*cloudformation.Tag{ + { + Key: aws.String("aws-service-broker:broker-id"), + Value: aws.String(b.brokerid), + }, + { + Key: aws.String("aws-service-broker:instance-id"), + Value: aws.String(request.InstanceID), + }, + { + Key: aws.String("aws-service-broker:cluster"), + Value: aws.String(cluster), + }, + { + Key: aws.String("aws-service-broker:namespace"), + Value: aws.String(namespace), + }, + }, + TemplateURL: b.generateS3HTTPUrl(service.Name), + }) + if err != nil { + desc := fmt.Sprintf("Failed to create the CloudFormation stack: %v", err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + instance.StackID = aws.StringValue(resp.StackId) + err = b.db.DataStorePort.PutServiceInstance(*instance) + if err != nil { + // Try to delete the stack + if _, err := cfnSvc.Client.DeleteStack(&cloudformation.DeleteStackInput{StackName: aws.String(instance.StackID)}); err != nil { + glog.Errorf("Failed to delete the CloudFormation stack %s: %v", instance.StackID, err) + } + + desc := fmt.Sprintf("Failed to create the service instance %s: %v", request.InstanceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + response := broker.ProvisionResponse{} + response.Async = true + return &response, nil +} + +// Deprovision executed when the osb api receives DELETE /v2/service_instances/:instance_id +// https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#deprovisioning +func (b *AwsBroker) Deprovision(request *osb.DeprovisionRequest, c *broker.RequestContext) (*broker.DeprovisionResponse, error) { + response := broker.DeprovisionResponse{} + si, err := b.db.DataStorePort.GetServiceInstance(request.InstanceID) + if err != nil { + panic(err) + } + if si == nil || si.StackID == "" { + errmsg := "CloudFormation stackid missing, chances are stack creation failed in an unexpected way, assuming there is nothing to deprovision" + glog.Errorln(errmsg) + response.Async = false + return &response, nil + } + glog.V(10).Infoln(si.Params) + cfnsvc := b.Clients.NewCfn(b.GetSession(b.keyid, b.secretkey, b.region, b.accountId, b.profile, si.Params)) + _, err = cfnsvc.Client.DeleteStack(&cloudformation.DeleteStackInput{StackName: aws.String(si.StackID)}) + if err != nil { + panic(err) + } + + if request.AcceptsIncomplete { + response.Async = true + } + return &response, nil +} + +// LastOperation executed when the osb api receives GET /v2/service_instances/:instance_id/last_operation +// https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#polling-last-operation +func (b *AwsBroker) LastOperation(request *osb.LastOperationRequest, c *broker.RequestContext) (*broker.LastOperationResponse, error) { 
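+	// The stack status returned by DescribeStacks below is mapped onto the OSB
+	// operation state: failure and rollback statuses are reported as "failed",
+	// create/update/delete in-progress statuses as "in progress", and the
+	// corresponding *_COMPLETE statuses as "succeeded". A missing stack ID is
+	// reported as a failed operation.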
+ glog.Infoln(request) + glog.Infoln(c) + si, err := b.db.DataStorePort.GetServiceInstance(request.InstanceID) + if err != nil { + panic(err) + } + glog.Infoln(si) + r := broker.LastOperationResponse{LastOperationResponse: osb.LastOperationResponse{State: "", Description: nil}} + if si == nil || si.StackID == "" { + errmsg := "CloudFormation stackid missing, chances are stack creation failed in an unexpected way" + glog.Errorln(errmsg) + r.LastOperationResponse.State = "failed" + r.LastOperationResponse.Description = &errmsg + return &r, nil + } + glog.V(10).Infoln(si.Params) + cfnsvc := b.Clients.NewCfn(b.GetSession(b.keyid, b.secretkey, b.region, b.accountId, b.profile, si.Params)) + response, err := cfnsvc.Client.DescribeStacks(&cloudformation.DescribeStacksInput{StackName: aws.String(si.StackID)}) + if err != nil { + panic(err) + } + failedstates := []string{"CREATE_FAILED", "ROLLBACK_IN_PROGRESS", "ROLLBACK_FAILED", "ROLLBACK_COMPLETE", "DELETE_FAILED", "UPDATE_ROLLBACK_IN_PROGRESS", "UPDATE_ROLLBACK_FAILED", "UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS", "UPDATE_ROLLBACK_COMPLETE"} + progressingstates := []string{"CREATE_IN_PROGRESS", "DELETE_IN_PROGRESS", "UPDATE_IN_PROGRESS", "UPDATE_COMPLETE_CLEANUP_IN_PROGRESS"} + successfulstates := []string{"CREATE_COMPLETE", "DELETE_COMPLETE", "UPDATE_COMPLETE"} + status := *response.Stacks[0].StackStatus + if stringInSlice(status, failedstates) { + glog.Errorf("CloudFormation stack failed %#+v", si.StackID) + glog.Errorf(status) + r.LastOperationResponse.State = "failed" + r.LastOperationResponse.Description = response.Stacks[0].StackStatusReason + return &r, nil + } else if stringInSlice(status, progressingstates) { + glog.Infoln("CloudFormation stack still busy...") + glog.Infoln(status) + r.LastOperationResponse.State = "in progress" + r.LastOperationResponse.Description = response.Stacks[0].StackStatusReason + return &r, nil + } else if stringInSlice(status, successfulstates) { + glog.Infoln("CloudFormation stack operation completed...") + glog.Infoln(status) + r.LastOperationResponse.State = "succeeded" + r.LastOperationResponse.Description = response.Stacks[0].StackStatusReason + return &r, nil + } else { + return nil, fmt.Errorf("unexpected cfn status %v", status) + } +} + +// Bind is executed when the OSB API receives `PUT /v2/service_instances/:instance_id/service_bindings/:binding_id` +// (https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#request-4). 
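+// Bind looks up the CloudFormation stack backing the service instance, builds
+// the binding credentials from the stack outputs (resolving values stored in
+// SSM parameters where needed), and, when a role name is passed as a binding
+// parameter, attaches the stack's scoped IAM policy to that role.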
+func (b *AwsBroker) Bind(request *osb.BindRequest, c *broker.RequestContext) (*broker.BindResponse, error) { + glog.V(10).Infof("request=%+v", *request) + + binding := &serviceinstance.ServiceBinding{ + ID: request.BindingID, + InstanceID: request.InstanceID, + } + + // Get the binding params + for k, v := range request.Parameters { + if strings.EqualFold(k, bindParamRoleName) { + binding.RoleName = paramValue(v) + } else if strings.EqualFold(k, bindParamScope) { + binding.Scope = paramValue(v) + } else { + desc := fmt.Sprintf("The parameter %s is not supported.", k) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + } + + // Verify that the binding doesn't already exist + sb, err := b.db.DataStorePort.GetServiceBinding(binding.ID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service binding %s: %v", binding.ID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if sb != nil { + if sb.Match(binding) { + glog.Infof("Service binding %s already exists.", binding.ID) + response := broker.BindResponse{} + response.Exists = true + return &response, nil + } + desc := fmt.Sprintf("Service binding %s already exists but with different attributes.", binding.ID) + return nil, newHTTPStatusCodeError(http.StatusConflict, "", desc) + } + + // Get the service (this is only required because the USER_KEY_ID and + // USER_SECRET_KEY credentials need to be prefixed with the service name for + // backward compatibility) + service, err := b.db.DataStorePort.GetServiceDefinition(request.ServiceID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service %s: %v", request.ServiceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if service == nil { + desc := fmt.Sprintf("The service %s was not found.", request.ServiceID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Get the instance + instance, err := b.db.DataStorePort.GetServiceInstance(binding.InstanceID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service instance %s: %v", binding.InstanceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if instance == nil { + desc := fmt.Sprintf("The service instance %s was not found.", binding.InstanceID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + sess := b.GetSession(b.keyid, b.secretkey, b.region, b.accountId, b.profile, instance.Params) + + // Get the CFN stack outputs + resp, err := b.Clients.NewCfn(sess).Client.DescribeStacks(&cloudformation.DescribeStacksInput{ + StackName: aws.String(instance.StackID), + }) + if err != nil { + desc := fmt.Sprintf("Failed to describe the CloudFormation stack %s: %v", instance.StackID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + // Get the credentials from the CFN stack outputs + credentials, err := getCredentials(service, resp.Stacks[0].Outputs, b.Clients.NewSsm(sess)) + if err != nil { + desc := fmt.Sprintf("Failed to get the credentials from CloudFormation stack %s: %v", instance.StackID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + if binding.RoleName != "" { + policyArn, err := getPolicyArn(resp.Stacks[0].Outputs, binding.Scope) + if err != nil { + desc := fmt.Sprintf("The CloudFormation stack %s does not support binding with scope '%s': %v", instance.StackID, binding.Scope, err) + return nil, 
newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Attach the scoped policy to the role + _, err = b.Clients.NewIam(sess).AttachRolePolicy(&iam.AttachRolePolicyInput{ + PolicyArn: aws.String(policyArn), + RoleName: aws.String(binding.RoleName), + }) + if err != nil { + desc := fmt.Sprintf("Failed to attach the policy %s to role %s: %v", policyArn, binding.RoleName, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + binding.PolicyArn = policyArn + } + + // Store the binding + err = b.db.DataStorePort.PutServiceBinding(*binding) + if err != nil { + desc := fmt.Sprintf("Failed to store the service binding %s: %v", binding.ID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + return &broker.BindResponse{ + BindResponse: osb.BindResponse{ + Credentials: credentials, + }, + }, nil +} + +func (b *AwsBroker) GetBinding(request *osb.GetBindingRequest, c *broker.RequestContext) (*broker.GetBindingResponse, error) { + glog.V(10).Infoln(request) + glog.V(10).Infoln(c) + return &broker.GetBindingResponse{}, nil +} + +func BindingLastOperation(request *osb.BindingLastOperationRequest, c *broker.RequestContext) (*broker.LastOperationResponse, error) { + glog.V(10).Infoln(request) + glog.V(10).Infoln(c) + return &broker.LastOperationResponse{}, nil +} + +// Unbind is executed when the OSB API receives `DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id` +// (https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#request-5). +func (b *AwsBroker) Unbind(request *osb.UnbindRequest, c *broker.RequestContext) (*broker.UnbindResponse, error) { + glog.V(10).Infof("request=%+v", *request) + + // Get the binding + binding, err := b.db.DataStorePort.GetServiceBinding(request.BindingID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service binding %s: %v", request.BindingID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if binding == nil { + desc := fmt.Sprintf("The service binding %s was not found.", request.BindingID) + return nil, newHTTPStatusCodeError(http.StatusGone, "", desc) + } + + if binding.PolicyArn != "" { + instance, err := b.db.DataStorePort.GetServiceInstance(binding.InstanceID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service instance %s: %v", binding.InstanceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if instance == nil { + desc := fmt.Sprintf("The service instance %s was not found.", binding.InstanceID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + sess := b.GetSession(b.keyid, b.secretkey, b.region, b.accountId, b.profile, instance.Params) + + // Detach the scoped policy from the role + _, err = b.Clients.NewIam(sess).DetachRolePolicy(&iam.DetachRolePolicyInput{ + PolicyArn: aws.String(binding.PolicyArn), + RoleName: aws.String(binding.RoleName), + }) + if err != nil { + if aerr, ok := err.(awserr.Error); ok && aerr.Code() == iam.ErrCodeNoSuchEntityException { + glog.Infof("The policy %s was already detached from role %s.", binding.PolicyArn, binding.RoleName) + } else { + desc := fmt.Sprintf("Failed to detach the policy %s from role %s: %v", binding.PolicyArn, binding.RoleName, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + } + } + + // Delete the binding + err = b.db.DataStorePort.DeleteServiceBinding(binding.ID) + if err != nil { + desc := 
fmt.Sprintf("Failed to delete the service binding %s: %v", binding.ID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + return &broker.UnbindResponse{}, nil +} + +// Update is executed when the OSB API receives `PATCH /v2/service_instances/:instance_id` +// (https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#updating-a-service-instance). +func (b *AwsBroker) Update(request *osb.UpdateInstanceRequest, c *broker.RequestContext) (*broker.UpdateInstanceResponse, error) { + glog.V(10).Infof("request=%+v", *request) + + if !request.AcceptsIncomplete { + return nil, newAsyncError() + } + + // Get the service instance + instance, err := b.db.DataStorePort.GetServiceInstance(request.InstanceID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service instance %s: %v", request.InstanceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if instance == nil { + desc := fmt.Sprintf("The service instance %s was not found.", request.InstanceID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Verify that we're not changing the plan (this should never happen since + // we're setting `plan_updateable: false`, but better safe than sorry) + if request.PlanID != nil && *request.PlanID != instance.PlanID { + desc := fmt.Sprintf("The service plan cannot be changed from %s to %s.", instance.PlanID, *request.PlanID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Get the service + service, err := b.db.DataStorePort.GetServiceDefinition(request.ServiceID) + if err != nil { + desc := fmt.Sprintf("Failed to get the service %s: %v", request.ServiceID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } else if service == nil { + desc := fmt.Sprintf("The service %s was not found.", request.ServiceID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Get the plan and verify that it has updatable parameters + plan := getPlan(service, instance.PlanID) + if plan == nil { + desc := fmt.Sprintf("The service plan %s was not found.", instance.PlanID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } else if plan.Schemas.ServiceInstance.Update == nil { + desc := fmt.Sprintf("The service plan %s has no updatable parameters.", instance.PlanID) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + + // Get the parameters + params := getPlanDefaults(plan) + paramsUpdated := false + updatableParams := getUpdatableParams(plan) + for k, v := range instance.Params { + params[k] = v + } + for k, v := range request.Parameters { + newValue := paramValue(v) + if params[k] != newValue { + if !stringInSlice(k, updatableParams) { + desc := fmt.Sprintf("The parameter %s is not updatable.", k) + return nil, newHTTPStatusCodeError(http.StatusBadRequest, "", desc) + } + params[k] = newValue + paramsUpdated = true + } + } + if !paramsUpdated { + // Nothing to do, so return success (if we try a CFN update, it'll fail) + return &broker.UpdateInstanceResponse{}, nil + } + glog.V(10).Infof("params=%v", params) + + // Update the CFN stack + cfnSvc := b.Clients.NewCfn(b.GetSession(b.keyid, b.secretkey, b.region, b.accountId, b.profile, params)) + _, err = cfnSvc.Client.UpdateStack(&cloudformation.UpdateStackInput{ + Capabilities: aws.StringSlice([]string{cloudformation.CapabilityCapabilityNamedIam}), + Parameters: toCFNParams(params), + StackName: 
aws.String(instance.StackID), + TemplateURL: b.generateS3HTTPUrl(service.Name), + }) + if err != nil { + desc := fmt.Sprintf("Failed to update the CloudFormation stack %s: %v", instance.StackID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + // Update the params in the DB + instance.Params = params + err = b.db.DataStorePort.PutServiceInstance(*instance) + if err != nil { + // Try to cancel the update + if _, err := cfnSvc.Client.CancelUpdateStack(&cloudformation.CancelUpdateStackInput{StackName: aws.String(instance.StackID)}); err != nil { + glog.Errorf("Failed to cancel updating the CloudFormation stack %s: %v", instance.StackID, err) + glog.Errorf("Service instance %s and CloudFormation stack %s may be out of sync!", instance.ID, instance.StackID) + } + + desc := fmt.Sprintf("Failed to update the service instance %s: %v", instance.ID, err) + return nil, newHTTPStatusCodeError(http.StatusInternalServerError, "", desc) + } + + response := broker.UpdateInstanceResponse{} + response.Async = true + return &response, nil +} + +func (b *AwsBroker) BindingLastOperation(request *osb.BindingLastOperationRequest, c *broker.RequestContext) (*broker.LastOperationResponse, error) { + return &broker.LastOperationResponse{LastOperationResponse: osb.LastOperationResponse{State: "", Description: nil}}, nil +} diff --git a/pkg/broker/api_test.go b/pkg/broker/api_test.go new file mode 100644 index 00000000..b9d26add --- /dev/null +++ b/pkg/broker/api_test.go @@ -0,0 +1,376 @@ +package broker + +import ( + "errors" + "net/http" + "testing" + + "github.com/awslabs/aws-service-broker/pkg/serviceinstance" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/cloudformation" + osb "github.com/pmorie/go-open-service-broker-client/v2" + "github.com/pmorie/osb-broker-lib/pkg/broker" + "github.com/stretchr/testify/assert" +) + +func TestGetCatalog(t *testing.T) { + assertor := assert.New(t) + + opts := Options{ + TableName: "testtable", + S3Bucket: "abucket", + S3Region: "us-east-1", + S3Key: "tempates/test", + Region: "us-east-1", + BrokerID: "awsservicebroker", + PrescribeOverrides: false, + } + bl, _ := NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.listingcache.Set("__LISTINGS__", []ServiceNeedsUpdate{{Name: "test", Update: false}}) + + expected := &broker.CatalogResponse{CatalogResponse: osb.CatalogResponse{}} + actual, err := bl.GetCatalog(&broker.RequestContext{}) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should return empty catalog") + + svc := osb.Service{ + ID: "test-id", + Name: "test", + Description: "blah", + Plans: []osb.Plan{ + { + ID: "planid", + Name: "planname", + Schemas: &osb.Schemas{}, + }, + }, + } + + bl.catalogcache.Set("test", svc) + expected = &broker.CatalogResponse{CatalogResponse: osb.CatalogResponse{Services: []osb.Service{svc}}} + actual, err = bl.GetCatalog(&broker.RequestContext{}) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should return a single service matching the mock") +} + +type mockDataStoreProvision struct{} + +func (db mockDataStoreProvision) PutServiceDefinition(sd osb.Service) error { return nil } +func (db mockDataStoreProvision) GetParam(paramname string) (value string, err error) { + return "some-value", nil +} +func (db mockDataStoreProvision) PutParam(paramname string, paramvalue string) error { return nil } +func (db 
mockDataStoreProvision) PutServiceInstance(si serviceinstance.ServiceInstance) error { + return nil +} +func (db mockDataStoreProvision) GetServiceDefinition(serviceuuid string) (*osb.Service, error) { + if serviceuuid == "test-service-id" { + return &osb.Service{ + ID: "test-service-id", + Name: "test-service-name", + Plans: []osb.Plan{ + {ID: "test-plan-id", Name: "test-plan-name", Schemas: &osb.Schemas{ServiceInstance: &osb.ServiceInstanceSchema{ + Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string", "required": true}, + "override_param": map[string]interface{}{"type": "string"}, + "region": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []interface{}{"req_param"}, + }, + }, + }}}, + }, + }, nil + } else if serviceuuid == "err" { + return nil, errors.New("test failure") + } else if serviceuuid == "noplan" { + return &osb.Service{}, nil + } + return nil, nil +} +func (db mockDataStoreProvision) GetServiceInstance(sid string) (*serviceinstance.ServiceInstance, error) { + if sid == "err" { + return nil, errors.New("test failure") + } else if sid == "exists" { + return &serviceinstance.ServiceInstance{StackID: "an-id"}, nil + } + return nil, nil +} +func (db mockDataStoreProvision) GetServiceBinding(id string) (*serviceinstance.ServiceBinding, error) { + if id == "exists" { + return &serviceinstance.ServiceBinding{ + ID: "exists", + InstanceID: "exists", + }, nil + } + return nil, nil +} +func (db mockDataStoreProvision) PutServiceBinding(sb serviceinstance.ServiceBinding) error { + return nil +} +func (db mockDataStoreProvision) DeleteServiceBinding(id string) error { return nil } + +func TestProvision(t *testing.T) { + assertor := assert.New(t) + + opts := Options{ + TableName: "testtable", + S3Bucket: "abucket", + S3Region: "us-east-1", + S3Key: "tempates/test", + Region: "us-east-1", + BrokerID: "awsservicebroker", + PrescribeOverrides: true, + } + bl, _ := NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + bl.globalOverrides = map[string]string{"override_param": "some_value"} + provReq := &osb.ProvisionRequest{ + InstanceID: "test-instance-id", + ServiceID: "test-service-id", + PlanID: "test-plan-id", + OriginatingIdentity: &osb.OriginatingIdentity{}, + AcceptsIncomplete: true, + Parameters: map[string]interface{}{ + "region": "us-east-1", + "anotherParam": "pval", + }, + } + reqContext := &broker.RequestContext{} + + expectedErr := newHTTPStatusCodeError(http.StatusBadRequest, "", "The parameter anotherParam is not available.") + _, err := bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with missing parameter error") + + provReq.Parameters = map[string]interface{}{ + "region": "us-east-1", + } + expectedErr = newHTTPStatusCodeError(http.StatusBadRequest, "", "The parameter req_param is required.") + _, err = bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with required parameter error") + + provReq.Parameters = map[string]interface{}{ + "region": "us-east-1", + "req_param": "pval", + } + expected := &broker.ProvisionResponse{ProvisionResponse: osb.ProvisionResponse{Async: true}} + actual, err := bl.Provision(provReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should 
return empty provision response") + + expectedErr = osb.HTTPStatusCodeError{ + StatusCode: 422, + ErrorMessage: aws.String("AsyncRequired"), + Description: aws.String("This service plan requires client support for asynchronous service operations."), + } + _, err = bl.Provision(&osb.ProvisionRequest{AcceptsIncomplete: false}, &broker.RequestContext{}) + assertor.Equal(expectedErr, err, "err should be 422") + + expectedErr = newHTTPStatusCodeError(http.StatusBadRequest, "", "The service plan test-plan-id was not found.") + provReq.ServiceID = "noplan" + _, err = bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with missing plan error") + + expectedErr = newHTTPStatusCodeError(http.StatusInternalServerError, "", "Failed to get the service err: test failure") + provReq.ServiceID = "err" + _, err = bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with 500 test error") + + expectedErr = newHTTPStatusCodeError(http.StatusBadRequest, "", "The service nonexist was not found.") + provReq.ServiceID = "nonexist" + _, err = bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with 500 error") + + expectedErr = newHTTPStatusCodeError(http.StatusInternalServerError, "", "Failed to get the service instance err: test failure") + provReq.ServiceID = "test-service-id" + provReq.InstanceID = "err" + _, err = bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with 500 error") + + expectedErr = newHTTPStatusCodeError(http.StatusConflict, "", "Service instance exists already exists but with different attributes.") + provReq.ServiceID = "test-service-id" + provReq.InstanceID = "exists" + _, err = bl.Provision(provReq, reqContext) + assertor.Equal(expectedErr, err, "should fail with 500 error") + +} + +func TestDeprovision(t *testing.T) { + assertor := assert.New(t) + + opts := Options{ + TableName: "testtable", + S3Bucket: "abucket", + S3Region: "us-east-1", + S3Key: "tempates/test", + Region: "us-east-1", + BrokerID: "awsservicebroker", + PrescribeOverrides: true, + } + bl, _ := NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + + deprovReq := &osb.DeprovisionRequest{ + InstanceID: "test-instance-id", + AcceptsIncomplete: true, + } + reqContext := &broker.RequestContext{} + + expected := &broker.DeprovisionResponse{} + actual, err := bl.Deprovision(deprovReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed even if stack is not in serviceInstance (was never created)") + + bl.accountId = "test" + bl.secretkey = "testkey" + + deprovReq.InstanceID = "exists" + expected.Async = true + actual, err = bl.Deprovision(deprovReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed even if stack is not in serviceInstance (was never created)") + +} + +func TestLastOperation(t *testing.T) { + assertor := assert.New(t) + + opts := Options{ + TableName: "testtable", + S3Bucket: "abucket", + S3Region: "us-east-1", + S3Key: "tempates/test", + Region: "us-east-1", + BrokerID: "awsservicebroker", + PrescribeOverrides: true, + } + + bl, _ := NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + + loReq := &osb.LastOperationRequest{InstanceID: "test-instance-id"} + reqContext := 
&broker.RequestContext{} + msg := "CloudFormation stackid missing, chances are stack creation failed in an unexpected way" + expected := &broker.LastOperationResponse{LastOperationResponse: osb.LastOperationResponse{State: "failed", Description: &msg}} + actual, err := bl.LastOperation(loReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed even if stack is not in serviceInstance (was never created)") + + mockClients.NewCfn = func(sess *session.Session) CfnClient { + return CfnClient{mockCfn{ + DescribeStacksResponse: cloudformation.DescribeStacksOutput{ + NextToken: nil, + Stacks: []*cloudformation.Stack{ + { + StackStatus: aws.String("CREATE_IN_PROGRESS"), + }, + }, + }, + }} + } + bl, _ = NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + expected = &broker.LastOperationResponse{LastOperationResponse: osb.LastOperationResponse{State: "in progress", Description: nil}} + loReq.InstanceID = "exists" + actual, err = bl.LastOperation(loReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed even if stack is not in serviceInstance (was never created)") + + mockClients.NewCfn = func(sess *session.Session) CfnClient { + return CfnClient{mockCfn{ + DescribeStacksResponse: cloudformation.DescribeStacksOutput{ + NextToken: nil, + Stacks: []*cloudformation.Stack{ + { + StackStatus: aws.String("CREATE_FAILED"), + }, + }, + }, + }} + } + bl, _ = NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + expected = &broker.LastOperationResponse{LastOperationResponse: osb.LastOperationResponse{State: "failed", Description: nil}} + loReq.InstanceID = "exists" + actual, err = bl.LastOperation(loReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed even if stack is not in serviceInstance (was never created)") + + mockClients.NewCfn = func(sess *session.Session) CfnClient { + return CfnClient{mockCfn{ + DescribeStacksResponse: cloudformation.DescribeStacksOutput{ + NextToken: nil, + Stacks: []*cloudformation.Stack{ + { + StackStatus: aws.String("CREATE_COMPLETE"), + }, + }, + }, + }} + } + bl, _ = NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + expected = &broker.LastOperationResponse{LastOperationResponse: osb.LastOperationResponse{State: "succeeded", Description: nil}} + loReq.InstanceID = "exists" + actual, err = bl.LastOperation(loReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed even if stack is not in serviceInstance (was never created)") +} + +func TestBind(t *testing.T) { + assertor := assert.New(t) + + opts := Options{ + TableName: "testtable", + S3Bucket: "abucket", + S3Region: "us-east-1", + S3Key: "tempates/test", + Region: "us-east-1", + BrokerID: "awsservicebroker", + PrescribeOverrides: true, + } + bl, _ := NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + + bindReq := &osb.BindRequest{ + BindingID: "test-bind-id", + InstanceID: "exists", + AcceptsIncomplete: true, + ServiceID: "test-service-id", + } + reqContext := &broker.RequestContext{} + 
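+	// Note: the mocked CloudFormation client returns a stack with no outputs and
+	// the request carries no RoleName/Scope parameters, so Bind should come back
+	// with an empty credentials map and without touching IAM.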
+ expected := &broker.BindResponse{BindResponse: osb.BindResponse{Credentials: map[string]interface{}{}}} + actual, err := bl.Bind(bindReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed") + +} + +func TestUnbind(t *testing.T) { + assertor := assert.New(t) + + opts := Options{ + TableName: "testtable", + S3Bucket: "abucket", + S3Region: "us-east-1", + S3Key: "tempates/test", + Region: "us-east-1", + BrokerID: "awsservicebroker", + PrescribeOverrides: true, + } + bl, _ := NewAWSBroker(opts, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bl.db.DataStorePort = mockDataStoreProvision{} + + unbindReq := &osb.UnbindRequest{BindingID: "exists"} + reqContext := &broker.RequestContext{} + + expected := &broker.UnbindResponse{UnbindResponse: osb.UnbindResponse{}} + actual, err := bl.Unbind(unbindReq, reqContext) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "should succeed") + +} diff --git a/pkg/broker/aws_sdk.go b/pkg/broker/aws_sdk.go new file mode 100644 index 00000000..07206c3a --- /dev/null +++ b/pkg/broker/aws_sdk.go @@ -0,0 +1,75 @@ +package broker + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/credentials/stscreds" + "github.com/aws/aws-sdk-go/aws/ec2metadata" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go/service/ssm" + "github.com/aws/aws-sdk-go/service/sts" + "github.com/aws/aws-sdk-go/service/sts/stsiface" + "github.com/golang/glog" +) + +// Create AWS Session +func AwsSessionGetter(keyid string, secretkey string, region string, accountId string, profile string, params map[string]string) *session.Session { + creds := AwsCredentialsGetter(keyid, secretkey, profile, params, ec2metadata.New(session.Must(session.NewSession()))) + cfg := aws.NewConfig().WithCredentials(&creds).WithRegion(region) + currentAccountSession := session.Must(session.NewSession(cfg)) + sess, err := assumeTargetRole(currentAccountSession, params, region, accountId) + if err != nil { + panic(err) + } + return sess +} + +func AwsCfnClientGetter(sess *session.Session) CfnClient { + return CfnClient{cloudformation.New(sess)} +} + +func AwsSsmClientGetter(sess *session.Session) *ssm.SSM { + return ssm.New(sess) +} + +func AwsS3ClientGetter(sess *session.Session) S3Client { + return S3Client{s3.New(sess)} +} + +func AwsDdbClientGetter(sess *session.Session) *dynamodb.DynamoDB { + return dynamodb.New(sess) +} + +func AwsStsClientGetter(sess *session.Session) *sts.STS { + return sts.New(sess) +} + +func AwsIamClientGetter(sess *session.Session) *iam.IAM { + return iam.New(sess) +} + +func GetCallerId(svc stsiface.STSAPI) (*sts.GetCallerIdentityOutput, error) { + return svc.GetCallerIdentity(&sts.GetCallerIdentityInput{}) +} + +func assumeTargetRole(sess *session.Session, params map[string]string, region string, accountId string) (*session.Session, error) { + + if _, ok := params["target_role_name"]; !ok { + glog.Infof("Parameter 'target_role_name' not set. 
Not assuming role.") + return sess, nil + } + + targetAccountRoleArn := generateRoleArn(params, accountId) + glog.Infof("Assuming role arn '%s'.", targetAccountRoleArn) + credentialsTargetAccount := stscreds.NewCredentials(sess, targetAccountRoleArn) + + sessionTargetAccount := session.Must(session.NewSession(&aws.Config{ + Region: ®ion, + Credentials: credentialsTargetAccount, + })) + + return sessionTargetAccount, nil +} diff --git a/pkg/broker/awsbroker.go b/pkg/broker/awsbroker.go new file mode 100644 index 00000000..fc51caf4 --- /dev/null +++ b/pkg/broker/awsbroker.go @@ -0,0 +1,368 @@ +package broker + +import ( + "fmt" + "io/ioutil" + "os" + "strings" + "time" + + "github.com/awslabs/aws-service-broker/pkg/dynamodbadapter" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/go-errors/errors" + "github.com/golang/glog" + "github.com/koding/cache" + osb "github.com/pmorie/go-open-service-broker-client/v2" + uuid "github.com/satori/go.uuid" + yaml "gopkg.in/yaml.v2" +) + +// Runs at startup and bootstraps the broker +func NewAWSBroker(o Options, awssess GetAwsSession, clients AwsClients, getCallerId GetCallerIder, updateCatalog UpdateCataloger, pollUpdate PollUpdater) (*AwsBroker, error) { + + sess := awssess(o.KeyID, o.SecretKey, o.Region, "", o.Profile, map[string]string{}) + s3sess := awssess(o.KeyID, o.SecretKey, o.S3Region, "", o.Profile, map[string]string{}) + s3svc := clients.NewS3(s3sess) + ddbsvc := clients.NewDdb(sess) + stssvc := clients.NewSts(sess) + callerid, err := getCallerId(stssvc) + if err != nil { + return &AwsBroker{}, err + } + accountid := *callerid.Account + accountuuid := uuid.NewV5(uuid.NullUUID{}.UUID, accountid+o.BrokerID) + + glog.Infof("Running as caller identity '%+v'.", callerid) + + var db Db + db.Brokerid = o.BrokerID + db.Accountid = accountid + db.Accountuuid = accountuuid + + // connect DynamoDB adapter to storage port + db.DataStorePort = dynamodbadapter.DdbDataStore{ + Accountid: accountid, + Accountuuid: accountuuid, + Brokerid: o.BrokerID, + Region: o.Region, + Ddb: *ddbsvc, + Tablename: o.TableName, + } + + // setup in memory cache + var catalogcache = cache.NewMemoryWithTTL(time.Duration(CacheTTL)) + var listingcache = cache.NewMemoryWithTTL(time.Duration(CacheTTL)) + listingcache.StartGC(time.Minute * 5) + bd := &BucketDetailsRequest{ + o.S3Bucket, + o.S3Key, + o.TemplateFilter, + } + + // populate broker variables + bl := AwsBroker{ + accountId: accountid, + keyid: o.KeyID, + secretkey: o.SecretKey, + profile: o.Profile, + tablename: o.TableName, + s3bucket: o.S3Bucket, + s3region: o.S3Region, + s3key: AddTrailingSlash(o.S3Key), + templatefilter: o.TemplateFilter, + region: o.Region, + s3svc: s3svc, + catalogcache: catalogcache, + listingcache: listingcache, + brokerid: o.BrokerID, + db: db, + GetSession: awssess, + Clients: clients, + prescribeOverrides: o.PrescribeOverrides, + globalOverrides: getGlobalOverrides(o.BrokerID), + } + + // get catalog and setup periodic updates from S3 + err = updateCatalog(listingcache, catalogcache, *bd, s3svc, db, bl, ListTemplates, ListingUpdate, MetadataUpdate) + if err != nil { + return &AwsBroker{}, err + } + go pollUpdate(600, listingcache, catalogcache, *bd, s3svc, db, bl, updateCatalog, ListTemplates) + return &bl, nil +} + +func UpdateCatalog(listingcache cache.Cache, catalogcache cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, listTemplates 
ListTemplateser, listingUpdate ListingUpdater, metadataUpdate MetadataUpdater) error { + l, err := listTemplates(&bd, &bl) + if err != nil { + if strings.HasPrefix(err.Error(), "NoSuchBucket: The specified bucket does not exist") { + return errors.New("Cannot access S3 Bucket, either it does not exist or the IAM user/role the broker is configured to use has no access to the bucket") + } + return err + } + err = listingUpdate(l, listingcache) + if err != nil { + return err + } + err = metadataUpdate(listingcache, catalogcache, bd, s3svc, db, MetadataUpdate) + if err != nil { + return err + } + return nil +} + +func PollUpdate(interval int, l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, updateCatalog UpdateCataloger, listTemplates ListTemplateser) { + for { + time.Sleep(time.Duration(interval) * time.Second) + go updateCatalog(l, c, bd, s3svc, db, bl, listTemplates, ListingUpdate, MetadataUpdate) + } +} + +func MetadataUpdate(l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, metadataUpdate MetadataUpdater) error { + data, err := l.Get("__LISTINGS__") + if err != nil { + return err + } + for _, item := range data.([]ServiceNeedsUpdate) { + if item.Update { + key := bd.prefix + item.Name + "-spec.yaml" + obj, err := s3svc.Client.GetObject(&s3.GetObjectInput{ + Bucket: aws.String(bd.bucket), + Key: aws.String(key), + }) + if err != nil { + return err + } else if obj.Body == nil { + return errors.New("s3 object body missing") + } else { + file, err := ioutil.ReadAll(obj.Body) + if err != nil { + return err + } else { + var i map[string]interface{} + yamlerr := yaml.Unmarshal(file, &i) + if yamlerr != nil { + return yamlerr + } else { + osbdef := db.ServiceDefinitionToOsb(i) + if osbdef.Name != "" { + err = db.DataStorePort.PutServiceDefinition(osbdef) + if err == nil { + c.Set(item.Name, osbdef) + } else { + glog.V(10).Infoln(item) + glog.V(10).Infoln(osbdef) + glog.Errorln(err) + } + } else { + glog.Errorf("invalid service definition for %q returned", i["name"].(string)) + glog.Errorln(i) + glog.Errorln(osbdef) + } + } + } + } + } else { + i, geterr := c.Get(item.Name) + if geterr != nil { + glog.Errorln(geterr) + } else { + c.Set(item.Name, i) + } + } + } + return nil +} + +func ListingUpdate(l *[]ServiceLastUpdate, c cache.Cache) error { + var services []ServiceNeedsUpdate + for _, item := range *l { + data, err := c.Get(item.Name) + if err != nil { + if err.Error() == "not found" { + c.Set(item.Name, item.Date) + services = append(services, ServiceNeedsUpdate{Name: item.Name, Update: true}) + } else { + return err + } + } else { + if data.(time.Time).Unix() < item.Date.Unix() { + c.Set(item.Name, item.Date) + services = append(services, ServiceNeedsUpdate{Name: item.Name, Update: true}) + } else { + services = append(services, ServiceNeedsUpdate{Name: item.Name, Update: false}) + } + } + } + glog.Infof("Updating listings cache with %v", services) + c.Set("__LISTINGS__", services) + return nil +} + +func ListTemplates(s3source *BucketDetailsRequest, b *AwsBroker) (*[]ServiceLastUpdate, error) { + glog.Infoln("Listing objects bucket: " + s3source.bucket + " region: " + b.s3region + " prefix: " + s3source.prefix) + ListResponse, err := b.s3svc.Client.ListObjectsV2(&s3.ListObjectsV2Input{ + Bucket: aws.String(s3source.bucket), + Prefix: aws.String(s3source.prefix), + }) + if err != nil { + if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode { + fmt.Fprintf(os.Stderr, "upload canceled due to 
timeout, %v\n", err) + } else { + fmt.Fprintf(os.Stderr, "failed to list objects, %v\n", err) + } + return nil, err + } + numberOfRecords := 0 + for _, s3obj := range ListResponse.Contents { + if strings.HasSuffix(*s3obj.Key, s3source.suffix) { + numberOfRecords = numberOfRecords + 1 + } + } + glog.Infof("Found %x objects\n", numberOfRecords) + s := make([]ServiceLastUpdate, 0, numberOfRecords) + for _, s3obj := range ListResponse.Contents { + if strings.HasSuffix(*s3obj.Key, s3source.suffix) { + s = append(s, ServiceLastUpdate{ + Name: strings.TrimSuffix(strings.TrimPrefix(*s3obj.Key, s3source.prefix), s3source.suffix), + Date: *s3obj.LastModified, + }) + } + } + return &s, nil +} + +// ValidateBrokerAPIVersion still to determine supported api versions +func (b *AwsBroker) ValidateBrokerAPIVersion(version string) error { + glog.Infof("Client OSB API Version: %q", version) + return nil +} + +// ServiceDefinitionToOsb converts apb service definition into osb.Service struct +func (db Db) ServiceDefinitionToOsb(sd map[string]interface{}) osb.Service { + // TODO: Marshal spec straight from the yaml in an osb.Plan, possibly using gjson + glog.Infof("converting service definition %q ", sd["name"].(string)) + defer func() { + if r := recover(); r != nil { + glog.Errorln(errors.Wrap(r, 2).ErrorStack()) + glog.Errorf("Failed to convert service definition for %q", sd["name"].(string)) + } + }() + f := false + serviceid := uuid.NewV5(db.Accountuuid, sd["name"].(string)).String() + outp := osb.Service{} + outp.ID = serviceid + outp.Name = sd["name"].(string) + outp.Bindable = sd["bindable"].(bool) + outp.Description = sd["description"].(string) + outp.PlanUpdatable = &f + metadata := make(map[string]interface{}) + for index, key := range sd["metadata"].(map[interface{}]interface{}) { + metadata[index.(string)] = key + } + outp.Metadata = metadata + var tags []string + for _, key := range sd["tags"].([]interface{}) { + tags = append(tags, key.(string)) + } + outp.Tags = tags + var plans []osb.Plan + for _, key := range sd["plans"].([]interface{}) { + plan := osb.Plan{} + for i, k := range key.(map[interface{}]interface{}) { + if i.(string) == "name" { + plan.Name = k.(string) + } else if i.(string) == "description" { + plan.Description = k.(string) + } else if i.(string) == "free" { + free := k.(bool) + plan.Free = &free + } else if i.(string) == "metadata" { + metadata := make(map[string]interface{}) + for i2, k2 := range k.(map[interface{}]interface{}) { + metadata[i2.(string)] = k2 + } + plan.Metadata = metadata + } else if i.(string) == "parameters" { + propsForCreate := make(map[string]interface{}) + requiredForCreate := make([]string, 0) + propsForUpdate := make(map[string]interface{}) + requiredForUpdate := make([]string, 0) + for _, param := range k.([]interface{}) { + var name string + var required, updatable bool + pvals := make(map[string]interface{}) + for pk, pv := range param.(map[interface{}]interface{}) { + switch pk { + case "name": + name = pv.(string) + case "required": + required = pv.(bool) + case "type": + switch pv { + case "enum": + pvals[pk.(string)] = "string" + case "int": + pvals[pk.(string)] = "integer" + default: + pvals[pk.(string)] = pv + } + case "updatable": + updatable = pv.(bool) + default: + pvals[pk.(string)] = pv + } + } + propsForCreate[name] = pvals + if required { + requiredForCreate = append(requiredForCreate, name) + } + if updatable { + propsForUpdate[name] = pvals + if required { + requiredForUpdate = append(requiredForUpdate, name) + } + } + } + 
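+				// Fold the collected properties into JSON-Schema style parameter maps;
+				// an update schema is only attached further down when at least one
+				// parameter was marked updatable.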
plan.Schemas = &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{ + Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{ + "type": "object", + "properties": propsForCreate, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": requiredForCreate, + }, + }, + }, + } + if len(propsForUpdate) > 0 { + plan.Schemas.ServiceInstance.Update = &osb.InputParametersSchema{ + Parameters: map[string]interface{}{ + "type": "object", + "properties": propsForUpdate, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": requiredForUpdate, + }, + } + } + } + } + planid := uuid.NewV5(db.Accountuuid, "service__"+sd["name"].(string)+"__plan__"+plan.Name).String() + plan.ID = planid + plans = append(plans, plan) + } + outp.Plans = plans + glog.Infof("done converting service definition %q ", sd["name"].(string)) + return outp +} + +func (b *AwsBroker) generateS3HTTPUrl(serviceDefName string) *string { + prefix := "https://s3.amazonaws.com/" + if b.s3region != "us-east-1" { + prefix = fmt.Sprintf("https://s3-%s.amazonaws.com/", b.s3region) + } + return aws.String(prefix + b.s3bucket + "/" + b.s3key + strings.TrimSuffix(serviceDefName, "-apb") + b.templatefilter) +} diff --git a/pkg/broker/awsbroker_test.go b/pkg/broker/awsbroker_test.go new file mode 100644 index 00000000..e72ddbfa --- /dev/null +++ b/pkg/broker/awsbroker_test.go @@ -0,0 +1,329 @@ +package broker + +import ( + "errors" + "io/ioutil" + "log" + "strings" + "testing" + + "github.com/awslabs/aws-service-broker/pkg/serviceinstance" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/awstesting/mock" + "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface" + "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go/service/s3/s3iface" + "github.com/aws/aws-sdk-go/service/ssm" + "github.com/aws/aws-sdk-go/service/sts" + "github.com/aws/aws-sdk-go/service/sts/stsiface" + "github.com/koding/cache" + osb "github.com/pmorie/go-open-service-broker-client/v2" + uuid "github.com/satori/go.uuid" + "github.com/stretchr/testify/assert" + yaml "gopkg.in/yaml.v2" +) + +type TestCases map[string]Options + +func (T *TestCases) GetTests(f string) error { + yamlFile, err := ioutil.ReadFile(f) + if err != nil { + log.Printf("yamlFile.Get err #%v ", err) + return err + } + err = yaml.Unmarshal(yamlFile, &T) + if err != nil { + log.Printf("Unmarshal: %v", err) + return err + } + return nil +} + +func mockGetAwsSession(keyid string, secretkey string, region string, accountId string, profile string, params map[string]string) *session.Session { + sess := mock.Session + conf := aws.NewConfig() + conf.Region = aws.String(region) + return sess.Copy(conf) +} + +func mockAwsCfnClientGetter(sess *session.Session) CfnClient { + return CfnClient{mockCfn{ + DescribeStacksResponse: cloudformation.DescribeStacksOutput{}, + }} +} + +func mockAwsStsClientGetter(sess *session.Session) *sts.STS { + conf := aws.NewConfig() + conf.Region = sess.Config.Region + return &sts.STS{Client: mock.NewMockClient(conf)} +} + +func mockAwsS3ClientGetter(sess *session.Session) S3Client { + conf := aws.NewConfig() + conf.Region = sess.Config.Region + return S3Client{s3iface.S3API(&s3.S3{Client: mock.NewMockClient(conf)})} +} + +func mockAwsDdbClientGetter(sess *session.Session) *dynamodb.DynamoDB { + conf := aws.NewConfig() + conf.Region = sess.Config.Region + return 
&dynamodb.DynamoDB{Client: mock.NewMockClient(conf)} +} + +func mockAwsSsmClientGetter(sess *session.Session) *ssm.SSM { + conf := aws.NewConfig() + conf.Region = sess.Config.Region + return &ssm.SSM{Client: mock.NewMockClient(conf)} +} + +var mockClients = AwsClients{ + NewCfn: mockAwsCfnClientGetter, + NewSsm: mockAwsSsmClientGetter, + NewS3: mockAwsS3ClientGetter, + NewDdb: mockAwsDdbClientGetter, + NewSts: mockAwsStsClientGetter, +} + +func mockGetAccountId(svc stsiface.STSAPI) (*sts.GetCallerIdentityOutput, error) { + return &sts.GetCallerIdentityOutput{Account: aws.String("123456789012")}, nil +} + +func mockGetAccountIdFail(svc stsiface.STSAPI) (*sts.GetCallerIdentityOutput, error) { + return &sts.GetCallerIdentityOutput{}, errors.New("I should be failing...") +} + +func mockUpdateCatalog(listingcache cache.Cache, catalogcache cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, listTemplates ListTemplateser, listingUpdate ListingUpdater, metadataUpdate MetadataUpdater) error { + return nil +} + +func mockUpdateCatalogFail(listingcache cache.Cache, catalogcache cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, listTemplates ListTemplateser, listingUpdate ListingUpdater, metadataUpdate MetadataUpdater) error { + return errors.New("I failed") +} + +func mockPollUpdate(interval int, l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, updateCatalog UpdateCataloger, listTemplates ListTemplateser) { + +} + +// mock implementation of DataStore Adapter +type mockDataStore struct{} + +func (db mockDataStore) PutServiceDefinition(sd osb.Service) error { return nil } +func (db mockDataStore) GetParam(paramname string) (value string, err error) { return "some-value", nil } +func (db mockDataStore) PutParam(paramname string, paramvalue string) error { return nil } +func (db mockDataStore) PutServiceInstance(si serviceinstance.ServiceInstance) error { return nil } +func (db mockDataStore) GetServiceDefinition(serviceuuid string) (*osb.Service, error) { + service := osb.Service{ + ID: "", + Name: "", + Description: "", + Tags: nil, + Requires: nil, + Bindable: false, + BindingsRetrievable: false, + PlanUpdatable: nil, + Plans: nil, + DashboardClient: &osb.DashboardClient{ + ID: "", + Secret: "", + RedirectURI: "", + }, + Metadata: nil, + } + return &service, nil +} +func (db mockDataStore) GetServiceInstance(sid string) (*serviceinstance.ServiceInstance, error) { + si := serviceinstance.ServiceInstance{ + ID: "", + ServiceID: "", + PlanID: "", + Params: nil, + StackID: "", + } + return &si, nil +} +func (db mockDataStore) GetServiceBinding(id string) (*serviceinstance.ServiceBinding, error) { + return nil, nil +} +func (db mockDataStore) PutServiceBinding(sb serviceinstance.ServiceBinding) error { return nil } +func (db mockDataStore) DeleteServiceBinding(id string) error { return nil } + +func TestNewAwsBroker(t *testing.T) { + assert := assert.New(t) + options := new(TestCases) + options.GetTests("../../testcases/options.yaml") + + for _, v := range *options { + // Shouldn't error + bl, err := NewAWSBroker(v, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + assert.Nil(err) + + // check values are as expected + assert.Equal(v.KeyID, bl.keyid) + assert.Equal(v.SecretKey, bl.secretkey) + assert.Equal(v.Profile, bl.secretkey) + assert.Equal(v.Profile, bl.profile) + assert.Equal(v.TableName, bl.tablename) + assert.Equal(v.S3Bucket, bl.s3bucket) + assert.Equal(v.S3Region, 
bl.s3region) + assert.Equal(AddTrailingSlash(v.S3Key), bl.s3key) + assert.Equal(v.TemplateFilter, bl.templatefilter) + assert.Equal(v.Region, bl.region) + assert.Equal(v.BrokerID, bl.brokerid) + assert.Equal("123456789012", bl.db.Accountid) + assert.Equal(uuid.NewV5(uuid.NullUUID{}.UUID, "123456789012"+v.BrokerID), bl.db.Accountuuid) + assert.Equal(v.BrokerID, bl.db.Brokerid) + + // Should error + _, err = NewAWSBroker(v, mockGetAwsSession, mockClients, mockGetAccountIdFail, mockUpdateCatalog, mockPollUpdate) + assert.Error(err) + + // Should error + _, err = NewAWSBroker(v, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalogFail, mockPollUpdate) + assert.Error(err) + } +} + +func mockListTemplates(s3source *BucketDetailsRequest, b *AwsBroker) (*[]ServiceLastUpdate, error) { + return &[]ServiceLastUpdate{}, nil +} + +func mockListTemplatesFailNoBucket(s3source *BucketDetailsRequest, b *AwsBroker) (*[]ServiceLastUpdate, error) { + return &[]ServiceLastUpdate{}, errors.New("NoSuchBucket: The specified bucket does not exist") +} + +func mockListTemplatesFail(s3source *BucketDetailsRequest, b *AwsBroker) (*[]ServiceLastUpdate, error) { + return &[]ServiceLastUpdate{}, errors.New("ListTemplates failed") +} + +func mockListingUpdate(l *[]ServiceLastUpdate, c cache.Cache) error { + return nil +} + +func mockListingUpdateFail(l *[]ServiceLastUpdate, c cache.Cache) error { + return errors.New("ListingUpdate failed") +} + +func mockMetadataUpdate(l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, metadataUpdate MetadataUpdater) error { + return nil +} + +func mockMetadataUpdateFail(l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, metadataUpdate MetadataUpdater) error { + return errors.New("MetadataUpdate failed") +} + +func TestUpdateCatalog(t *testing.T) { + assert := assert.New(t) + options := new(TestCases) + options.GetTests("../../testcases/options.yaml") + var bl *AwsBroker + var bd *BucketDetailsRequest + for _, v := range *options { + bl, _ = NewAWSBroker(v, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bd = &BucketDetailsRequest{ + v.S3Bucket, + v.S3Key, + v.TemplateFilter, + } + } + + bl.db.DataStorePort = mockDataStore{} + + err := UpdateCatalog(bl.listingcache, bl.catalogcache, *bd, bl.s3svc, bl.db, *bl, mockListTemplates, mockListingUpdate, mockMetadataUpdate) + assert.Nil(err) + + err = UpdateCatalog(bl.listingcache, bl.catalogcache, *bd, bl.s3svc, bl.db, *bl, mockListTemplatesFailNoBucket, mockListingUpdate, mockMetadataUpdate) + assert.EqualError(err, "Cannot access S3 Bucket, either it does not exist or the IAM user/role the broker is configured to use has no access to the bucket") + + err = UpdateCatalog(bl.listingcache, bl.catalogcache, *bd, bl.s3svc, bl.db, *bl, mockListTemplatesFail, mockListingUpdate, mockMetadataUpdate) + assert.EqualError(err, "ListTemplates failed") + + err = UpdateCatalog(bl.listingcache, bl.catalogcache, *bd, bl.s3svc, bl.db, *bl, mockListTemplates, mockListingUpdateFail, mockMetadataUpdate) + assert.EqualError(err, "ListingUpdate failed") + + err = UpdateCatalog(bl.listingcache, bl.catalogcache, *bd, bl.s3svc, bl.db, *bl, mockListTemplates, mockListingUpdate, mockMetadataUpdateFail) + assert.EqualError(err, "MetadataUpdate failed") +} + +type mockS3 struct { + s3iface.S3API + GetObjectResp s3.GetObjectOutput +} + +func (m mockS3) GetObject(in *s3.GetObjectInput) (*s3.GetObjectOutput, error) { + return &m.GetObjectResp, nil +} + +type mockCfn 
struct { + cloudformationiface.CloudFormationAPI + DescribeStacksResponse cloudformation.DescribeStacksOutput + CreateStackResponse cloudformation.CreateStackOutput + DeleteStackResponse cloudformation.DeleteStackOutput +} + +func (m mockCfn) DescribeStacks(in *cloudformation.DescribeStacksInput) (*cloudformation.DescribeStacksOutput, error) { + return &m.DescribeStacksResponse, nil +} + +func (m mockCfn) CreateStack(in *cloudformation.CreateStackInput) (*cloudformation.CreateStackOutput, error) { + return &m.CreateStackResponse, nil +} + +func (m mockCfn) DeleteStack(in *cloudformation.DeleteStackInput) (*cloudformation.DeleteStackOutput, error) { + return &m.DeleteStackResponse, nil +} + +func TestMetadataUpdate(t *testing.T) { + assert := assert.New(t) + options := new(TestCases) + options.GetTests("../../testcases/options.yaml") + var bl *AwsBroker + var bd *BucketDetailsRequest + for _, v := range *options { + bl, _ = NewAWSBroker(v, mockGetAwsSession, mockClients, mockGetAccountId, mockUpdateCatalog, mockPollUpdate) + bd = &BucketDetailsRequest{ + v.S3Bucket, + v.S3Key, + v.TemplateFilter, + } + } + bl.db.DataStorePort = mockDataStore{} + + s3svc := S3Client{ + Client: mockS3{GetObjectResp: s3.GetObjectOutput{}}, + } + + // test "__LISTINGS__" not in cache + err := MetadataUpdate(bl.listingcache, bl.catalogcache, *bd, s3svc, bl.db, MetadataUpdate) + assert.EqualError(err, "not found") + + // test empty s3 body + var serviceUpdates []ServiceNeedsUpdate + serviceUpdates = append(serviceUpdates, ServiceNeedsUpdate{ + Name: "test-service", + Update: true, + }) + bl.listingcache.Set("__LISTINGS__", serviceUpdates) + err = MetadataUpdate(bl.listingcache, bl.catalogcache, *bd, s3svc, bl.db, MetadataUpdate) + assert.EqualError(err, "s3 object body missing") + + // test object not yaml + s3obj := s3.GetObjectOutput{Body: ioutil.NopCloser(strings.NewReader("test"))} + s3svc = S3Client{ + Client: mockS3{GetObjectResp: s3obj}, + } + err = MetadataUpdate(bl.listingcache, bl.catalogcache, *bd, s3svc, bl.db, MetadataUpdate) + assert.EqualError(err, "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `test` into map[string]interface {}") + + // TODO: test success and more failure scenarios +} + +func TestAssumeArnGeneration(t *testing.T) { + params := map[string]string{"target_role_name": "worker"} + accountId := "123456654321" + assert.Equal(t, generateRoleArn(params, accountId), "arn:aws:iam::123456654321:role/worker", "Validate role arn") + params["target_account_id"] = "000000000000" + assert.Equal(t, generateRoleArn(params, accountId), "arn:aws:iam::000000000000:role/worker", "Validate role arn") +} diff --git a/pkg/broker/cli.go b/pkg/broker/cli.go new file mode 100644 index 00000000..d8ce081a --- /dev/null +++ b/pkg/broker/cli.go @@ -0,0 +1,21 @@ +package broker + +import ( + "flag" +) + +// AddFlags adds defined flags to cli options +func AddFlags(o *Options) { + flag.StringVar(&o.KeyID, "keyId", "", "AWS IAM User Key ID to use, if left blank will attempt to use a role, if defined secret-key must also be defined.") + flag.StringVar(&o.SecretKey, "secretKey", "", "AWS IAM User Secret Key to use, if left blank will attempt to use a role, if defined key-id must also be defined.") + flag.StringVar(&o.Profile, "profile", "", "AWS credential profile to use, mutually exclusive to key-id and secret-key.") + flag.StringVar(&o.TableName, "tableName", "aws-service-broker", "DynamoDB table to use for persistent data storage.") + flag.StringVar(&o.Region, "region", "us-east-1", "AWS Region the 
DynamoDB table and S3 bucket are stored in.") + flag.StringVar(&o.S3Bucket, "s3Bucket", "awsservicebroker", "S3 bucket name where templates are stored.") + flag.StringVar(&o.S3Region, "s3Region", "us-east-1", "region S3 bucket is located in.") + flag.StringVar(&o.S3Key, "s3Key", "templates/latest/", "S3 key where templates are stored.") + flag.StringVar(&o.TemplateFilter, "templateFilter", "-main.yaml", "only process templates with the defined suffix.") + flag.StringVar(&o.CatalogPath, "catalogPath", "", "The path to the catalog.") + flag.StringVar(&o.BrokerID, "brokerId", "awsservicebroker", "An ID to use for partitioning broker data in DynamoDb. if multiple brokers are used in the same AWS account, this value must be unique per broker") + flag.BoolVar(&o.PrescribeOverrides, "prescribeOverrides", false, "Plan properties that are globally overridden will be removed from service plan parameters, this enforces their values for users and simplifies the list of required parameters. Common overrides are aws_access_key, aws_secret_key, region and VpcId") +} diff --git a/pkg/broker/cli_test.go b/pkg/broker/cli_test.go new file mode 100644 index 00000000..01a7874a --- /dev/null +++ b/pkg/broker/cli_test.go @@ -0,0 +1,10 @@ +package broker + +import ( + "testing" +) + +func TestAddFlags(t *testing.T) { + opts := Options{} + AddFlags(&opts) +} diff --git a/pkg/broker/constants.go b/pkg/broker/constants.go new file mode 100644 index 00000000..4aa466ad --- /dev/null +++ b/pkg/broker/constants.go @@ -0,0 +1,26 @@ +package broker + +import "time" + +// CacheTTL TTL for catalog cache record expiry +var CacheTTL = 1 * time.Hour + +var nonCfnParams = []string{ + "aws_access_key", + "aws_secret_key", + "region", + "target_role_name", + "target_account_id", +} + +const ( + bindParamRoleName = "RoleName" + bindParamScope = "Scope" +) + +const ( + cfnOutputPolicyArnPrefix = "PolicyArn" + cfnOutputSSMValuePrefix = "ssm:" + cfnOutputUserKeyID = "UserKeyId" + cfnOutputUserSecretKey = "UserSecretKey" +) diff --git a/pkg/broker/types.go b/pkg/broker/types.go new file mode 100644 index 00000000..86a939ce --- /dev/null +++ b/pkg/broker/types.go @@ -0,0 +1,135 @@ +package broker + +import ( + "sync" + "time" + + "github.com/awslabs/aws-service-broker/pkg/serviceinstance" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface" + "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/aws/aws-sdk-go/service/s3/s3iface" + "github.com/aws/aws-sdk-go/service/ssm" + "github.com/aws/aws-sdk-go/service/sts" + "github.com/aws/aws-sdk-go/service/sts/stsiface" + "github.com/koding/cache" + osb "github.com/pmorie/go-open-service-broker-client/v2" + "github.com/satori/go.uuid" +) + +// Options cli options +type Options struct { + CatalogPath string + KeyID string + SecretKey string + Profile string + TableName string + S3Bucket string + S3Region string + S3Key string + TemplateFilter string + Region string + BrokerID string + RoleArn string + PrescribeOverrides bool +} + +// BucketDetailsRequest describes the details required to fetch metadata and templates from s3 +type BucketDetailsRequest struct { + bucket string + prefix string + suffix string +} + +// AwsBroker holds configuration, caches and aws service clients +type AwsBroker struct { + sync.RWMutex + accountId string + keyid string + secretkey string + profile string + tablename string + s3bucket string + s3region string + s3key string + templatefilter string + region 
string + s3svc S3Client + ssmsvc ssm.SSM + catalogcache cache.Cache + listingcache cache.Cache + instances map[string]*serviceinstance.ServiceInstance + brokerid string + db Db + GetSession GetAwsSession + Clients AwsClients + prescribeOverrides bool + globalOverrides map[string]string +} + +// ServiceNeedsUpdate if Update == true the metadata should be refreshed from s3 +type ServiceNeedsUpdate struct { + Name string + Update bool +} + +// ServiceLastUpdate date when a service exposed by the broker was last updated from s3 +type ServiceLastUpdate struct { + Name string + Date time.Time +} + +// Db configuration +type Db struct { + Accountid string + Accountuuid uuid.UUID + Brokerid string + DataStorePort DataStore +} + +// DataStore port, any backend datastore must provide at least these interfaces +type DataStore interface { + PutServiceDefinition(sd osb.Service) error + GetParam(paramname string) (value string, err error) + PutParam(paramname string, paramvalue string) error + GetServiceDefinition(serviceuuid string) (*osb.Service, error) + GetServiceInstance(sid string) (*serviceinstance.ServiceInstance, error) + PutServiceInstance(si serviceinstance.ServiceInstance) error + GetServiceBinding(id string) (*serviceinstance.ServiceBinding, error) + PutServiceBinding(sb serviceinstance.ServiceBinding) error + DeleteServiceBinding(id string) error +} + +type GetAwsSession func(keyid string, secretkey string, region string, accountId string, profile string, params map[string]string) *session.Session + +type GetCfnClient func(sess *session.Session) CfnClient +type GetSsmClient func(sess *session.Session) *ssm.SSM +type GetS3Client func(sess *session.Session) S3Client +type GetDdbClient func(sess *session.Session) *dynamodb.DynamoDB +type GetStsClient func(sess *session.Session) *sts.STS +type GetIamClient func(sess *session.Session) *iam.IAM + +type AwsClients struct { + NewCfn GetCfnClient + NewSsm GetSsmClient + NewS3 GetS3Client + NewDdb GetDdbClient + NewSts GetStsClient + NewIam GetIamClient +} + +type S3Client struct { + Client s3iface.S3API +} + +type CfnClient struct { + Client cloudformationiface.CloudFormationAPI +} + +type GetCallerIder func(svc stsiface.STSAPI) (*sts.GetCallerIdentityOutput, error) +type UpdateCataloger func(listingcache cache.Cache, catalogcache cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, listTemplates ListTemplateser, listingUpdate ListingUpdater, metadataUpdate MetadataUpdater) error +type PollUpdater func(interval int, l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, bl AwsBroker, updateCatalog UpdateCataloger, listTemplates ListTemplateser) +type ListTemplateser func(s3source *BucketDetailsRequest, b *AwsBroker) (*[]ServiceLastUpdate, error) +type ListingUpdater func(l *[]ServiceLastUpdate, c cache.Cache) error +type MetadataUpdater func(l cache.Cache, c cache.Cache, bd BucketDetailsRequest, s3svc S3Client, db Db, metadataUpdate MetadataUpdater) error diff --git a/pkg/broker/util.go b/pkg/broker/util.go new file mode 100644 index 00000000..1e43a2b9 --- /dev/null +++ b/pkg/broker/util.go @@ -0,0 +1,424 @@ +package broker + +import ( + "fmt" + "net/http" + "os" + "regexp" + "strings" + "unicode" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" + "github.com/aws/aws-sdk-go/aws/ec2metadata" + "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/aws/aws-sdk-go/service/ssm" + 
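+	// ssmiface is imported so getCredentials can accept any SSM implementation,
+	// which allows tests to substitute a mock client.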
"github.com/aws/aws-sdk-go/service/ssm/ssmiface" + "github.com/golang/glog" + osb "github.com/pmorie/go-open-service-broker-client/v2" +) + +func getGlobalOverrides(brokerID string) map[string]string { + prefix := fmt.Sprintf("%s_all_all_all_", brokerID) + overrides := make(map[string]string) + for k, v := range GetOverridesFromEnv() { + if strings.HasPrefix(k, prefix) { + overrides[strings.TrimPrefix(k, prefix)] = v + } + } + return overrides +} + +func prescribeOverrides(b AwsBroker, services []osb.Service) []osb.Service { + if !b.prescribeOverrides { + return services + } else { + // TODO: Alot of duplication of code with ServiceDefinitionToOsb, should cleanup + for s, service := range services { + for p, plan := range service.Plans { + overrideKeys := make([]string, 0) + for o := range b.globalOverrides { + overrideKeys = append(overrideKeys, o) + } + glog.Infoln(overrideKeys) + schemas := map[string]map[string]interface{}{ + "create": plan.Schemas.ServiceInstance.Create.Parameters.(map[string]interface{}), + } + if plan.Schemas.ServiceInstance.Update != nil { + schemas["update"] = plan.Schemas.ServiceInstance.Update.Parameters.(map[string]interface{}) + } + for schemaName, schema := range schemas { + props := make(map[string]interface{}) + required := make([]string, 0) + for k, v := range schema { + switch k { + case "properties": + for pk, pv := range v.(map[string]interface{}) { + if !stringInSlice(pk, overrideKeys) { + props[pk] = pv + } + } + case "required": + glog.Infoln(v) + for _, r := range v.([]string) { + if !stringInSlice(r, overrideKeys) { + required = append(required, r) + } + } + } + } + if schemaName == "create" { + params := map[string]interface{}{ + "type": "object", + "properties": props, + "$schema": "http://json-schema.org/draft-06/schema#", + } + if len(required) > 0 { + params["required"] = required + } + plan.Schemas = &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{ + Create: &osb.InputParametersSchema{ + Parameters: params, + }, + }, + } + } else if schemaName == "update" { + params := map[string]interface{}{ + "type": "object", + "properties": props, + "$schema": "http://json-schema.org/draft-06/schema#", + } + if len(required) > 0 { + params["required"] = required + } + if len(props) > 0 { + plan.Schemas.ServiceInstance.Update = &osb.InputParametersSchema{ + Parameters: params, + } + } + } + } + services[s].Plans[p] = plan + } + } + return services + } +} + +func GetOverridesFromEnv() map[string]string { + var Overrides = make(map[string]string) + + for _, item := range os.Environ() { + envvar := strings.Split(item, "=") + if strings.HasPrefix(envvar[0], "PARAM_OVERRIDE_") { + key := strings.TrimPrefix(envvar[0], "PARAM_OVERRIDE_") + if envvar[1] != "" { + Overrides[key] = envvar[1] + glog.V(10).Infof("%q=%q\n", key, envvar[1]) + } + } + } + return Overrides +} + +func stringInSlice(a string, list []string) bool { + for _, b := range list { + if b == a { + return true + } + } + return false +} + +// https://gist.github.com/elwinar/14e1e897fdbe4d3432e1 +func toScreamingSnakeCase(s string) string { + in := []rune(s) + + var out []rune + for i, r := range in { + if i > 0 && i < len(in)-1 && // If this is not the first or last rune + unicode.IsUpper(r) && (unicode.IsLower(in[i-1]) || unicode.IsLower(in[i+1])) { // And it's an upper preceded or followed by a lower + out = append(out, '_') + } + out = append(out, unicode.ToUpper(r)) + } + + return string(out) +} + +func getOverrides(brokerid string, params []string, space string, service string, 
cluster string) (overrides map[string]string) { + overridesEnv := GetOverridesFromEnv() + + var services []string + var namespaces []string + var clusters []string + if service != "all" { + services = append(services, "all") + } + if space != "all" { + namespaces = append(namespaces, "all") + } + if cluster != "all" { + clusters = append(clusters, "all") + } + overrides = make(map[string]string) + services = append(services, service) + namespaces = append(namespaces, space) + clusters = append(clusters, cluster) + for _, c := range clusters { + for _, n := range namespaces { + for _, s := range services { + for _, p := range params { + paramname := brokerid + "_" + c + "_" + n + "_" + s + "_" + p + // removing getting overrides from dynamo for the time being + /* + v, err := b.db.DataStorePort.GetParam(paramname) + if err != nil { + glog.Infof("Unable to fetch parameter override for %#+v", paramname) + glog.Infoln(err.Error()) + } + if v != "" { + overrides[p] = v + } + */ + if _, ok := overridesEnv[paramname]; ok { + overrides[p] = overridesEnv[paramname] + } + } + } + } + } + glog.Infof("Overrides: '%+v'.", overrides) + return overrides +} + +// Build aws credentials using global or override keys, or the credential chain +func AwsCredentialsGetter(keyid string, secretkey string, profile string, params map[string]string, client *ec2metadata.EC2Metadata) credentials.Credentials { + if _, ok := params["aws_access_key"]; ok { + keyid = params["aws_access_key"] + glog.V(10).Infof("Using override credentials with keyid %q\n", keyid) + } + if _, ok := params["aws_secret_key"]; ok { + secretkey = params["aws_secret_key"] + } + if keyid != "" && secretkey != "" { + glog.Infof("Found 'aws_access_key' and 'aws_secret_key' in params, using credentials keyid '%q'.", keyid) + return *credentials.NewStaticCredentials(keyid, secretkey, "") + } else if profile != "" { + glog.Infof("Profile specified, using profile '%q'.", profile) + return *credentials.NewChainCredentials([]credentials.Provider{&credentials.SharedCredentialsProvider{Profile: profile}}) + } + glog.Infof("Did not find 'aws_access_key' and 'aws_secret_key' in params, using default chain.") + return *credentials.NewChainCredentials( + []credentials.Provider{ + &credentials.EnvProvider{}, + &credentials.SharedCredentialsProvider{}, + &ec2rolecreds.EC2RoleProvider{Client: client}, + }) +} + +// add trailing / if needed +func AddTrailingSlash(s string) string { + if strings.HasSuffix(s, "/") == false { + s = s + "/" + } + return s +} + +func generateRoleArn(params map[string]string, currentAccountId string) string { + targetRoleName := params["target_role_name"] + + if _, ok := params["target_account_id"]; ok { + targetAccountId := params["target_account_id"] + + glog.Infof("Params 'target_account_id' present in params, assuming role in target account '%s'.", targetAccountId) + return fmtArn(targetAccountId, targetRoleName) + } + + glog.Infof("Params 'target_account_id' not present in params, assuming role in current account '%s'.", currentAccountId) + return fmtArn(currentAccountId, targetRoleName) +} + +// getStackName returns the stack name for a service instance. A stack name can +// contain only alphanumeric characters (case sensitive) and hyphens. It must +// start with an alphabetic character and cannot be longer than 128 characters. 
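+// As an illustration (hypothetical values), a service named "rdsmysql" with
+// instance ID "1234-abcd" yields "aws-service-broker-rdsmysql-1234-abcd"; any
+// character outside [a-zA-Z0-9-] is replaced with a hyphen and results longer
+// than 128 characters are truncated.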
+func getStackName(serviceName, instanceID string) string { + s := fmt.Sprintf("aws-service-broker-%s-%s", serviceName, instanceID) + s = regexp.MustCompile("[^a-zA-Z0-9-]").ReplaceAllString(s, "-") + if len(s) > 128 { + s = s[0:128] + } + return s +} + +func fmtArn(accountId, roleName string) string { + return fmt.Sprintf("arn:aws:iam::%s:role/%s", accountId, roleName) +} + +func toCFNParams(params map[string]string) []*cloudformation.Parameter { + var cfnParams []*cloudformation.Parameter + for k, v := range params { + if stringInSlice(k, nonCfnParams) { + continue + } + cfnParams = append(cfnParams, &cloudformation.Parameter{ + ParameterKey: aws.String(k), + ParameterValue: aws.String(v), + }) + } + return cfnParams +} + +func newAsyncError() osb.HTTPStatusCodeError { + return newHTTPStatusCodeError(http.StatusUnprocessableEntity, osb.AsyncErrorMessage, osb.AsyncErrorDescription) +} + +func newHTTPStatusCodeError(statusCode int, msg, desc string) osb.HTTPStatusCodeError { + err := osb.HTTPStatusCodeError{ + StatusCode: statusCode, + } + if msg != "" { + err.ErrorMessage = &msg + } + if desc != "" { + err.Description = &desc + } + glog.Error(err) + return err +} + +func getCluster(context map[string]interface{}) string { + switch context["platform"] { + case osb.PlatformCloudFoundry: + return strings.Replace(context["organization_guid"].(string), "-", "", -1) + case osb.PlatformKubernetes: + return context["clusterid"].(string) + default: + return "unknown" + } +} + +func getNamespace(context map[string]interface{}) string { + switch context["platform"] { + case osb.PlatformCloudFoundry: + return strings.Replace(context["space_guid"].(string), "-", "", -1) + case osb.PlatformKubernetes: + return context["namespace"].(string) + default: + return "unknown" + } +} + +func getPlan(service *osb.Service, planID string) *osb.Plan { + for _, p := range service.Plans { + if p.ID == planID { + return &p + } + } + return nil +} + +func getPlanDefaults(plan *osb.Plan) map[string]string { + defaults := make(map[string]string) + for k, v := range plan.Schemas.ServiceInstance.Create.Parameters.(map[string]interface{})["properties"].(map[string]interface{}) { + if d, ok := v.(map[string]interface{})["default"]; ok { + defaults[k] = paramValue(d) + } + } + return defaults +} + +func getAvailableParams(plan *osb.Plan) (params []string) { + properties := plan.Schemas.ServiceInstance.Create.Parameters.(map[string]interface{})["properties"] + if properties != nil { + for k := range properties.(map[string]interface{}) { + params = append(params, k) + } + } + return +} + +func getUpdatableParams(plan *osb.Plan) (params []string) { + properties := plan.Schemas.ServiceInstance.Update.Parameters.(map[string]interface{})["properties"] + if properties != nil { + for k := range properties.(map[string]interface{}) { + params = append(params, k) + } + } + return +} + +func getRequiredParams(plan *osb.Plan) (params []string) { + required := plan.Schemas.ServiceInstance.Create.Parameters.(map[string]interface{})["required"] + if required != nil { + for _, p := range required.([]interface{}) { + params = append(params, p.(string)) + } + } + return +} + +func paramValue(v interface{}) string { + if v == nil { + return "" + } + return fmt.Sprintf("%v", v) +} + +func getCredentials(service *osb.Service, outputs []*cloudformation.Output, ssmSvc ssmiface.SSMAPI) (map[string]interface{}, error) { + credentials := make(map[string]interface{}) + var ssmValues []string + + for _, o := range outputs { + if 
strings.HasPrefix(aws.StringValue(o.OutputKey), cfnOutputPolicyArnPrefix) { + continue + } + + // The output keys "UserKeyId" and "UserSecretKey" require special handling for backward compatibility :/ + if aws.StringValue(o.OutputKey) == cfnOutputUserKeyID || aws.StringValue(o.OutputKey) == cfnOutputUserSecretKey { + k := fmt.Sprintf("%s_%s", strings.ToUpper(service.Name), toScreamingSnakeCase(aws.StringValue(o.OutputKey))) + credentials[k] = aws.StringValue(o.OutputValue) + ssmValues = append(ssmValues, aws.StringValue(o.OutputValue)) + } else { + credentials[toScreamingSnakeCase(aws.StringValue(o.OutputKey))] = aws.StringValue(o.OutputValue) + // If the output value starts with "ssm:", we'll get the actual value from SSM + if strings.HasPrefix(aws.StringValue(o.OutputValue), cfnOutputSSMValuePrefix) { + ssmValues = append(ssmValues, strings.TrimPrefix(aws.StringValue(o.OutputValue), cfnOutputSSMValuePrefix)) + } + } + } + + if len(ssmValues) > 0 { + resp, err := ssmSvc.GetParameters(&ssm.GetParametersInput{ + Names: aws.StringSlice(ssmValues), + WithDecryption: aws.Bool(true), + }) + if err != nil { + return nil, err + } else if len(resp.InvalidParameters) > 0 { + return nil, fmt.Errorf("invalid parameters: %v", resp.InvalidParameters) + } + + for _, p := range resp.Parameters { + for k, v := range credentials { + if strings.TrimPrefix(v.(string), cfnOutputSSMValuePrefix) == aws.StringValue(p.Name) { + credentials[k] = aws.StringValue(p.Value) + } + } + } + } + + return credentials, nil +} + +func getPolicyArn(outputs []*cloudformation.Output, scope string) (string, error) { + outputKey := fmt.Sprintf("%s%s", cfnOutputPolicyArnPrefix, scope) + for _, o := range outputs { + if strings.EqualFold(aws.StringValue(o.OutputKey), outputKey) { + return aws.StringValue(o.OutputValue), nil + } + } + return "", fmt.Errorf("output not found: %s", outputKey) +} diff --git a/pkg/broker/util_test.go b/pkg/broker/util_test.go new file mode 100644 index 00000000..39fb05b9 --- /dev/null +++ b/pkg/broker/util_test.go @@ -0,0 +1,339 @@ +package broker + +import ( + "os" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" + "github.com/aws/aws-sdk-go/aws/ec2metadata" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/aws/aws-sdk-go/service/ssm" + "github.com/aws/aws-sdk-go/service/ssm/ssmiface" + osb "github.com/pmorie/go-open-service-broker-client/v2" + "github.com/stretchr/testify/assert" +) + +func clearOverrides() { + // TODO: this breaks parallel testing, should mock out os.*Env functions + for _, item := range os.Environ() { + envvar := strings.Split(item, "=") + if strings.HasPrefix(envvar[0], "PARAM_OVERRIDE_") { + os.Unsetenv(envvar[0]) + } + } +} + +func TestPrescribeOverrides(t *testing.T) { + assertor := assert.New(t) + + services := []osb.Service{ + {ID: "test", Name: "test", Description: "test", Plans: []osb.Plan{ + {ID: "testplan", Name: "testplan", Description: "testplan", Schemas: &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + "override_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param", "override_param"}, + }, + }}, + }}, + }}, + } 
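+	// The fixture above declares both "req_param" and "override_param"; the cases
+	// below verify that the global override defined next is stripped from the
+	// create/update schemas only when prescribeOverrides is enabled.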
+ + g := map[string]string{"override_param": "overridden"} + + msg := "params should not be modified when prescribeOverrides is false" + psvcs := prescribeOverrides(AwsBroker{brokerid: "awsservicebroker", prescribeOverrides: false, globalOverrides: g}, services) + expected := []osb.Service{ + {ID: "test", Name: "test", Description: "test", Plans: []osb.Plan{ + {ID: "testplan", Name: "testplan", Description: "testplan", Schemas: &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + "override_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param", "override_param"}, + }, + }}, + }}, + }}, + } + assertor.Equal(expected, psvcs, msg) + + msg = "override_param should be removed when prescribeOverrides is true" + psvcs = prescribeOverrides(AwsBroker{brokerid: "awsservicebroker", prescribeOverrides: true, globalOverrides: g}, services) + expected = []osb.Service{ + {ID: "test", Name: "test", Description: "test", Plans: []osb.Plan{ + {ID: "testplan", Name: "testplan", Description: "testplan", Schemas: &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param"}, + }, + }}, + }}, + }}, + } + assertor.Equal(expected, psvcs, msg) + + services = []osb.Service{ + {ID: "test", Name: "test", Description: "test", Plans: []osb.Plan{ + {ID: "testplan", Name: "testplan", Description: "testplan", Schemas: &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{ + Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + "override_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param", "override_param"}, + }, + }, + Update: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + "override_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param", "override_param"}, + }, + }, + }, + }}, + }}, + } + + msg = "override_param should be removed from Update params too when prescribeOverrides is true" + psvcs = prescribeOverrides(AwsBroker{brokerid: "awsservicebroker", prescribeOverrides: true, globalOverrides: g}, services) + expected = []osb.Service{ + {ID: "test", Name: "test", Description: "test", Plans: []osb.Plan{ + {ID: "testplan", Name: "testplan", Description: "testplan", Schemas: &osb.Schemas{ + ServiceInstance: &osb.ServiceInstanceSchema{ + Create: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", "properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param"}, + }, + }, + Update: &osb.InputParametersSchema{ + Parameters: map[string]interface{}{"type": "object", 
"properties": map[string]interface{}{ + "req_param": map[string]interface{}{"type": "string"}, + }, + "$schema": "http://json-schema.org/draft-06/schema#", + "required": []string{"req_param"}, + }, + }, + }, + }}, + }}, + } + assertor.Equal(expected, psvcs, msg) + clearOverrides() +} + +func TestGetOverridesFromEnv(t *testing.T) { + assertor := assert.New(t) + + clearOverrides() + + msg := "should return empty map if there are no overrides set" + output := GetOverridesFromEnv() + assertor.Equal(make(map[string]string), output, msg) + + msg = "should return map with all the found overrides, excluding any environment variables not prefixed with PARAM_OVERRIDE_" + os.Setenv("PARAM_OVERRIDE_awsservicebroker_all_all_all_test_param1", "testval1") + os.Setenv("PARAM_OVERRIDE_awsservicebroker_all_all_all_test_param2", "testval2") + os.Setenv("NOTMATCHPARAM_OVERRIDE_awsservicebroker_all_all_all_test_param3", "testval3") + output = GetOverridesFromEnv() + assertor.Equal(map[string]string{ + "awsservicebroker_all_all_all_test_param1": "testval1", + "awsservicebroker_all_all_all_test_param2": "testval2", + }, + output, + msg, + ) + clearOverrides() +} + +func TestStringInSlice(t *testing.T) { + assertor := assert.New(t) + + assertor.Equal(true, stringInSlice("present", []string{"somestr", "present", "anotherstr"}), "should return true") + + assertor.Equal(false, stringInSlice("notpresent", []string{"somestr", "present", "anotherstr"}), "should return false") +} + +func TestToScreamingSnakeCase(t *testing.T) { + assertor := assert.New(t) + + assertor.Equal("SCREAMING_SNAKE", toScreamingSnakeCase("ScreamingSnake"), "should convert camel to snake") + + assertor.Equal("AWS_TEST", toScreamingSnakeCase("AWSTest"), "Shouldn't put an underscore between consecutive caps") + +} + +func TestGetOverrides(t *testing.T) { + assertor := assert.New(t) + + clearOverrides() + brokerid, space, service, cluster := "awsservicebroker", "all", "all", "all" + params := []string{"test_param1", "test_param2"} + + output := getOverrides(brokerid, params, space, service, cluster) + assertor.Equal(make(map[string]string), output, "should return an empty slice if there's no matching overrides") + + os.Setenv("PARAM_OVERRIDE_awsservicebroker_all_all_all_test_param1", "testval1") + output = getOverrides(brokerid, params, space, service, cluster) + assertor.Equal(map[string]string{"test_param1": "testval1"}, output, "should return only items with matching overrides") + + os.Setenv("PARAM_OVERRIDE_awsservicebroker_all_all_notrightservice_test_param2", "testval2") + output = getOverrides(brokerid, params, space, service, cluster) + assertor.Equal(map[string]string{"test_param1": "testval1"}, output, "should return only items with matching overrides") + + brokerid, space, service, cluster = "awsservicebroker", "should", "not", "match" + output = getOverrides(brokerid, params, space, service, cluster) + assertor.Equal(map[string]string{"test_param1": "testval1"}, output, "should return only items with matching overrides") + + clearOverrides() + +} + +func TestAwsCredentialsGetter(t *testing.T) { + assertor := assert.New(t) + + keyid, secretkey, profile := "", "", "" + params := make(map[string]string) + client := ec2metadata.New(session.Must(session.NewSession())) + actual := AwsCredentialsGetter(keyid, secretkey, profile, params, client) + expected := *credentials.NewChainCredentials( + []credentials.Provider{ + &credentials.EnvProvider{}, + &credentials.SharedCredentialsProvider{}, + &ec2rolecreds.EC2RoleProvider{Client: client}, + }) 
+ assertor.Equal(expected, actual, "should return credential chain creds") + + keyid, secretkey, profile = "testid", "testkey", "" + expected = *credentials.NewStaticCredentials(keyid, secretkey, "") + actual = AwsCredentialsGetter(keyid, secretkey, profile, params, client) + assertor.Equal(expected, actual, "should return static creds") + + keyid, secretkey, profile = "", "", "test" + expected = *credentials.NewChainCredentials([]credentials.Provider{&credentials.SharedCredentialsProvider{Profile: profile}}) + actual = AwsCredentialsGetter(keyid, secretkey, profile, params, client) + assertor.Equal(expected, actual, "should return shared creds") + + keyid, secretkey, profile = "", "", "" + params = map[string]string{"aws_access_key": "testKeyId", "aws_secret_key": "testSecretKey"} + expected = *credentials.NewStaticCredentials("testKeyId", "testSecretKey", "") + actual = AwsCredentialsGetter(keyid, secretkey, profile, params, client) + assertor.Equal(expected, actual, "should return static creds") +} + +func TestToCFNParams(t *testing.T) { + assertor := assert.New(t) + + params := map[string]string{"pkey": "pval"} + actual := toCFNParams(params) + expected := []*cloudformation.Parameter{ + { + ParameterKey: aws.String("pkey"), + ParameterValue: aws.String("pval"), + }, + } + assertor.Equal(expected, actual, "should return input marshalled into []*cloudformation.Parameter ") +} + +func TestNewHTTPStatusCodeError(t *testing.T) { + assertor := assert.New(t) + + code, msg, desc := 499, "testmsg", "test desc" + expected := osb.HTTPStatusCodeError{StatusCode: code, ErrorMessage: &msg, Description: &desc} + actual := newHTTPStatusCodeError(code, msg, desc) + assertor.Equal(expected, actual, "should return a HTTPStatusCodeError with code, msg and desc matching the input") +} + +func TestGetCluster(t *testing.T) { + assertor := assert.New(t) + + context := map[string]interface{}{ + "platform": osb.PlatformCloudFoundry, + "organization_guid": "test-test", + } + assertor.Equal("testtest", getCluster(context), "should strip dashes from cf guid") + + context = map[string]interface{}{ + "platform": osb.PlatformKubernetes, + "clusterid": "testtest", + } + assertor.Equal("testtest", getCluster(context), "should return clusterid from context") + + context = map[string]interface{}{ + "platform": "other", + "organization_guid": "testtest", + } + assertor.Equal("unknown", getCluster(context), "should return unknown") +} + +type mockSsmGetParameters struct { + ssmiface.SSMAPI + Resp ssm.GetParametersOutput +} + +func (mockSsmGetParameters) GetParameters(in *ssm.GetParametersInput) (*ssm.GetParametersOutput, error) { + params := make([]*ssm.Parameter, 0) + for _, n := range in.Names { + params = append(params, &ssm.Parameter{ + Name: aws.String(*n), + Value: aws.String("val-" + *n), + }) + } + return &ssm.GetParametersOutput{Parameters: params}, nil +} + +func TestGetCredentials(t *testing.T) { + assertor := assert.New(t) + + service := osb.Service{ + Name: "testsvc", + } + outputs := []*cloudformation.Output{ + { + OutputKey: aws.String(cfnOutputPolicyArnPrefix + "Test"), + OutputValue: aws.String("testpolicyval"), + }, + { + OutputKey: aws.String(cfnOutputUserKeyID), + OutputValue: aws.String("testkeyval"), + }, + { + OutputKey: aws.String("Test"), + OutputValue: aws.String("testasisval"), + }, + { + OutputKey: aws.String("TestSsmVal"), + OutputValue: aws.String(cfnOutputSSMValuePrefix + "testssmval"), + }, + } + ssmSvc := mockSsmGetParameters{} + + expected := map[string]interface{}{ + "TEST": 
"testasisval", + "TESTSVC_USER_KEY_ID": "val-testkeyval", + "TEST_SSM_VAL": "val-testssmval", + } + actual, err := getCredentials(&service, outputs, ssmSvc) + assertor.Equal(nil, err, "err should be nil") + assertor.Equal(expected, actual, "not getting expected output") +} diff --git a/pkg/dynamodbadapter/adapter.go b/pkg/dynamodbadapter/adapter.go new file mode 100644 index 00000000..5af94d0d --- /dev/null +++ b/pkg/dynamodbadapter/adapter.go @@ -0,0 +1,240 @@ +package dynamodbadapter + +import ( + "fmt" + + "github.com/awslabs/aws-service-broker/pkg/serviceinstance" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute" + "github.com/golang/glog" + osb "github.com/pmorie/go-open-service-broker-client/v2" + uuid "github.com/satori/go.uuid" +) + +// Item types +const ( + itemTypeParameter = "parameter" + itemTypeService = "service" + itemTypeServiceBinding = "servicebinding" + itemTypeServiceInstance = "serviceinstance" +) + +// DynamoDB implementation of DataStore Adapter +type DdbDataStore struct { + Accountid string + Accountuuid uuid.UUID + Brokerid string + Region string + Ddb dynamodb.DynamoDB + Tablename string +} + +// PutServiceDefinition push catalog service definition to DynamoDb +func (db DdbDataStore) PutServiceDefinition(sd osb.Service) error { + glog.Infof("putting service definition %q into dynamdb", sd.Name) + serviceid := uuid.NewV5(db.Accountuuid, sd.Name) + si, err := dynamodbattribute.Marshal(sd) + if err != nil { + glog.Errorln(err) + return err + } + putInput := dynamodb.PutItemInput{ + TableName: aws.String(db.Tablename), + Item: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(serviceid.String())}, + "userid": {S: aws.String(db.Accountuuid.String())}, + "serviceid": {S: aws.String(serviceid.String())}, + "servicename": {S: aws.String(sd.Name)}, + "service": si, + "type": {S: aws.String(itemTypeService)}, + }, + } + _, err = db.Ddb.PutItem(&putInput) + if err != nil { + glog.Infoln(putInput) + glog.Errorln(err) + return err + } + glog.Infof("done putting service definition %q into dynamdb", sd.Name) + return nil +} + +// Param stores a parameter value +type Param struct { + Value string `json:"value"` +} + +// GetParam fetches parameter from Dynamo +func (db DdbDataStore) GetParam(paramname string) (value string, err error) { + paramuuid := uuid.NewV5(db.Accountuuid, paramname).String() + getInput := dynamodb.GetItemInput{ + TableName: aws.String(db.Tablename), + Key: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(paramuuid)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + }, + } + result, err := db.Ddb.GetItem(&getInput) + if err != nil { + return "", err + } + if len(result.Item) == 0 { + return "", fmt.Errorf("parameter does not exist") + } + + item := Param{} + glog.Infoln("unmarshalling item") + glog.Infoln(result.Item) + dynamodbattribute.UnmarshalMap(result.Item, &item) + if err != nil { + return "", err + } + if item.Value == "" { + return "", fmt.Errorf("could not unmarshal service definition") + } + return item.Value, nil +} + +// PutParam puts parameters into Dynamo +func (db DdbDataStore) PutParam(paramname string, paramvalue string) error { + paramuuid := uuid.NewV5(db.Accountuuid, paramname).String() + putInput := dynamodb.PutItemInput{ + TableName: aws.String(db.Tablename), + Item: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(paramuuid)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + "value": {S: 
aws.String(paramvalue)}, + "type": {S: aws.String(itemTypeParameter)}, + }, + } + _, err := db.Ddb.PutItem(&putInput) + if err != nil { + return err + } + return nil +} + +// ServiceItem used to unmarshal catalog entries from DynamoDb +type ServiceItem struct { + ID string `json:"id"` + Userid string `json:"userid"` + Service osb.Service `json:"service"` + Serviceid string `json:"serviceid"` + Servicename string `json:"servicename"` +} + +// GetServiceDefinition fetches given catalog service definition from Dynamo +func (db DdbDataStore) GetServiceDefinition(serviceuuid string) (*osb.Service, error) { + resp, err := db.Ddb.GetItem(&dynamodb.GetItemInput{ + Key: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(serviceuuid)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + }, + TableName: aws.String(db.Tablename), + }) + if err != nil { + return nil, err + } else if len(resp.Item) == 0 { + return nil, nil + } + + var item ServiceItem + err = dynamodbattribute.UnmarshalMap(resp.Item, &item) + return &item.Service, err +} + +// GetServiceInstance fetches given service instance from Dynamo +func (db DdbDataStore) GetServiceInstance(sid string) (*serviceinstance.ServiceInstance, error) { + resp, err := db.Ddb.GetItem(&dynamodb.GetItemInput{ + ConsistentRead: aws.Bool(true), // Ensure we have the latest version of the service instance + Key: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(sid)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + }, + ProjectionExpression: aws.String("serviceinstance"), + TableName: aws.String(db.Tablename), + }) + if err != nil { + return nil, err + } else if len(resp.Item) == 0 { + return nil, nil + } + + var si serviceinstance.ServiceInstance + err = dynamodbattribute.Unmarshal(resp.Item["serviceinstance"], &si) + return &si, err +} + +// PutServiceInstance stores given service instance in Dynamo +func (db DdbDataStore) PutServiceInstance(si serviceinstance.ServiceInstance) error { + msi, err := dynamodbattribute.Marshal(si) + if err != nil { + return err + } + putInput := dynamodb.PutItemInput{ + TableName: aws.String(db.Tablename), + Item: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(si.ID)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + "serviceinstance": msi, + "type": {S: aws.String(itemTypeServiceInstance)}, + }, + } + _, err = db.Ddb.PutItem(&putInput) + if err != nil { + return err + } + return nil +} + +// GetServiceBinding returns the specified service binding. +func (db DdbDataStore) GetServiceBinding(id string) (*serviceinstance.ServiceBinding, error) { + resp, err := db.Ddb.GetItem(&dynamodb.GetItemInput{ + Key: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(id)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + }, + ProjectionExpression: aws.String("servicebinding"), + TableName: aws.String(db.Tablename), + }) + if err != nil { + return nil, err + } else if len(resp.Item) == 0 { + return nil, nil + } + + var sb serviceinstance.ServiceBinding + err = dynamodbattribute.Unmarshal(resp.Item["servicebinding"], &sb) + return &sb, err +} + +// PutServiceBinding stores the service binding. 
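+// As with the other Put* methods, the item is keyed on the binding ID plus the
+// broker account UUID ("userid") and tagged with the "servicebinding" item type.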
+func (db DdbDataStore) PutServiceBinding(sb serviceinstance.ServiceBinding) error { + msb, err := dynamodbattribute.Marshal(sb) + if err != nil { + return err + } + _, err = db.Ddb.PutItem(&dynamodb.PutItemInput{ + Item: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(sb.ID)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + "servicebinding": msb, + "type": {S: aws.String(itemTypeServiceBinding)}, + }, + TableName: aws.String(db.Tablename), + }) + return err +} + +// DeleteServiceBinding deletes the service binding. +func (db DdbDataStore) DeleteServiceBinding(id string) error { + _, err := db.Ddb.DeleteItem(&dynamodb.DeleteItemInput{ + Key: map[string]*dynamodb.AttributeValue{ + "id": {S: aws.String(id)}, + "userid": {S: aws.String(db.Accountuuid.String())}, + }, + TableName: aws.String(db.Tablename), + }) + return err +} diff --git a/pkg/serviceinstance/serviceinstance.go b/pkg/serviceinstance/serviceinstance.go new file mode 100644 index 00000000..a1739121 --- /dev/null +++ b/pkg/serviceinstance/serviceinstance.go @@ -0,0 +1,33 @@ +package serviceinstance + +import "reflect" + +// ServiceInstance provides details of a service instance +type ServiceInstance struct { + ID string + ServiceID string + PlanID string + Params map[string]string + StackID string +} + +func (i *ServiceInstance) Match(other *ServiceInstance) bool { + return reflect.DeepEqual(i, other) +} + +// ServiceBinding represents a service binding. +type ServiceBinding struct { + ID string + InstanceID string + PolicyArn string + RoleName string + Scope string +} + +// Match returns true if the other service binding has the same attributes. +func (b *ServiceBinding) Match(other *ServiceBinding) bool { + return b.ID == other.ID && + b.InstanceID == other.InstanceID && + b.RoleName == other.RoleName && + b.Scope == other.Scope +} diff --git a/scripts/build_and_push_image.sh b/scripts/build_and_push_image.sh new file mode 100755 index 00000000..39a9b73a --- /dev/null +++ b/scripts/build_and_push_image.sh @@ -0,0 +1,37 @@ +#!/bin/bash +# +# Build and push aws-service-broker to ECR repository +# +name=$1 +version=$2 + +region=us-west-2 +path=$GOPATH/src/github.com/awslabs/aws-service-broker + +function help { + echo "USAGE: $0 NAME VERSION" +} + +if [ "$name" == "" ]; then + help + exit 1 +fi + +if [ "$version" == "" ]; then + help + exit 1 +fi + +set -e + +cd $path + +account_id=`aws sts get-caller-identity |jq -r .Account` +url=$account_id.dkr.ecr.$region.amazonaws.com + +`aws ecr get-login --no-include-email --region $region` +docker build . 
-t $name:$version +docker tag $name:$version $url/$name:$version +docker push $url/$name:$version + +cd - diff --git a/scripts/start_broker.sh b/scripts/start_broker.sh new file mode 100755 index 00000000..db69a5b0 --- /dev/null +++ b/scripts/start_broker.sh @@ -0,0 +1,17 @@ +#!/usr/bin/env bash +# +# Script to start the broker run by Dockerfile +# +aws-servicebroker \ + -insecure \ + -alsologtostderr \ + -region ${AWS_DEFAULT_REGION:=us-west-2} \ + -s3Bucket ${S3_BUCKET:=awsservicebrokeralpha} \ + -s3Key ${BUCKET_PREFIX:=pcf/templates} \ + -s3Region ${BUCKET_REGION:=us-west-2} \ + -port ${PORT:=3199} \ + -tableName ${DYNAMO_TABLE:=awssb} \ + -enableBasicAuth \ + -basicAuthUser ${SECURITY_USER_NAME:=admin} \ + -basicAuthPass ${SECURITY_USER_PASSWORD} \ + -v=${VERBOSE_LEVEL:=4} diff --git a/setup/aws-service-broker-worker.json b/setup/aws-service-broker-worker.json new file mode 100644 index 00000000..aa8af937 --- /dev/null +++ b/setup/aws-service-broker-worker.json @@ -0,0 +1,93 @@ +{ + "AWSTemplateFormatVersion": "2010-09-09", + "Description": "Role and policy to allow for aws-service-broker to assume and create roles in this account", + "Parameters": { + "ServiceBrokerAccountId": { + "Type": "String", + "Description": "12 digit AWS account id (no spaces) of account where the AWS Service Broker will run." + }, + "RoleName": { + "Type": "String", + "Description": "Name of role.", + "Default": "aws-service-broker-worker" + } + }, + "Resources": { + "AwsServiceBrokerWorkerRole": { + "Type": "AWS::IAM::Role", + "Properties": { + "RoleName": { + "Ref": "RoleName" + }, + "AssumeRolePolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": { + "Ref": "ServiceBrokerAccountId" + } + }, + "Action": [ + "sts:AssumeRole" + ] + } + ] + } + } + }, + "AwsServiceBrokerWorkerPolicy": { + "Type": "AWS::IAM::Policy", + "DependsOn": "AwsServiceBrokerWorkerRole", + "Properties": { + "PolicyName": "aws-service-broker-worker", + "Roles": [ + { + "Ref": "RoleName" + } + ], + "PolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "NotAction": [ + "iam:*", + "organizations:*" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "iam:Add*", + "iam:Attach*", + "iam:Create*", + "iam:Delete*", + "iam:Detach*", + "iam:Get*", + "iam:List*", + "iam:PassRole", + "iam:Put*", + "iam:Remove*", + "iam:Update*", + "iam:Upload*", + "organizations:DescribeOrganization" + ], + "Resource": "*" + } + ] + } + } + } + }, + "Outputs": { + "RoleArn": { + "Description": "ARN of the role.", + "Value": { + "Fn::GetAtt": ["AwsServiceBrokerWorkerRole", "Arn"] + } + } + } +} diff --git a/testcases/options.yaml b/testcases/options.yaml new file mode 100644 index 00000000..68874a07 --- /dev/null +++ b/testcases/options.yaml @@ -0,0 +1,8 @@ +Minimal: + tablename: awssb + s3bucket: awsservicebrokeralpha + s3region: us-west-2 + s3key: templates/pcf + templatefilter: -main.template + brokerid: aws-service-broker + region: us-east-1 diff --git a/vendor/github.com/abbot/go-http-auth/LICENSE b/vendor/github.com/abbot/go-http-auth/LICENSE new file mode 100644 index 00000000..e454a525 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/LICENSE @@ -0,0 +1,178 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + diff --git a/vendor/github.com/abbot/go-http-auth/auth.go b/vendor/github.com/abbot/go-http-auth/auth.go new file mode 100644 index 00000000..05ded165 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/auth.go @@ -0,0 +1,109 @@ +// Package auth is an implementation of HTTP Basic and HTTP Digest authentication. +package auth + +import ( + "net/http" + + "golang.org/x/net/context" +) + +/* + Request handlers must take AuthenticatedRequest instead of http.Request +*/ +type AuthenticatedRequest struct { + http.Request + /* + Authenticated user name. Current API implies that Username is + never empty, which means that authentication is always done + before calling the request handler. + */ + Username string +} + +/* + AuthenticatedHandlerFunc is like http.HandlerFunc, but takes + AuthenticatedRequest instead of http.Request +*/ +type AuthenticatedHandlerFunc func(http.ResponseWriter, *AuthenticatedRequest) + +/* + Authenticator wraps an AuthenticatedHandlerFunc with + authentication-checking code. + + Typical Authenticator usage is something like: + + authenticator := SomeAuthenticator(...) 
+ http.HandleFunc("/", authenticator(my_handler)) + + Authenticator wrapper checks the user authentication and calls the + wrapped function only after authentication has succeeded. Otherwise, + it returns a handler which initiates the authentication procedure. +*/ +type Authenticator func(AuthenticatedHandlerFunc) http.HandlerFunc + +// Info contains authentication information for the request. +type Info struct { + // Authenticated is set to true when request was authenticated + // successfully, i.e. username and password passed in request did + // pass the check. + Authenticated bool + + // Username contains a user name passed in the request when + // Authenticated is true. It's value is undefined if Authenticated + // is false. + Username string + + // ResponseHeaders contains extra headers that must be set by server + // when sending back HTTP response. + ResponseHeaders http.Header +} + +// UpdateHeaders updates headers with this Info's ResponseHeaders. It is +// safe to call this function on nil Info. +func (i *Info) UpdateHeaders(headers http.Header) { + if i == nil { + return + } + for k, values := range i.ResponseHeaders { + for _, v := range values { + headers.Add(k, v) + } + } +} + +type key int // used for context keys + +var infoKey key = 0 + +type AuthenticatorInterface interface { + // NewContext returns a new context carrying authentication + // information extracted from the request. + NewContext(ctx context.Context, r *http.Request) context.Context + + // Wrap returns an http.HandlerFunc which wraps + // AuthenticatedHandlerFunc with this authenticator's + // authentication checks. + Wrap(AuthenticatedHandlerFunc) http.HandlerFunc +} + +// FromContext returns authentication information from the context or +// nil if no such information present. +func FromContext(ctx context.Context) *Info { + info, ok := ctx.Value(infoKey).(*Info) + if !ok { + return nil + } + return info +} + +// AuthUsernameHeader is the header set by JustCheck functions. It +// contains an authenticated username (if authentication was +// successful). +const AuthUsernameHeader = "X-Authenticated-Username" + +func JustCheck(auth AuthenticatorInterface, wrapped http.HandlerFunc) http.HandlerFunc { + return auth.Wrap(func(w http.ResponseWriter, ar *AuthenticatedRequest) { + ar.Header.Set(AuthUsernameHeader, ar.Username) + wrapped(w, &ar.Request) + }) +} diff --git a/vendor/github.com/abbot/go-http-auth/basic.go b/vendor/github.com/abbot/go-http-auth/basic.go new file mode 100644 index 00000000..b03dd582 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/basic.go @@ -0,0 +1,163 @@ +package auth + +import ( + "bytes" + "crypto/sha1" + "crypto/subtle" + "encoding/base64" + "errors" + "net/http" + "strings" + + "golang.org/x/crypto/bcrypt" + "golang.org/x/net/context" +) + +type compareFunc func(hashedPassword, password []byte) error + +var ( + errMismatchedHashAndPassword = errors.New("mismatched hash and password") + + compareFuncs = []struct { + prefix string + compare compareFunc + }{ + {"", compareMD5HashAndPassword}, // default compareFunc + {"{SHA}", compareShaHashAndPassword}, + // Bcrypt is complicated. According to crypt(3) from + // crypt_blowfish version 1.3 (fetched from + // http://www.openwall.com/crypt/crypt_blowfish-1.3.tar.gz), there + // are three different has prefixes: "$2a$", used by versions up + // to 1.0.4, and "$2x$" and "$2y$", used in all later + // versions. 
"$2a$" has a known bug, "$2x$" was added as a + // migration path for systems with "$2a$" prefix and still has a + // bug, and only "$2y$" should be used by modern systems. The bug + // has something to do with handling of 8-bit characters. Since + // both "$2a$" and "$2x$" are deprecated, we are handling them the + // same way as "$2y$", which will yield correct results for 7-bit + // character passwords, but is wrong for 8-bit character + // passwords. You have to upgrade to "$2y$" if you want sant 8-bit + // character password support with bcrypt. To add to the mess, + // OpenBSD 5.5. introduced "$2b$" prefix, which behaves exactly + // like "$2y$" according to the same source. + {"$2a$", bcrypt.CompareHashAndPassword}, + {"$2b$", bcrypt.CompareHashAndPassword}, + {"$2x$", bcrypt.CompareHashAndPassword}, + {"$2y$", bcrypt.CompareHashAndPassword}, + } +) + +type BasicAuth struct { + Realm string + Secrets SecretProvider + // Headers used by authenticator. Set to ProxyHeaders to use with + // proxy server. When nil, NormalHeaders are used. + Headers *Headers +} + +// check that BasicAuth implements AuthenticatorInterface +var _ = (AuthenticatorInterface)((*BasicAuth)(nil)) + +/* + Checks the username/password combination from the request. Returns + either an empty string (authentication failed) or the name of the + authenticated user. + + Supports MD5 and SHA1 password entries +*/ +func (a *BasicAuth) CheckAuth(r *http.Request) string { + s := strings.SplitN(r.Header.Get(a.Headers.V().Authorization), " ", 2) + if len(s) != 2 || s[0] != "Basic" { + return "" + } + + b, err := base64.StdEncoding.DecodeString(s[1]) + if err != nil { + return "" + } + pair := strings.SplitN(string(b), ":", 2) + if len(pair) != 2 { + return "" + } + user, password := pair[0], pair[1] + secret := a.Secrets(user, a.Realm) + if secret == "" { + return "" + } + compare := compareFuncs[0].compare + for _, cmp := range compareFuncs[1:] { + if strings.HasPrefix(secret, cmp.prefix) { + compare = cmp.compare + break + } + } + if compare([]byte(secret), []byte(password)) != nil { + return "" + } + return pair[0] +} + +func compareShaHashAndPassword(hashedPassword, password []byte) error { + d := sha1.New() + d.Write(password) + if subtle.ConstantTimeCompare(hashedPassword[5:], []byte(base64.StdEncoding.EncodeToString(d.Sum(nil)))) != 1 { + return errMismatchedHashAndPassword + } + return nil +} + +func compareMD5HashAndPassword(hashedPassword, password []byte) error { + parts := bytes.SplitN(hashedPassword, []byte("$"), 4) + if len(parts) != 4 { + return errMismatchedHashAndPassword + } + magic := []byte("$" + string(parts[1]) + "$") + salt := parts[2] + if subtle.ConstantTimeCompare(hashedPassword, MD5Crypt(password, salt, magic)) != 1 { + return errMismatchedHashAndPassword + } + return nil +} + +/* + http.Handler for BasicAuth which initiates the authentication process + (or requires reauthentication). +*/ +func (a *BasicAuth) RequireAuth(w http.ResponseWriter, r *http.Request) { + w.Header().Set(contentType, a.Headers.V().UnauthContentType) + w.Header().Set(a.Headers.V().Authenticate, `Basic realm="`+a.Realm+`"`) + w.WriteHeader(a.Headers.V().UnauthCode) + w.Write([]byte(a.Headers.V().UnauthResponse)) +} + +/* + BasicAuthenticator returns a function, which wraps an + AuthenticatedHandlerFunc converting it to http.HandlerFunc. 
This + wrapper function checks the authentication and either sends back + required authentication headers, or calls the wrapped function with + authenticated username in the AuthenticatedRequest. +*/ +func (a *BasicAuth) Wrap(wrapped AuthenticatedHandlerFunc) http.HandlerFunc { + return func(w http.ResponseWriter, r *http.Request) { + if username := a.CheckAuth(r); username == "" { + a.RequireAuth(w, r) + } else { + ar := &AuthenticatedRequest{Request: *r, Username: username} + wrapped(w, ar) + } + } +} + +// NewContext returns a context carrying authentication information for the request. +func (a *BasicAuth) NewContext(ctx context.Context, r *http.Request) context.Context { + info := &Info{Username: a.CheckAuth(r), ResponseHeaders: make(http.Header)} + info.Authenticated = (info.Username != "") + if !info.Authenticated { + info.ResponseHeaders.Set(a.Headers.V().Authenticate, `Basic realm="`+a.Realm+`"`) + } + return context.WithValue(ctx, infoKey, info) +} + +func NewBasicAuthenticator(realm string, secrets SecretProvider) *BasicAuth { + return &BasicAuth{Realm: realm, Secrets: secrets} +} diff --git a/vendor/github.com/abbot/go-http-auth/digest.go b/vendor/github.com/abbot/go-http-auth/digest.go new file mode 100644 index 00000000..21b09334 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/digest.go @@ -0,0 +1,274 @@ +package auth + +import ( + "crypto/subtle" + "fmt" + "net/http" + "net/url" + "sort" + "strconv" + "strings" + "sync" + "time" + + "golang.org/x/net/context" +) + +type digest_client struct { + nc uint64 + last_seen int64 +} + +type DigestAuth struct { + Realm string + Opaque string + Secrets SecretProvider + PlainTextSecrets bool + IgnoreNonceCount bool + // Headers used by authenticator. Set to ProxyHeaders to use with + // proxy server. When nil, NormalHeaders are used. + Headers *Headers + + /* + Approximate size of Client's Cache. When actual number of + tracked client nonces exceeds + ClientCacheSize+ClientCacheTolerance, ClientCacheTolerance*2 + older entries are purged. + */ + ClientCacheSize int + ClientCacheTolerance int + + clients map[string]*digest_client + mutex sync.Mutex +} + +// check that DigestAuth implements AuthenticatorInterface +var _ = (AuthenticatorInterface)((*DigestAuth)(nil)) + +type digest_cache_entry struct { + nonce string + last_seen int64 +} + +type digest_cache []digest_cache_entry + +func (c digest_cache) Less(i, j int) bool { + return c[i].last_seen < c[j].last_seen +} + +func (c digest_cache) Len() int { + return len(c) +} + +func (c digest_cache) Swap(i, j int) { + c[i], c[j] = c[j], c[i] +} + +/* + Remove count oldest entries from DigestAuth.clients +*/ +func (a *DigestAuth) Purge(count int) { + entries := make([]digest_cache_entry, 0, len(a.clients)) + for nonce, client := range a.clients { + entries = append(entries, digest_cache_entry{nonce, client.last_seen}) + } + cache := digest_cache(entries) + sort.Sort(cache) + for _, client := range cache[:count] { + delete(a.clients, client.nonce) + } +} + +/* + http.Handler for DigestAuth which initiates the authentication process + (or requires reauthentication). 
+*/ +func (a *DigestAuth) RequireAuth(w http.ResponseWriter, r *http.Request) { + if len(a.clients) > a.ClientCacheSize+a.ClientCacheTolerance { + a.Purge(a.ClientCacheTolerance * 2) + } + nonce := RandomKey() + a.clients[nonce] = &digest_client{nc: 0, last_seen: time.Now().UnixNano()} + w.Header().Set(contentType, a.Headers.V().UnauthContentType) + w.Header().Set(a.Headers.V().Authenticate, + fmt.Sprintf(`Digest realm="%s", nonce="%s", opaque="%s", algorithm="MD5", qop="auth"`, + a.Realm, nonce, a.Opaque)) + w.WriteHeader(a.Headers.V().UnauthCode) + w.Write([]byte(a.Headers.V().UnauthResponse)) +} + +/* + Parse Authorization header from the http.Request. Returns a map of + auth parameters or nil if the header is not a valid parsable Digest + auth header. +*/ +func DigestAuthParams(authorization string) map[string]string { + s := strings.SplitN(authorization, " ", 2) + if len(s) != 2 || s[0] != "Digest" { + return nil + } + + return ParsePairs(s[1]) +} + +/* + Check if request contains valid authentication data. Returns a pair + of username, authinfo where username is the name of the authenticated + user or an empty string and authinfo is the contents for the optional + Authentication-Info response header. +*/ +func (da *DigestAuth) CheckAuth(r *http.Request) (username string, authinfo *string) { + da.mutex.Lock() + defer da.mutex.Unlock() + username = "" + authinfo = nil + auth := DigestAuthParams(r.Header.Get(da.Headers.V().Authorization)) + if auth == nil { + return "", nil + } + // RFC2617 Section 3.2.1 specifies that unset value of algorithm in + // WWW-Authenticate Response header should be treated as + // "MD5". According to section 3.2.2 the "algorithm" value in + // subsequent Request Authorization header must be set to whatever + // was supplied in the WWW-Authenticate Response header. This + // implementation always returns an algorithm in WWW-Authenticate + // header, however there seems to be broken clients in the wild + // which do not set the algorithm. Assume the unset algorithm in + // Authorization header to be equal to MD5. + if _, ok := auth["algorithm"]; !ok { + auth["algorithm"] = "MD5" + } + if da.Opaque != auth["opaque"] || auth["algorithm"] != "MD5" || auth["qop"] != "auth" { + return "", nil + } + + // Check if the requested URI matches auth header + if r.RequestURI != auth["uri"] { + // We allow auth["uri"] to be a full path prefix of request-uri + // for some reason lost in history, which is probably wrong, but + // used to be like that for quite some time + // (https://tools.ietf.org/html/rfc2617#section-3.2.2 explicitly + // says that auth["uri"] is the request-uri). + // + // TODO: make an option to allow only strict checking. + switch u, err := url.Parse(auth["uri"]); { + case err != nil: + return "", nil + case r.URL == nil: + return "", nil + case len(u.Path) > len(r.URL.Path): + return "", nil + case !strings.HasPrefix(r.URL.Path, u.Path): + return "", nil + } + } + + HA1 := da.Secrets(auth["username"], da.Realm) + if da.PlainTextSecrets { + HA1 = H(auth["username"] + ":" + da.Realm + ":" + HA1) + } + HA2 := H(r.Method + ":" + auth["uri"]) + KD := H(strings.Join([]string{HA1, auth["nonce"], auth["nc"], auth["cnonce"], auth["qop"], HA2}, ":")) + + if subtle.ConstantTimeCompare([]byte(KD), []byte(auth["response"])) != 1 { + return "", nil + } + + // At this point crypto checks are completed and validated. + // Now check if the session is valid. 
+ + nc, err := strconv.ParseUint(auth["nc"], 16, 64) + if err != nil { + return "", nil + } + + if client, ok := da.clients[auth["nonce"]]; !ok { + return "", nil + } else { + if client.nc != 0 && client.nc >= nc && !da.IgnoreNonceCount { + return "", nil + } + client.nc = nc + client.last_seen = time.Now().UnixNano() + } + + resp_HA2 := H(":" + auth["uri"]) + rspauth := H(strings.Join([]string{HA1, auth["nonce"], auth["nc"], auth["cnonce"], auth["qop"], resp_HA2}, ":")) + + info := fmt.Sprintf(`qop="auth", rspauth="%s", cnonce="%s", nc="%s"`, rspauth, auth["cnonce"], auth["nc"]) + return auth["username"], &info +} + +/* + Default values for ClientCacheSize and ClientCacheTolerance for DigestAuth +*/ +const DefaultClientCacheSize = 1000 +const DefaultClientCacheTolerance = 100 + +/* + Wrap returns an Authenticator which uses HTTP Digest + authentication. Arguments: + + realm: The authentication realm. + + secrets: SecretProvider which must return HA1 digests for the same + realm as above. +*/ +func (a *DigestAuth) Wrap(wrapped AuthenticatedHandlerFunc) http.HandlerFunc { + return func(w http.ResponseWriter, r *http.Request) { + if username, authinfo := a.CheckAuth(r); username == "" { + a.RequireAuth(w, r) + } else { + ar := &AuthenticatedRequest{Request: *r, Username: username} + if authinfo != nil { + w.Header().Set(a.Headers.V().AuthInfo, *authinfo) + } + wrapped(w, ar) + } + } +} + +/* + JustCheck returns function which converts an http.HandlerFunc into a + http.HandlerFunc which requires authentication. Username is passed as + an extra X-Authenticated-Username header. +*/ +func (a *DigestAuth) JustCheck(wrapped http.HandlerFunc) http.HandlerFunc { + return a.Wrap(func(w http.ResponseWriter, ar *AuthenticatedRequest) { + ar.Header.Set(AuthUsernameHeader, ar.Username) + wrapped(w, &ar.Request) + }) +} + +// NewContext returns a context carrying authentication information for the request. 
+func (a *DigestAuth) NewContext(ctx context.Context, r *http.Request) context.Context { + username, authinfo := a.CheckAuth(r) + info := &Info{Username: username, ResponseHeaders: make(http.Header)} + if username != "" { + info.Authenticated = true + info.ResponseHeaders.Set(a.Headers.V().AuthInfo, *authinfo) + } else { + // return back digest WWW-Authenticate header + if len(a.clients) > a.ClientCacheSize+a.ClientCacheTolerance { + a.Purge(a.ClientCacheTolerance * 2) + } + nonce := RandomKey() + a.clients[nonce] = &digest_client{nc: 0, last_seen: time.Now().UnixNano()} + info.ResponseHeaders.Set(a.Headers.V().Authenticate, + fmt.Sprintf(`Digest realm="%s", nonce="%s", opaque="%s", algorithm="MD5", qop="auth"`, + a.Realm, nonce, a.Opaque)) + } + return context.WithValue(ctx, infoKey, info) +} + +func NewDigestAuthenticator(realm string, secrets SecretProvider) *DigestAuth { + da := &DigestAuth{ + Opaque: RandomKey(), + Realm: realm, + Secrets: secrets, + PlainTextSecrets: false, + ClientCacheSize: DefaultClientCacheSize, + ClientCacheTolerance: DefaultClientCacheTolerance, + clients: map[string]*digest_client{}} + return da +} diff --git a/vendor/github.com/abbot/go-http-auth/md5crypt.go b/vendor/github.com/abbot/go-http-auth/md5crypt.go new file mode 100644 index 00000000..a7a031c4 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/md5crypt.go @@ -0,0 +1,92 @@ +package auth + +import "crypto/md5" +import "strings" + +const itoa64 = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" + +var md5_crypt_swaps = [16]int{12, 6, 0, 13, 7, 1, 14, 8, 2, 15, 9, 3, 5, 10, 4, 11} + +type MD5Entry struct { + Magic, Salt, Hash []byte +} + +func NewMD5Entry(e string) *MD5Entry { + parts := strings.SplitN(e, "$", 4) + if len(parts) != 4 { + return nil + } + return &MD5Entry{ + Magic: []byte("$" + parts[1] + "$"), + Salt: []byte(parts[2]), + Hash: []byte(parts[3]), + } +} + +/* + MD5 password crypt implementation +*/ +func MD5Crypt(password, salt, magic []byte) []byte { + d := md5.New() + + d.Write(password) + d.Write(magic) + d.Write(salt) + + d2 := md5.New() + d2.Write(password) + d2.Write(salt) + d2.Write(password) + + for i, mixin := 0, d2.Sum(nil); i < len(password); i++ { + d.Write([]byte{mixin[i%16]}) + } + + for i := len(password); i != 0; i >>= 1 { + if i&1 == 0 { + d.Write([]byte{password[0]}) + } else { + d.Write([]byte{0}) + } + } + + final := d.Sum(nil) + + for i := 0; i < 1000; i++ { + d2 := md5.New() + if i&1 == 0 { + d2.Write(final) + } else { + d2.Write(password) + } + + if i%3 != 0 { + d2.Write(salt) + } + + if i%7 != 0 { + d2.Write(password) + } + + if i&1 == 0 { + d2.Write(password) + } else { + d2.Write(final) + } + final = d2.Sum(nil) + } + + result := make([]byte, 0, 22) + v := uint(0) + bits := uint(0) + for _, i := range md5_crypt_swaps { + v |= (uint(final[i]) << bits) + for bits = bits + 8; bits > 6; bits -= 6 { + result = append(result, itoa64[v&0x3f]) + v >>= 6 + } + } + result = append(result, itoa64[v&0x3f]) + + return append(append(append(magic, salt...), '$'), result...) 
+} diff --git a/vendor/github.com/abbot/go-http-auth/misc.go b/vendor/github.com/abbot/go-http-auth/misc.go new file mode 100644 index 00000000..4536ce67 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/misc.go @@ -0,0 +1,141 @@ +package auth + +import ( + "bytes" + "crypto/md5" + "crypto/rand" + "encoding/base64" + "fmt" + "net/http" + "strings" +) + +// RandomKey returns a random 16-byte base64 alphabet string +func RandomKey() string { + k := make([]byte, 12) + for bytes := 0; bytes < len(k); { + n, err := rand.Read(k[bytes:]) + if err != nil { + panic("rand.Read() failed") + } + bytes += n + } + return base64.StdEncoding.EncodeToString(k) +} + +// H function for MD5 algorithm (returns a lower-case hex MD5 digest) +func H(data string) string { + digest := md5.New() + digest.Write([]byte(data)) + return fmt.Sprintf("%x", digest.Sum(nil)) +} + +// ParseList parses a comma-separated list of values as described by +// RFC 2068 and returns list elements. +// +// Lifted from https://code.google.com/p/gorilla/source/browse/http/parser/parser.go +// which was ported from urllib2.parse_http_list, from the Python +// standard library. +func ParseList(value string) []string { + var list []string + var escape, quote bool + b := new(bytes.Buffer) + for _, r := range value { + switch { + case escape: + b.WriteRune(r) + escape = false + case quote: + if r == '\\' { + escape = true + } else { + if r == '"' { + quote = false + } + b.WriteRune(r) + } + case r == ',': + list = append(list, strings.TrimSpace(b.String())) + b.Reset() + case r == '"': + quote = true + b.WriteRune(r) + default: + b.WriteRune(r) + } + } + // Append last part. + if s := b.String(); s != "" { + list = append(list, strings.TrimSpace(s)) + } + return list +} + +// ParsePairs extracts key/value pairs from a comma-separated list of +// values as described by RFC 2068 and returns a map[key]value. The +// resulting values are unquoted. If a list element doesn't contain a +// "=", the key is the element itself and the value is an empty +// string. +// +// Lifted from https://code.google.com/p/gorilla/source/browse/http/parser/parser.go +func ParsePairs(value string) map[string]string { + m := make(map[string]string) + for _, pair := range ParseList(strings.TrimSpace(value)) { + if i := strings.Index(pair, "="); i < 0 { + m[pair] = "" + } else { + v := pair[i+1:] + if v[0] == '"' && v[len(v)-1] == '"' { + // Unquote it. + v = v[1 : len(v)-1] + } + m[pair[:i]] = v + } + } + return m +} + +// Headers contains header and error codes used by authenticator. +type Headers struct { + Authenticate string // WWW-Authenticate + Authorization string // Authorization + AuthInfo string // Authentication-Info + UnauthCode int // 401 + UnauthContentType string // text/plain + UnauthResponse string // Unauthorized. +} + +// V returns NormalHeaders when h is nil, or h otherwise. Allows to +// use uninitialized *Headers values in structs. +func (h *Headers) V() *Headers { + if h == nil { + return NormalHeaders + } + return h +} + +var ( + // NormalHeaders are the regular Headers used by an HTTP Server for + // request authentication. + NormalHeaders = &Headers{ + Authenticate: "WWW-Authenticate", + Authorization: "Authorization", + AuthInfo: "Authentication-Info", + UnauthCode: http.StatusUnauthorized, + UnauthContentType: "text/plain", + UnauthResponse: fmt.Sprintf("%d %s\n", http.StatusUnauthorized, http.StatusText(http.StatusUnauthorized)), + } + + // ProxyHeaders are Headers used by an HTTP Proxy server for proxy + // access authentication. 
+ ProxyHeaders = &Headers{ + Authenticate: "Proxy-Authenticate", + Authorization: "Proxy-Authorization", + AuthInfo: "Proxy-Authentication-Info", + UnauthCode: http.StatusProxyAuthRequired, + UnauthContentType: "text/plain", + UnauthResponse: fmt.Sprintf("%d %s\n", http.StatusProxyAuthRequired, http.StatusText(http.StatusProxyAuthRequired)), + } +) + +const contentType = "Content-Type" diff --git a/vendor/github.com/abbot/go-http-auth/users.go b/vendor/github.com/abbot/go-http-auth/users.go new file mode 100644 index 00000000..37718124 --- /dev/null +++ b/vendor/github.com/abbot/go-http-auth/users.go @@ -0,0 +1,154 @@ +package auth + +import ( + "encoding/csv" + "os" + "sync" +) + +/* + SecretProvider is used by authenticators. Takes user name and realm + as an argument, returns secret required for authentication (HA1 for + digest authentication, properly encrypted password for basic). + + Returning an empty string means failing the authentication. +*/ +type SecretProvider func(user, realm string) string + +/* + Common functions for file auto-reloading +*/ +type File struct { + Path string + Info os.FileInfo + /* must be set in inherited types during initialization */ + Reload func() + mu sync.Mutex +} + +func (f *File) ReloadIfNeeded() { + info, err := os.Stat(f.Path) + if err != nil { + panic(err) + } + f.mu.Lock() + defer f.mu.Unlock() + if f.Info == nil || f.Info.ModTime() != info.ModTime() { + f.Info = info + f.Reload() + } +} + +/* + Structure used for htdigest file authentication. Users map realms to + maps of users to their HA1 digests. +*/ +type HtdigestFile struct { + File + Users map[string]map[string]string + mu sync.RWMutex +} + +func reload_htdigest(hf *HtdigestFile) { + r, err := os.Open(hf.Path) + if err != nil { + panic(err) + } + csv_reader := csv.NewReader(r) + csv_reader.Comma = ':' + csv_reader.Comment = '#' + csv_reader.TrimLeadingSpace = true + + records, err := csv_reader.ReadAll() + if err != nil { + panic(err) + } + + hf.mu.Lock() + defer hf.mu.Unlock() + hf.Users = make(map[string]map[string]string) + for _, record := range records { + _, exists := hf.Users[record[1]] + if !exists { + hf.Users[record[1]] = make(map[string]string) + } + hf.Users[record[1]][record[0]] = record[2] + } +} + +/* + SecretProvider implementation based on htdigest-formated files. Will + reload htdigest file on changes. Will panic on syntax errors in + htdigest files. +*/ +func HtdigestFileProvider(filename string) SecretProvider { + hf := &HtdigestFile{File: File{Path: filename}} + hf.Reload = func() { reload_htdigest(hf) } + return func(user, realm string) string { + hf.ReloadIfNeeded() + hf.mu.RLock() + defer hf.mu.RUnlock() + _, exists := hf.Users[realm] + if !exists { + return "" + } + digest, exists := hf.Users[realm][user] + if !exists { + return "" + } + return digest + } +} + +/* + Structure used for htdigest file authentication. 
Users map users to + their salted encrypted password +*/ +type HtpasswdFile struct { + File + Users map[string]string + mu sync.RWMutex +} + +func reload_htpasswd(h *HtpasswdFile) { + r, err := os.Open(h.Path) + if err != nil { + panic(err) + } + csv_reader := csv.NewReader(r) + csv_reader.Comma = ':' + csv_reader.Comment = '#' + csv_reader.TrimLeadingSpace = true + + records, err := csv_reader.ReadAll() + if err != nil { + panic(err) + } + + h.mu.Lock() + defer h.mu.Unlock() + h.Users = make(map[string]string) + for _, record := range records { + h.Users[record[0]] = record[1] + } +} + +/* + SecretProvider implementation based on htpasswd-formated files. Will + reload htpasswd file on changes. Will panic on syntax errors in + htpasswd files. Realm argument of the SecretProvider is ignored. +*/ +func HtpasswdFileProvider(filename string) SecretProvider { + h := &HtpasswdFile{File: File{Path: filename}} + h.Reload = func() { reload_htpasswd(h) } + return func(user, realm string) string { + h.ReloadIfNeeded() + h.mu.RLock() + password, exists := h.Users[user] + h.mu.RUnlock() + if !exists { + return "" + } + return password + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/LICENSE.txt b/vendor/github.com/aws/aws-sdk-go/LICENSE.txt new file mode 100644 index 00000000..d6456956 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/LICENSE.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/aws/aws-sdk-go/NOTICE.txt b/vendor/github.com/aws/aws-sdk-go/NOTICE.txt new file mode 100644 index 00000000..5f14d116 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/NOTICE.txt @@ -0,0 +1,3 @@ +AWS SDK for Go +Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright 2014-2015 Stripe, Inc. diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awserr/error.go b/vendor/github.com/aws/aws-sdk-go/aws/awserr/error.go new file mode 100644 index 00000000..56fdfc2b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awserr/error.go @@ -0,0 +1,145 @@ +// Package awserr represents API error interface accessors for the SDK. +package awserr + +// An Error wraps lower level errors with code, message and an original error. +// The underlying concrete error type may also satisfy other interfaces which +// can be to used to obtain more specific information about the error. 
+// +// Calling Error() or String() will always include the full information about +// an error based on its underlying type. +// +// Example: +// +// output, err := s3manage.Upload(svc, input, opts) +// if err != nil { +// if awsErr, ok := err.(awserr.Error); ok { +// // Get error details +// log.Println("Error:", awsErr.Code(), awsErr.Message()) +// +// // Prints out full error message, including original error if there was one. +// log.Println("Error:", awsErr.Error()) +// +// // Get original error +// if origErr := awsErr.OrigErr(); origErr != nil { +// // operate on original error. +// } +// } else { +// fmt.Println(err.Error()) +// } +// } +// +type Error interface { + // Satisfy the generic error interface. + error + + // Returns the short phrase depicting the classification of the error. + Code() string + + // Returns the error details message. + Message() string + + // Returns the original error if one was set. Nil is returned if not set. + OrigErr() error +} + +// BatchError is a batch of errors which also wraps lower level errors with +// code, message, and original errors. Calling Error() will include all errors +// that occurred in the batch. +// +// Deprecated: Replaced with BatchedErrors. Only defined for backwards +// compatibility. +type BatchError interface { + // Satisfy the generic error interface. + error + + // Returns the short phrase depicting the classification of the error. + Code() string + + // Returns the error details message. + Message() string + + // Returns the original error if one was set. Nil is returned if not set. + OrigErrs() []error +} + +// BatchedErrors is a batch of errors which also wraps lower level errors with +// code, message, and original errors. Calling Error() will include all errors +// that occurred in the batch. +// +// Replaces BatchError +type BatchedErrors interface { + // Satisfy the base Error interface. + Error + + // Returns the original error if one was set. Nil is returned if not set. + OrigErrs() []error +} + +// New returns an Error object described by the code, message, and origErr. +// +// If origErr satisfies the Error interface it will not be wrapped within a new +// Error object and will instead be returned. +func New(code, message string, origErr error) Error { + var errs []error + if origErr != nil { + errs = append(errs, origErr) + } + return newBaseError(code, message, errs) +} + +// NewBatchError returns an BatchedErrors with a collection of errors as an +// array of errors. +func NewBatchError(code, message string, errs []error) BatchedErrors { + return newBaseError(code, message, errs) +} + +// A RequestFailure is an interface to extract request failure information from +// an Error such as the request ID of the failed request returned by a service. +// RequestFailures may not always have a requestID value if the request failed +// prior to reaching the service such as a connection error. 
+// +// Example: +// +// output, err := s3manage.Upload(svc, input, opts) +// if err != nil { +// if reqerr, ok := err.(RequestFailure); ok { +// log.Println("Request failed", reqerr.Code(), reqerr.Message(), reqerr.RequestID()) +// } else { +// log.Println("Error:", err.Error()) +// } +// } +// +// Combined with awserr.Error: +// +// output, err := s3manage.Upload(svc, input, opts) +// if err != nil { +// if awsErr, ok := err.(awserr.Error); ok { +// // Generic AWS Error with Code, Message, and original error (if any) +// fmt.Println(awsErr.Code(), awsErr.Message(), awsErr.OrigErr()) +// +// if reqErr, ok := err.(awserr.RequestFailure); ok { +// // A service error occurred +// fmt.Println(reqErr.StatusCode(), reqErr.RequestID()) +// } +// } else { +// fmt.Println(err.Error()) +// } +// } +// +type RequestFailure interface { + Error + + // The status code of the HTTP response. + StatusCode() int + + // The request ID returned by the service for a request failure. This will + // be empty if no request ID is available such as the request failed due + // to a connection error. + RequestID() string +} + +// NewRequestFailure returns a new request error wrapper for the given Error +// provided. +func NewRequestFailure(err Error, statusCode int, reqID string) RequestFailure { + return newRequestError(err, statusCode, reqID) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awserr/types.go b/vendor/github.com/aws/aws-sdk-go/aws/awserr/types.go new file mode 100644 index 00000000..0202a008 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awserr/types.go @@ -0,0 +1,194 @@ +package awserr + +import "fmt" + +// SprintError returns a string of the formatted error code. +// +// Both extra and origErr are optional. If they are included their lines +// will be added, but if they are not included their lines will be ignored. +func SprintError(code, message, extra string, origErr error) string { + msg := fmt.Sprintf("%s: %s", code, message) + if extra != "" { + msg = fmt.Sprintf("%s\n\t%s", msg, extra) + } + if origErr != nil { + msg = fmt.Sprintf("%s\ncaused by: %s", msg, origErr.Error()) + } + return msg +} + +// A baseError wraps the code and message which defines an error. It also +// can be used to wrap an original error object. +// +// Should be used as the root for errors satisfying the awserr.Error. Also +// for any error which does not fit into a specific error wrapper type. +type baseError struct { + // Classification of error + code string + + // Detailed information about error + message string + + // Optional original error this error is based off of. Allows building + // chained errors. + errs []error +} + +// newBaseError returns an error object for the code, message, and errors. +// +// code is a short no whitespace phrase depicting the classification of +// the error that is being created. +// +// message is the free flow string containing detailed information about the +// error. +// +// origErrs is the error objects which will be nested under the new errors to +// be returned. +func newBaseError(code, message string, origErrs []error) *baseError { + b := &baseError{ + code: code, + message: message, + errs: origErrs, + } + + return b +} + +// Error returns the string representation of the error. +// +// See ErrorWithExtra for formatting. +// +// Satisfies the error interface. 
+func (b baseError) Error() string { + size := len(b.errs) + if size > 0 { + return SprintError(b.code, b.message, "", errorList(b.errs)) + } + + return SprintError(b.code, b.message, "", nil) +} + +// String returns the string representation of the error. +// Alias for Error to satisfy the stringer interface. +func (b baseError) String() string { + return b.Error() +} + +// Code returns the short phrase depicting the classification of the error. +func (b baseError) Code() string { + return b.code +} + +// Message returns the error details message. +func (b baseError) Message() string { + return b.message +} + +// OrigErr returns the original error if one was set. Nil is returned if no +// error was set. This only returns the first element in the list. If the full +// list is needed, use BatchedErrors. +func (b baseError) OrigErr() error { + switch len(b.errs) { + case 0: + return nil + case 1: + return b.errs[0] + default: + if err, ok := b.errs[0].(Error); ok { + return NewBatchError(err.Code(), err.Message(), b.errs[1:]) + } + return NewBatchError("BatchedErrors", + "multiple errors occurred", b.errs) + } +} + +// OrigErrs returns the original errors if one was set. An empty slice is +// returned if no error was set. +func (b baseError) OrigErrs() []error { + return b.errs +} + +// So that the Error interface type can be included as an anonymous field +// in the requestError struct and not conflict with the error.Error() method. +type awsError Error + +// A requestError wraps a request or service error. +// +// Composed of baseError for code, message, and original error. +type requestError struct { + awsError + statusCode int + requestID string +} + +// newRequestError returns a wrapped error with additional information for +// request status code, and service requestID. +// +// Should be used to wrap all request which involve service requests. Even if +// the request failed without a service response, but had an HTTP status code +// that may be meaningful. +// +// Also wraps original errors via the baseError. +func newRequestError(err Error, statusCode int, requestID string) *requestError { + return &requestError{ + awsError: err, + statusCode: statusCode, + requestID: requestID, + } +} + +// Error returns the string representation of the error. +// Satisfies the error interface. +func (r requestError) Error() string { + extra := fmt.Sprintf("status code: %d, request id: %s", + r.statusCode, r.requestID) + return SprintError(r.Code(), r.Message(), extra, r.OrigErr()) +} + +// String returns the string representation of the error. +// Alias for Error to satisfy the stringer interface. +func (r requestError) String() string { + return r.Error() +} + +// StatusCode returns the wrapped status code for the error +func (r requestError) StatusCode() int { + return r.statusCode +} + +// RequestID returns the wrapped requestID +func (r requestError) RequestID() string { + return r.requestID +} + +// OrigErrs returns the original errors if one was set. An empty slice is +// returned if no error was set. +func (r requestError) OrigErrs() []error { + if b, ok := r.awsError.(BatchedErrors); ok { + return b.OrigErrs() + } + return []error{r.OrigErr()} +} + +// An error list that satisfies the golang interface +type errorList []error + +// Error returns the string representation of the error. +// +// Satisfies the error interface. 
+func (e errorList) Error() string { + msg := "" + // How do we want to handle the array size being zero + if size := len(e); size > 0 { + for i := 0; i < size; i++ { + msg += fmt.Sprintf("%s", e[i].Error()) + // We check the next index to see if it is within the slice. + // If it is, then we append a newline. We do this, because unit tests + // could be broken with the additional '\n' + if i+1 < size { + msg += "\n" + } + } + } + return msg +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/copy.go b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/copy.go new file mode 100644 index 00000000..1a3d106d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/copy.go @@ -0,0 +1,108 @@ +package awsutil + +import ( + "io" + "reflect" + "time" +) + +// Copy deeply copies a src structure to dst. Useful for copying request and +// response structures. +// +// Can copy between structs of different type, but will only copy fields which +// are assignable, and exist in both structs. Fields which are not assignable, +// or do not exist in both structs are ignored. +func Copy(dst, src interface{}) { + dstval := reflect.ValueOf(dst) + if !dstval.IsValid() { + panic("Copy dst cannot be nil") + } + + rcopy(dstval, reflect.ValueOf(src), true) +} + +// CopyOf returns a copy of src while also allocating the memory for dst. +// src must be a pointer type or this operation will fail. +func CopyOf(src interface{}) (dst interface{}) { + dsti := reflect.New(reflect.TypeOf(src).Elem()) + dst = dsti.Interface() + rcopy(dsti, reflect.ValueOf(src), true) + return +} + +// rcopy performs a recursive copy of values from the source to destination. +// +// root is used to skip certain aspects of the copy which are not valid +// for the root node of a object. +func rcopy(dst, src reflect.Value, root bool) { + if !src.IsValid() { + return + } + + switch src.Kind() { + case reflect.Ptr: + if _, ok := src.Interface().(io.Reader); ok { + if dst.Kind() == reflect.Ptr && dst.Elem().CanSet() { + dst.Elem().Set(src) + } else if dst.CanSet() { + dst.Set(src) + } + } else { + e := src.Type().Elem() + if dst.CanSet() && !src.IsNil() { + if _, ok := src.Interface().(*time.Time); !ok { + dst.Set(reflect.New(e)) + } else { + tempValue := reflect.New(e) + tempValue.Elem().Set(src.Elem()) + // Sets time.Time's unexported values + dst.Set(tempValue) + } + } + if src.Elem().IsValid() { + // Keep the current root state since the depth hasn't changed + rcopy(dst.Elem(), src.Elem(), root) + } + } + case reflect.Struct: + t := dst.Type() + for i := 0; i < t.NumField(); i++ { + name := t.Field(i).Name + srcVal := src.FieldByName(name) + dstVal := dst.FieldByName(name) + if srcVal.IsValid() && dstVal.CanSet() { + rcopy(dstVal, srcVal, false) + } + } + case reflect.Slice: + if src.IsNil() { + break + } + + s := reflect.MakeSlice(src.Type(), src.Len(), src.Cap()) + dst.Set(s) + for i := 0; i < src.Len(); i++ { + rcopy(dst.Index(i), src.Index(i), false) + } + case reflect.Map: + if src.IsNil() { + break + } + + s := reflect.MakeMap(src.Type()) + dst.Set(s) + for _, k := range src.MapKeys() { + v := src.MapIndex(k) + v2 := reflect.New(v.Type()).Elem() + rcopy(v2, v, false) + dst.SetMapIndex(k, v2) + } + default: + // Assign the value if possible. If its not assignable, the value would + // need to be converted and the impact of that may be unexpected, or is + // not compatible with the dst type. 
+ if src.Type().AssignableTo(dst.Type()) { + dst.Set(src) + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/equal.go b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/equal.go new file mode 100644 index 00000000..59fa4a55 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/equal.go @@ -0,0 +1,27 @@ +package awsutil + +import ( + "reflect" +) + +// DeepEqual returns if the two values are deeply equal like reflect.DeepEqual. +// In addition to this, this method will also dereference the input values if +// possible so the DeepEqual performed will not fail if one parameter is a +// pointer and the other is not. +// +// DeepEqual will not perform indirection of nested values of the input parameters. +func DeepEqual(a, b interface{}) bool { + ra := reflect.Indirect(reflect.ValueOf(a)) + rb := reflect.Indirect(reflect.ValueOf(b)) + + if raValid, rbValid := ra.IsValid(), rb.IsValid(); !raValid && !rbValid { + // If the elements are both nil, and of the same type the are equal + // If they are of different types they are not equal + return reflect.TypeOf(a) == reflect.TypeOf(b) + } else if raValid != rbValid { + // Both values must be valid to be equal + return false + } + + return reflect.DeepEqual(ra.Interface(), rb.Interface()) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go new file mode 100644 index 00000000..11c52c38 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go @@ -0,0 +1,222 @@ +package awsutil + +import ( + "reflect" + "regexp" + "strconv" + "strings" + + "github.com/jmespath/go-jmespath" +) + +var indexRe = regexp.MustCompile(`(.+)\[(-?\d+)?\]$`) + +// rValuesAtPath returns a slice of values found in value v. The values +// in v are explored recursively so all nested values are collected. 
+func rValuesAtPath(v interface{}, path string, createPath, caseSensitive, nilTerm bool) []reflect.Value { + pathparts := strings.Split(path, "||") + if len(pathparts) > 1 { + for _, pathpart := range pathparts { + vals := rValuesAtPath(v, pathpart, createPath, caseSensitive, nilTerm) + if len(vals) > 0 { + return vals + } + } + return nil + } + + values := []reflect.Value{reflect.Indirect(reflect.ValueOf(v))} + components := strings.Split(path, ".") + for len(values) > 0 && len(components) > 0 { + var index *int64 + var indexStar bool + c := strings.TrimSpace(components[0]) + if c == "" { // no actual component, illegal syntax + return nil + } else if caseSensitive && c != "*" && strings.ToLower(c[0:1]) == c[0:1] { + // TODO normalize case for user + return nil // don't support unexported fields + } + + // parse this component + if m := indexRe.FindStringSubmatch(c); m != nil { + c = m[1] + if m[2] == "" { + index = nil + indexStar = true + } else { + i, _ := strconv.ParseInt(m[2], 10, 32) + index = &i + indexStar = false + } + } + + nextvals := []reflect.Value{} + for _, value := range values { + // pull component name out of struct member + if value.Kind() != reflect.Struct { + continue + } + + if c == "*" { // pull all members + for i := 0; i < value.NumField(); i++ { + if f := reflect.Indirect(value.Field(i)); f.IsValid() { + nextvals = append(nextvals, f) + } + } + continue + } + + value = value.FieldByNameFunc(func(name string) bool { + if c == name { + return true + } else if !caseSensitive && strings.ToLower(name) == strings.ToLower(c) { + return true + } + return false + }) + + if nilTerm && value.Kind() == reflect.Ptr && len(components[1:]) == 0 { + if !value.IsNil() { + value.Set(reflect.Zero(value.Type())) + } + return []reflect.Value{value} + } + + if createPath && value.Kind() == reflect.Ptr && value.IsNil() { + // TODO if the value is the terminus it should not be created + // if the value to be set to its position is nil. + value.Set(reflect.New(value.Type().Elem())) + value = value.Elem() + } else { + value = reflect.Indirect(value) + } + + if value.Kind() == reflect.Slice || value.Kind() == reflect.Map { + if !createPath && value.IsNil() { + value = reflect.ValueOf(nil) + } + } + + if value.IsValid() { + nextvals = append(nextvals, value) + } + } + values = nextvals + + if indexStar || index != nil { + nextvals = []reflect.Value{} + for _, valItem := range values { + value := reflect.Indirect(valItem) + if value.Kind() != reflect.Slice { + continue + } + + if indexStar { // grab all indices + for i := 0; i < value.Len(); i++ { + idx := reflect.Indirect(value.Index(i)) + if idx.IsValid() { + nextvals = append(nextvals, idx) + } + } + continue + } + + // pull out index + i := int(*index) + if i >= value.Len() { // check out of bounds + if createPath { + // TODO resize slice + } else { + continue + } + } else if i < 0 { // support negative indexing + i = value.Len() + i + } + value = reflect.Indirect(value.Index(i)) + + if value.Kind() == reflect.Slice || value.Kind() == reflect.Map { + if !createPath && value.IsNil() { + value = reflect.ValueOf(nil) + } + } + + if value.IsValid() { + nextvals = append(nextvals, value) + } + } + values = nextvals + } + + components = components[1:] + } + return values +} + +// ValuesAtPath returns a list of values at the case insensitive lexical +// path inside of a structure. 
+func ValuesAtPath(i interface{}, path string) ([]interface{}, error) { + result, err := jmespath.Search(path, i) + if err != nil { + return nil, err + } + + v := reflect.ValueOf(result) + if !v.IsValid() || (v.Kind() == reflect.Ptr && v.IsNil()) { + return nil, nil + } + if s, ok := result.([]interface{}); ok { + return s, err + } + if v.Kind() == reflect.Map && v.Len() == 0 { + return nil, nil + } + if v.Kind() == reflect.Slice { + out := make([]interface{}, v.Len()) + for i := 0; i < v.Len(); i++ { + out[i] = v.Index(i).Interface() + } + return out, nil + } + + return []interface{}{result}, nil +} + +// SetValueAtPath sets a value at the case insensitive lexical path inside +// of a structure. +func SetValueAtPath(i interface{}, path string, v interface{}) { + if rvals := rValuesAtPath(i, path, true, false, v == nil); rvals != nil { + for _, rval := range rvals { + if rval.Kind() == reflect.Ptr && rval.IsNil() { + continue + } + setValue(rval, v) + } + } +} + +func setValue(dstVal reflect.Value, src interface{}) { + if dstVal.Kind() == reflect.Ptr { + dstVal = reflect.Indirect(dstVal) + } + srcVal := reflect.ValueOf(src) + + if !srcVal.IsValid() { // src is literal nil + if dstVal.CanAddr() { + // Convert to pointer so that pointer's value can be nil'ed + // dstVal = dstVal.Addr() + } + dstVal.Set(reflect.Zero(dstVal.Type())) + + } else if srcVal.Kind() == reflect.Ptr { + if srcVal.IsNil() { + srcVal = reflect.Zero(dstVal.Type()) + } else { + srcVal = reflect.ValueOf(src).Elem() + } + dstVal.Set(srcVal) + } else { + dstVal.Set(srcVal) + } + +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/prettify.go b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/prettify.go new file mode 100644 index 00000000..710eb432 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/prettify.go @@ -0,0 +1,113 @@ +package awsutil + +import ( + "bytes" + "fmt" + "io" + "reflect" + "strings" +) + +// Prettify returns the string representation of a value. +func Prettify(i interface{}) string { + var buf bytes.Buffer + prettify(reflect.ValueOf(i), 0, &buf) + return buf.String() +} + +// prettify will recursively walk value v to build a textual +// representation of the value. 
+func prettify(v reflect.Value, indent int, buf *bytes.Buffer) { + for v.Kind() == reflect.Ptr { + v = v.Elem() + } + + switch v.Kind() { + case reflect.Struct: + strtype := v.Type().String() + if strtype == "time.Time" { + fmt.Fprintf(buf, "%s", v.Interface()) + break + } else if strings.HasPrefix(strtype, "io.") { + buf.WriteString("") + break + } + + buf.WriteString("{\n") + + names := []string{} + for i := 0; i < v.Type().NumField(); i++ { + name := v.Type().Field(i).Name + f := v.Field(i) + if name[0:1] == strings.ToLower(name[0:1]) { + continue // ignore unexported fields + } + if (f.Kind() == reflect.Ptr || f.Kind() == reflect.Slice || f.Kind() == reflect.Map) && f.IsNil() { + continue // ignore unset fields + } + names = append(names, name) + } + + for i, n := range names { + val := v.FieldByName(n) + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(n + ": ") + prettify(val, indent+2, buf) + + if i < len(names)-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + case reflect.Slice: + strtype := v.Type().String() + if strtype == "[]uint8" { + fmt.Fprintf(buf, " len %d", v.Len()) + break + } + + nl, id, id2 := "", "", "" + if v.Len() > 3 { + nl, id, id2 = "\n", strings.Repeat(" ", indent), strings.Repeat(" ", indent+2) + } + buf.WriteString("[" + nl) + for i := 0; i < v.Len(); i++ { + buf.WriteString(id2) + prettify(v.Index(i), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString("," + nl) + } + } + + buf.WriteString(nl + id + "]") + case reflect.Map: + buf.WriteString("{\n") + + for i, k := range v.MapKeys() { + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(k.String() + ": ") + prettify(v.MapIndex(k), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + default: + if !v.IsValid() { + fmt.Fprint(buf, "") + return + } + format := "%v" + switch v.Interface().(type) { + case string: + format = "%q" + case io.ReadSeeker, io.Reader: + format = "buffer(%p)" + } + fmt.Fprintf(buf, format, v.Interface()) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/string_value.go b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/string_value.go new file mode 100644 index 00000000..b6432f1a --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/string_value.go @@ -0,0 +1,89 @@ +package awsutil + +import ( + "bytes" + "fmt" + "reflect" + "strings" +) + +// StringValue returns the string representation of a value. 
+func StringValue(i interface{}) string { + var buf bytes.Buffer + stringValue(reflect.ValueOf(i), 0, &buf) + return buf.String() +} + +func stringValue(v reflect.Value, indent int, buf *bytes.Buffer) { + for v.Kind() == reflect.Ptr { + v = v.Elem() + } + + switch v.Kind() { + case reflect.Struct: + buf.WriteString("{\n") + + names := []string{} + for i := 0; i < v.Type().NumField(); i++ { + name := v.Type().Field(i).Name + f := v.Field(i) + if name[0:1] == strings.ToLower(name[0:1]) { + continue // ignore unexported fields + } + if (f.Kind() == reflect.Ptr || f.Kind() == reflect.Slice) && f.IsNil() { + continue // ignore unset fields + } + names = append(names, name) + } + + for i, n := range names { + val := v.FieldByName(n) + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(n + ": ") + stringValue(val, indent+2, buf) + + if i < len(names)-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + case reflect.Slice: + nl, id, id2 := "", "", "" + if v.Len() > 3 { + nl, id, id2 = "\n", strings.Repeat(" ", indent), strings.Repeat(" ", indent+2) + } + buf.WriteString("[" + nl) + for i := 0; i < v.Len(); i++ { + buf.WriteString(id2) + stringValue(v.Index(i), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString("," + nl) + } + } + + buf.WriteString(nl + id + "]") + case reflect.Map: + buf.WriteString("{\n") + + for i, k := range v.MapKeys() { + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(k.String() + ": ") + stringValue(v.MapIndex(k), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + default: + format := "%v" + switch v.Interface().(type) { + case string: + format = "%q" + } + fmt.Fprintf(buf, format, v.Interface()) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/client.go b/vendor/github.com/aws/aws-sdk-go/aws/client/client.go new file mode 100644 index 00000000..3271a18e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/client.go @@ -0,0 +1,96 @@ +package client + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" +) + +// A Config provides configuration to a service client instance. +type Config struct { + Config *aws.Config + Handlers request.Handlers + Endpoint string + SigningRegion string + SigningName string + + // States that the signing name did not come from a modeled source but + // was derived based on other data. Used by service client constructors + // to determine if the signin name can be overriden based on metadata the + // service has. + SigningNameDerived bool +} + +// ConfigProvider provides a generic way for a service client to receive +// the ClientConfig without circular dependencies. +type ConfigProvider interface { + ClientConfig(serviceName string, cfgs ...*aws.Config) Config +} + +// ConfigNoResolveEndpointProvider same as ConfigProvider except it will not +// resolve the endpoint automatically. The service client's endpoint must be +// provided via the aws.Config.Endpoint field. +type ConfigNoResolveEndpointProvider interface { + ClientConfigNoResolveEndpoint(cfgs ...*aws.Config) Config +} + +// A Client implements the base client request and response handling +// used by all service clients. +type Client struct { + request.Retryer + metadata.ClientInfo + + Config aws.Config + Handlers request.Handlers +} + +// New will return a pointer to a new initialized service client. 
+func New(cfg aws.Config, info metadata.ClientInfo, handlers request.Handlers, options ...func(*Client)) *Client { + svc := &Client{ + Config: cfg, + ClientInfo: info, + Handlers: handlers.Copy(), + } + + switch retryer, ok := cfg.Retryer.(request.Retryer); { + case ok: + svc.Retryer = retryer + case cfg.Retryer != nil && cfg.Logger != nil: + s := fmt.Sprintf("WARNING: %T does not implement request.Retryer; using DefaultRetryer instead", cfg.Retryer) + cfg.Logger.Log(s) + fallthrough + default: + maxRetries := aws.IntValue(cfg.MaxRetries) + if cfg.MaxRetries == nil || maxRetries == aws.UseServiceDefaultRetries { + maxRetries = 3 + } + svc.Retryer = DefaultRetryer{NumMaxRetries: maxRetries} + } + + svc.AddDebugHandlers() + + for _, option := range options { + option(svc) + } + + return svc +} + +// NewRequest returns a new Request pointer for the service API +// operation and parameters. +func (c *Client) NewRequest(operation *request.Operation, params interface{}, data interface{}) *request.Request { + return request.New(c.Config, c.ClientInfo, c.Handlers, c.Retryer, operation, params, data) +} + +// AddDebugHandlers injects debug logging handlers into the service to log request +// debug information. +func (c *Client) AddDebugHandlers() { + if !c.Config.LogLevel.AtLeast(aws.LogDebug) { + return + } + + c.Handlers.Send.PushFrontNamed(request.NamedHandler{Name: "awssdk.client.LogRequest", Fn: logRequest}) + c.Handlers.Send.PushBackNamed(request.NamedHandler{Name: "awssdk.client.LogResponse", Fn: logResponse}) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go new file mode 100644 index 00000000..a397b0d0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go @@ -0,0 +1,116 @@ +package client + +import ( + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/sdkrand" +) + +// DefaultRetryer implements basic retry logic using exponential backoff for +// most services. If you want to implement custom retry logic, implement the +// request.Retryer interface or create a structure type that composes this +// struct and override the specific methods. For example, to override only +// the MaxRetries method: +// +// type retryer struct { +// client.DefaultRetryer +// } +// +// // This implementation always has 100 max retries +// func (d retryer) MaxRetries() int { return 100 } +type DefaultRetryer struct { + NumMaxRetries int +} + +// MaxRetries returns the number of maximum returns the service will use to make +// an individual API request. +func (d DefaultRetryer) MaxRetries() int { + return d.NumMaxRetries +} + +// RetryRules returns the delay duration before retrying this request again +func (d DefaultRetryer) RetryRules(r *request.Request) time.Duration { + // Set the upper limit of delay in retrying at ~five minutes + minTime := 30 + throttle := d.shouldThrottle(r) + if throttle { + if delay, ok := getRetryDelay(r); ok { + return delay + } + + minTime = 500 + } + + retryCount := r.RetryCount + if throttle && retryCount > 8 { + retryCount = 8 + } else if retryCount > 13 { + retryCount = 13 + } + + delay := (1 << uint(retryCount)) * (sdkrand.SeededRand.Intn(minTime) + minTime) + return time.Duration(delay) * time.Millisecond +} + +// ShouldRetry returns true if the request should be retried. 
+func (d DefaultRetryer) ShouldRetry(r *request.Request) bool { + // If one of the other handlers already set the retry state + // we don't want to override it based on the service's state + if r.Retryable != nil { + return *r.Retryable + } + + if r.HTTPResponse.StatusCode >= 500 && r.HTTPResponse.StatusCode != 501 { + return true + } + return r.IsErrorRetryable() || d.shouldThrottle(r) +} + +// ShouldThrottle returns true if the request should be throttled. +func (d DefaultRetryer) shouldThrottle(r *request.Request) bool { + switch r.HTTPResponse.StatusCode { + case 429: + case 502: + case 503: + case 504: + default: + return r.IsErrorThrottle() + } + + return true +} + +// This will look in the Retry-After header, RFC 7231, for how long +// it will wait before attempting another request +func getRetryDelay(r *request.Request) (time.Duration, bool) { + if !canUseRetryAfterHeader(r) { + return 0, false + } + + delayStr := r.HTTPResponse.Header.Get("Retry-After") + if len(delayStr) == 0 { + return 0, false + } + + delay, err := strconv.Atoi(delayStr) + if err != nil { + return 0, false + } + + return time.Duration(delay) * time.Second, true +} + +// Will look at the status code to see if the retry header pertains to +// the status code. +func canUseRetryAfterHeader(r *request.Request) bool { + switch r.HTTPResponse.StatusCode { + case 429: + case 503: + default: + return false + } + + return true +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go b/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go new file mode 100644 index 00000000..e223c54c --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go @@ -0,0 +1,112 @@ +package client + +import ( + "bytes" + "fmt" + "io" + "io/ioutil" + "net/http/httputil" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +const logReqMsg = `DEBUG: Request %s/%s Details: +---[ REQUEST POST-SIGN ]----------------------------- +%s +-----------------------------------------------------` + +const logReqErrMsg = `DEBUG ERROR: Request %s/%s: +---[ REQUEST DUMP ERROR ]----------------------------- +%s +------------------------------------------------------` + +type logWriter struct { + // Logger is what we will use to log the payload of a response. + Logger aws.Logger + // buf stores the contents of what has been read + buf *bytes.Buffer +} + +func (logger *logWriter) Write(b []byte) (int, error) { + return logger.buf.Write(b) +} + +type teeReaderCloser struct { + // io.Reader will be a tee reader that is used during logging. + // This structure will read from a body and write the contents to a logger. + io.Reader + // Source is used just to close when we are done reading. + Source io.ReadCloser +} + +func (reader *teeReaderCloser) Close() error { + return reader.Source.Close() +} + +func logRequest(r *request.Request) { + logBody := r.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody) + bodySeekable := aws.IsReaderSeekable(r.Body) + dumpedBody, err := httputil.DumpRequestOut(r.HTTPRequest, logBody) + if err != nil { + r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + + if logBody { + if !bodySeekable { + r.SetReaderBody(aws.ReadSeekCloser(r.HTTPRequest.Body)) + } + // Reset the request body because dumpRequest will re-wrap the r.HTTPRequest's + // Body as a NoOpCloser and will not be reset after read by the HTTP + // client reader. 
+ r.ResetBody() + } + + r.Config.Logger.Log(fmt.Sprintf(logReqMsg, r.ClientInfo.ServiceName, r.Operation.Name, string(dumpedBody))) +} + +const logRespMsg = `DEBUG: Response %s/%s Details: +---[ RESPONSE ]-------------------------------------- +%s +-----------------------------------------------------` + +const logRespErrMsg = `DEBUG ERROR: Response %s/%s: +---[ RESPONSE DUMP ERROR ]----------------------------- +%s +-----------------------------------------------------` + +func logResponse(r *request.Request) { + lw := &logWriter{r.Config.Logger, bytes.NewBuffer(nil)} + r.HTTPResponse.Body = &teeReaderCloser{ + Reader: io.TeeReader(r.HTTPResponse.Body, lw), + Source: r.HTTPResponse.Body, + } + + handlerFn := func(req *request.Request) { + body, err := httputil.DumpResponse(req.HTTPResponse, false) + if err != nil { + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, req.ClientInfo.ServiceName, req.Operation.Name, err)) + return + } + + b, err := ioutil.ReadAll(lw.buf) + if err != nil { + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, req.ClientInfo.ServiceName, req.Operation.Name, err)) + return + } + lw.Logger.Log(fmt.Sprintf(logRespMsg, req.ClientInfo.ServiceName, req.Operation.Name, string(body))) + if req.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody) { + lw.Logger.Log(string(b)) + } + } + + const handlerName = "awsdk.client.LogResponse.ResponseBody" + + r.Handlers.Unmarshal.SetBackNamed(request.NamedHandler{ + Name: handlerName, Fn: handlerFn, + }) + r.Handlers.UnmarshalError.SetBackNamed(request.NamedHandler{ + Name: handlerName, Fn: handlerFn, + }) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go b/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go new file mode 100644 index 00000000..4778056d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go @@ -0,0 +1,12 @@ +package metadata + +// ClientInfo wraps immutable data from the client.Client structure. +type ClientInfo struct { + ServiceName string + APIVersion string + Endpoint string + SigningName string + SigningRegion string + JSONVersion string + TargetPrefix string +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/config.go b/vendor/github.com/aws/aws-sdk-go/aws/config.go new file mode 100644 index 00000000..5421b5d4 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/config.go @@ -0,0 +1,492 @@ +package aws + +import ( + "net/http" + "time" + + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/endpoints" +) + +// UseServiceDefaultRetries instructs the config to use the service's own +// default number of retries. This will be the default action if +// Config.MaxRetries is nil also. +const UseServiceDefaultRetries = -1 + +// RequestRetryer is an alias for a type that implements the request.Retryer +// interface. +type RequestRetryer interface{} + +// A Config provides service configuration for service clients. By default, +// all clients will use the defaults.DefaultConfig tructure. +// +// // Create Session with MaxRetry configuration to be shared by multiple +// // service clients. +// sess := session.Must(session.NewSession(&aws.Config{ +// MaxRetries: aws.Int(3), +// })) +// +// // Create S3 service client with a specific Region. +// svc := s3.New(sess, &aws.Config{ +// Region: aws.String("us-west-2"), +// }) +type Config struct { + // Enables verbose error printing of all credential chain errors. + // Should be used when wanting to see all errors while attempting to + // retrieve credentials. 
+ CredentialsChainVerboseErrors *bool + + // The credentials object to use when signing requests. Defaults to a + // chain of credential providers to search for credentials in environment + // variables, shared credential file, and EC2 Instance Roles. + Credentials *credentials.Credentials + + // An optional endpoint URL (hostname only or fully qualified URI) + // that overrides the default generated endpoint for a client. Set this + // to `""` to use the default generated endpoint. + // + // @note You must still provide a `Region` value when specifying an + // endpoint for a client. + Endpoint *string + + // The resolver to use for looking up endpoints for AWS service clients + // to use based on region. + EndpointResolver endpoints.Resolver + + // EnforceShouldRetryCheck is used in the AfterRetryHandler to always call + // ShouldRetry regardless of whether or not if request.Retryable is set. + // This will utilize ShouldRetry method of custom retryers. If EnforceShouldRetryCheck + // is not set, then ShouldRetry will only be called if request.Retryable is nil. + // Proper handling of the request.Retryable field is important when setting this field. + EnforceShouldRetryCheck *bool + + // The region to send requests to. This parameter is required and must + // be configured globally or on a per-client basis unless otherwise + // noted. A full list of regions is found in the "Regions and Endpoints" + // document. + // + // @see http://docs.aws.amazon.com/general/latest/gr/rande.html + // AWS Regions and Endpoints + Region *string + + // Set this to `true` to disable SSL when sending requests. Defaults + // to `false`. + DisableSSL *bool + + // The HTTP client to use when sending requests. Defaults to + // `http.DefaultClient`. + HTTPClient *http.Client + + // An integer value representing the logging level. The default log level + // is zero (LogOff), which represents no logging. To enable logging set + // to a LogLevel Value. + LogLevel *LogLevelType + + // The logger writer interface to write logging messages to. Defaults to + // standard out. + Logger Logger + + // The maximum number of times that a request will be retried for failures. + // Defaults to -1, which defers the max retry setting to the service + // specific configuration. + MaxRetries *int + + // Retryer guides how HTTP requests should be retried in case of + // recoverable failures. + // + // When nil or the value does not implement the request.Retryer interface, + // the client.DefaultRetryer will be used. + // + // When both Retryer and MaxRetries are non-nil, the former is used and + // the latter ignored. + // + // To set the Retryer field in a type-safe manner and with chaining, use + // the request.WithRetryer helper function: + // + // cfg := request.WithRetryer(aws.NewConfig(), myRetryer) + // + Retryer RequestRetryer + + // Disables semantic parameter validation, which validates input for + // missing required fields and/or other semantic request input errors. + DisableParamValidation *bool + + // Disables the computation of request and response checksums, e.g., + // CRC32 checksums in Amazon DynamoDB. + DisableComputeChecksums *bool + + // Set this to `true` to force the request to use path-style addressing, + // i.e., `http://s3.amazonaws.com/BUCKET/KEY`. By default, the S3 client + // will use virtual hosted bucket addressing when possible + // (`http://BUCKET.s3.amazonaws.com/KEY`). + // + // @note This configuration option is specific to the Amazon S3 service. 
+	// @see http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
+	// Amazon S3: Virtual Hosting of Buckets
+	S3ForcePathStyle *bool
+
+	// Set this to `true` to disable the SDK adding the `Expect: 100-Continue`
+	// header to PUT requests over 2MB of content. 100-Continue instructs the
+	// HTTP client not to send the body until the service responds with a
+	// `continue` status. This is useful to prevent sending the request body
+	// until after the request is authenticated and validated.
+	//
+	// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
+	//
+	// 100-Continue is only enabled for Go 1.6 and above. See `http.Transport`'s
+	// `ExpectContinueTimeout` for information on adjusting the continue wait
+	// timeout. https://golang.org/pkg/net/http/#Transport
+	//
+	// You should use this flag to disable 100-Continue if you experience issues
+	// with proxies or third party S3 compatible services.
+	S3Disable100Continue *bool
+
+	// Set this to `true` to enable the S3 Accelerate feature. All operations
+	// compatible with S3 Accelerate will use the accelerate endpoint for
+	// requests. Requests not compatible will fall back to normal S3 requests.
+	//
+	// The bucket must be enabled for accelerate to be used with the S3 client
+	// with accelerate enabled. If the bucket is not enabled for accelerate an
+	// error will be returned. The bucket name must be DNS compatible to also
+	// work with accelerate.
+	S3UseAccelerate *bool
+
+	// S3DisableContentMD5Validation config option is temporarily disabled
+	// for S3 GetObject API calls, #1837.
+	//
+	// Set this to `true` to disable the S3 service client from automatically
+	// adding the ContentMD5 to S3 Object Put and Upload API calls. This option
+	// will also disable the SDK from performing object ContentMD5 validation
+	// on GetObject API calls.
+	S3DisableContentMD5Validation *bool
+
+	// Set this to `true` to disable the EC2Metadata client from overriding the
+	// default http.Client's Timeout. This is helpful if you do not want the
+	// EC2Metadata client to create a new http.Client. This option is only
+	// meaningful if you're not already using a custom HTTP client with the
+	// SDK. Enabled by default.
+	//
+	// Must be set and provided to the session.NewSession() in order to disable
+	// the EC2Metadata client from overriding the timeout for the default
+	// credentials chain.
+	//
+	// Example:
+	//    sess := session.Must(session.NewSession(aws.NewConfig()
+	//       .WithEC2MetadataDisableTimeoutOverride(true)))
+	//
+	//    svc := s3.New(sess)
+	//
+	EC2MetadataDisableTimeoutOverride *bool
+
+	// Instructs the endpoint to be generated for a service client to
+	// be the dual stack endpoint. The dual stack endpoint will support
+	// both IPv4 and IPv6 addressing.
+	//
+	// Setting this for a service which does not support dual stack will fail
+	// to make requests. It is not recommended to set this value on the session
+	// as it will apply to all service clients created with the session, even
+	// services which don't support dual stack endpoints.
+	//
+	// If the Endpoint config value is also provided the UseDualStack flag
+	// will be ignored.
+	//
+	// Example:
+	//
+	//  sess := session.Must(session.NewSession())
+	//
+	//  svc := s3.New(sess, &aws.Config{
+	//      UseDualStack: aws.Bool(true),
+	//  })
+	UseDualStack *bool
+
+	// SleepDelay is an override for the func the SDK will call when sleeping
+	// during the lifecycle of a request. Specifically this will be used for
+	// request delays.
This value should only be used for testing. To adjust + // the delay of a request see the aws/client.DefaultRetryer and + // aws/request.Retryer. + // + // SleepDelay will prevent any Context from being used for canceling retry + // delay of an API operation. It is recommended to not use SleepDelay at all + // and specify a Retryer instead. + SleepDelay func(time.Duration) + + // DisableRestProtocolURICleaning will not clean the URL path when making rest protocol requests. + // Will default to false. This would only be used for empty directory names in s3 requests. + // + // Example: + // sess := session.Must(session.NewSession(&aws.Config{ + // DisableRestProtocolURICleaning: aws.Bool(true), + // })) + // + // svc := s3.New(sess) + // out, err := svc.GetObject(&s3.GetObjectInput { + // Bucket: aws.String("bucketname"), + // Key: aws.String("//foo//bar//moo"), + // }) + DisableRestProtocolURICleaning *bool +} + +// NewConfig returns a new Config pointer that can be chained with builder +// methods to set multiple configuration values inline without using pointers. +// +// // Create Session with MaxRetry configuration to be shared by multiple +// // service clients. +// sess := session.Must(session.NewSession(aws.NewConfig(). +// WithMaxRetries(3), +// )) +// +// // Create S3 service client with a specific Region. +// svc := s3.New(sess, aws.NewConfig(). +// WithRegion("us-west-2"), +// ) +func NewConfig() *Config { + return &Config{} +} + +// WithCredentialsChainVerboseErrors sets a config verbose errors boolean and returning +// a Config pointer. +func (c *Config) WithCredentialsChainVerboseErrors(verboseErrs bool) *Config { + c.CredentialsChainVerboseErrors = &verboseErrs + return c +} + +// WithCredentials sets a config Credentials value returning a Config pointer +// for chaining. +func (c *Config) WithCredentials(creds *credentials.Credentials) *Config { + c.Credentials = creds + return c +} + +// WithEndpoint sets a config Endpoint value returning a Config pointer for +// chaining. +func (c *Config) WithEndpoint(endpoint string) *Config { + c.Endpoint = &endpoint + return c +} + +// WithEndpointResolver sets a config EndpointResolver value returning a +// Config pointer for chaining. +func (c *Config) WithEndpointResolver(resolver endpoints.Resolver) *Config { + c.EndpointResolver = resolver + return c +} + +// WithRegion sets a config Region value returning a Config pointer for +// chaining. +func (c *Config) WithRegion(region string) *Config { + c.Region = ®ion + return c +} + +// WithDisableSSL sets a config DisableSSL value returning a Config pointer +// for chaining. +func (c *Config) WithDisableSSL(disable bool) *Config { + c.DisableSSL = &disable + return c +} + +// WithHTTPClient sets a config HTTPClient value returning a Config pointer +// for chaining. +func (c *Config) WithHTTPClient(client *http.Client) *Config { + c.HTTPClient = client + return c +} + +// WithMaxRetries sets a config MaxRetries value returning a Config pointer +// for chaining. +func (c *Config) WithMaxRetries(max int) *Config { + c.MaxRetries = &max + return c +} + +// WithDisableParamValidation sets a config DisableParamValidation value +// returning a Config pointer for chaining. +func (c *Config) WithDisableParamValidation(disable bool) *Config { + c.DisableParamValidation = &disable + return c +} + +// WithDisableComputeChecksums sets a config DisableComputeChecksums value +// returning a Config pointer for chaining. 
+func (c *Config) WithDisableComputeChecksums(disable bool) *Config { + c.DisableComputeChecksums = &disable + return c +} + +// WithLogLevel sets a config LogLevel value returning a Config pointer for +// chaining. +func (c *Config) WithLogLevel(level LogLevelType) *Config { + c.LogLevel = &level + return c +} + +// WithLogger sets a config Logger value returning a Config pointer for +// chaining. +func (c *Config) WithLogger(logger Logger) *Config { + c.Logger = logger + return c +} + +// WithS3ForcePathStyle sets a config S3ForcePathStyle value returning a Config +// pointer for chaining. +func (c *Config) WithS3ForcePathStyle(force bool) *Config { + c.S3ForcePathStyle = &force + return c +} + +// WithS3Disable100Continue sets a config S3Disable100Continue value returning +// a Config pointer for chaining. +func (c *Config) WithS3Disable100Continue(disable bool) *Config { + c.S3Disable100Continue = &disable + return c +} + +// WithS3UseAccelerate sets a config S3UseAccelerate value returning a Config +// pointer for chaining. +func (c *Config) WithS3UseAccelerate(enable bool) *Config { + c.S3UseAccelerate = &enable + return c + +} + +// WithS3DisableContentMD5Validation sets a config +// S3DisableContentMD5Validation value returning a Config pointer for chaining. +func (c *Config) WithS3DisableContentMD5Validation(enable bool) *Config { + c.S3DisableContentMD5Validation = &enable + return c + +} + +// WithUseDualStack sets a config UseDualStack value returning a Config +// pointer for chaining. +func (c *Config) WithUseDualStack(enable bool) *Config { + c.UseDualStack = &enable + return c +} + +// WithEC2MetadataDisableTimeoutOverride sets a config EC2MetadataDisableTimeoutOverride value +// returning a Config pointer for chaining. +func (c *Config) WithEC2MetadataDisableTimeoutOverride(enable bool) *Config { + c.EC2MetadataDisableTimeoutOverride = &enable + return c +} + +// WithSleepDelay overrides the function used to sleep while waiting for the +// next retry. Defaults to time.Sleep. +func (c *Config) WithSleepDelay(fn func(time.Duration)) *Config { + c.SleepDelay = fn + return c +} + +// MergeIn merges the passed in configs into the existing config object. 
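+//
+// A minimal sketch of the merge behavior: only fields that are non-nil on a
+// passed-in config are copied, so values from later configs override earlier
+// ones while unset fields are left alone.
+//
+//    base := aws.NewConfig().WithRegion("us-west-2").WithMaxRetries(3)
+//    override := aws.NewConfig().WithMaxRetries(10)
+//
+//    base.MergeIn(override)
+//    // base still has Region "us-west-2", but MaxRetries is now 10.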
+func (c *Config) MergeIn(cfgs ...*Config) { + for _, other := range cfgs { + mergeInConfig(c, other) + } +} + +func mergeInConfig(dst *Config, other *Config) { + if other == nil { + return + } + + if other.CredentialsChainVerboseErrors != nil { + dst.CredentialsChainVerboseErrors = other.CredentialsChainVerboseErrors + } + + if other.Credentials != nil { + dst.Credentials = other.Credentials + } + + if other.Endpoint != nil { + dst.Endpoint = other.Endpoint + } + + if other.EndpointResolver != nil { + dst.EndpointResolver = other.EndpointResolver + } + + if other.Region != nil { + dst.Region = other.Region + } + + if other.DisableSSL != nil { + dst.DisableSSL = other.DisableSSL + } + + if other.HTTPClient != nil { + dst.HTTPClient = other.HTTPClient + } + + if other.LogLevel != nil { + dst.LogLevel = other.LogLevel + } + + if other.Logger != nil { + dst.Logger = other.Logger + } + + if other.MaxRetries != nil { + dst.MaxRetries = other.MaxRetries + } + + if other.Retryer != nil { + dst.Retryer = other.Retryer + } + + if other.DisableParamValidation != nil { + dst.DisableParamValidation = other.DisableParamValidation + } + + if other.DisableComputeChecksums != nil { + dst.DisableComputeChecksums = other.DisableComputeChecksums + } + + if other.S3ForcePathStyle != nil { + dst.S3ForcePathStyle = other.S3ForcePathStyle + } + + if other.S3Disable100Continue != nil { + dst.S3Disable100Continue = other.S3Disable100Continue + } + + if other.S3UseAccelerate != nil { + dst.S3UseAccelerate = other.S3UseAccelerate + } + + if other.S3DisableContentMD5Validation != nil { + dst.S3DisableContentMD5Validation = other.S3DisableContentMD5Validation + } + + if other.UseDualStack != nil { + dst.UseDualStack = other.UseDualStack + } + + if other.EC2MetadataDisableTimeoutOverride != nil { + dst.EC2MetadataDisableTimeoutOverride = other.EC2MetadataDisableTimeoutOverride + } + + if other.SleepDelay != nil { + dst.SleepDelay = other.SleepDelay + } + + if other.DisableRestProtocolURICleaning != nil { + dst.DisableRestProtocolURICleaning = other.DisableRestProtocolURICleaning + } + + if other.EnforceShouldRetryCheck != nil { + dst.EnforceShouldRetryCheck = other.EnforceShouldRetryCheck + } +} + +// Copy will return a shallow copy of the Config object. If any additional +// configurations are provided they will be merged into the new config returned. +func (c *Config) Copy(cfgs ...*Config) *Config { + dst := &Config{} + dst.MergeIn(c) + + for _, cfg := range cfgs { + dst.MergeIn(cfg) + } + + return dst +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/context.go b/vendor/github.com/aws/aws-sdk-go/aws/context.go new file mode 100644 index 00000000..79f42685 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/context.go @@ -0,0 +1,71 @@ +package aws + +import ( + "time" +) + +// Context is an copy of the Go v1.7 stdlib's context.Context interface. +// It is represented as a SDK interface to enable you to use the "WithContext" +// API methods with Go v1.6 and a Context type such as golang.org/x/net/context. +// +// See https://golang.org/pkg/context on how to use contexts. +type Context interface { + // Deadline returns the time when work done on behalf of this context + // should be canceled. Deadline returns ok==false when no deadline is + // set. Successive calls to Deadline return the same results. + Deadline() (deadline time.Time, ok bool) + + // Done returns a channel that's closed when work done on behalf of this + // context should be canceled. 
Done may return nil if this context can + // never be canceled. Successive calls to Done return the same value. + Done() <-chan struct{} + + // Err returns a non-nil error value after Done is closed. Err returns + // Canceled if the context was canceled or DeadlineExceeded if the + // context's deadline passed. No other values for Err are defined. + // After Done is closed, successive calls to Err return the same value. + Err() error + + // Value returns the value associated with this context for key, or nil + // if no value is associated with key. Successive calls to Value with + // the same key returns the same result. + // + // Use context values only for request-scoped data that transits + // processes and API boundaries, not for passing optional parameters to + // functions. + Value(key interface{}) interface{} +} + +// BackgroundContext returns a context that will never be canceled, has no +// values, and no deadline. This context is used by the SDK to provide +// backwards compatibility with non-context API operations and functionality. +// +// Go 1.6 and before: +// This context function is equivalent to context.Background in the Go stdlib. +// +// Go 1.7 and later: +// The context returned will be the value returned by context.Background() +// +// See https://golang.org/pkg/context for more information on Contexts. +func BackgroundContext() Context { + return backgroundCtx +} + +// SleepWithContext will wait for the timer duration to expire, or the context +// is canceled. Which ever happens first. If the context is canceled the Context's +// error will be returned. +// +// Expects Context to always return a non-nil error if the Done channel is closed. +func SleepWithContext(ctx Context, dur time.Duration) error { + t := time.NewTimer(dur) + defer t.Stop() + + select { + case <-t.C: + break + case <-ctx.Done(): + return ctx.Err() + } + + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/context_1_6.go b/vendor/github.com/aws/aws-sdk-go/aws/context_1_6.go new file mode 100644 index 00000000..8fdda530 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/context_1_6.go @@ -0,0 +1,41 @@ +// +build !go1.7 + +package aws + +import "time" + +// An emptyCtx is a copy of the Go 1.7 context.emptyCtx type. This is copied to +// provide a 1.6 and 1.5 safe version of context that is compatible with Go +// 1.7's Context. +// +// An emptyCtx is never canceled, has no values, and has no deadline. It is not +// struct{}, since vars of this type must have distinct addresses. 
+type emptyCtx int + +func (*emptyCtx) Deadline() (deadline time.Time, ok bool) { + return +} + +func (*emptyCtx) Done() <-chan struct{} { + return nil +} + +func (*emptyCtx) Err() error { + return nil +} + +func (*emptyCtx) Value(key interface{}) interface{} { + return nil +} + +func (e *emptyCtx) String() string { + switch e { + case backgroundCtx: + return "aws.BackgroundContext" + } + return "unknown empty Context" +} + +var ( + backgroundCtx = new(emptyCtx) +) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/context_1_7.go b/vendor/github.com/aws/aws-sdk-go/aws/context_1_7.go new file mode 100644 index 00000000..064f75c9 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/context_1_7.go @@ -0,0 +1,9 @@ +// +build go1.7 + +package aws + +import "context" + +var ( + backgroundCtx = context.Background() +) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go b/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go new file mode 100644 index 00000000..ff5d58e0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/convert_types.go @@ -0,0 +1,387 @@ +package aws + +import "time" + +// String returns a pointer to the string value passed in. +func String(v string) *string { + return &v +} + +// StringValue returns the value of the string pointer passed in or +// "" if the pointer is nil. +func StringValue(v *string) string { + if v != nil { + return *v + } + return "" +} + +// StringSlice converts a slice of string values into a slice of +// string pointers +func StringSlice(src []string) []*string { + dst := make([]*string, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// StringValueSlice converts a slice of string pointers into a slice of +// string values +func StringValueSlice(src []*string) []string { + dst := make([]string, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// StringMap converts a string map of string values into a string +// map of string pointers +func StringMap(src map[string]string) map[string]*string { + dst := make(map[string]*string) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// StringValueMap converts a string map of string pointers into a string +// map of string values +func StringValueMap(src map[string]*string) map[string]string { + dst := make(map[string]string) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Bool returns a pointer to the bool value passed in. +func Bool(v bool) *bool { + return &v +} + +// BoolValue returns the value of the bool pointer passed in or +// false if the pointer is nil. 
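+//
+// An illustrative sketch of the pointer/value helper pattern used throughout
+// this file, from a caller's point of view:
+//
+//    enabled := aws.Bool(true)    // *bool holding true
+//    v := aws.BoolValue(enabled)  // v == true
+//    z := aws.BoolValue(nil)      // z == false, the zero value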
+func BoolValue(v *bool) bool { + if v != nil { + return *v + } + return false +} + +// BoolSlice converts a slice of bool values into a slice of +// bool pointers +func BoolSlice(src []bool) []*bool { + dst := make([]*bool, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// BoolValueSlice converts a slice of bool pointers into a slice of +// bool values +func BoolValueSlice(src []*bool) []bool { + dst := make([]bool, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// BoolMap converts a string map of bool values into a string +// map of bool pointers +func BoolMap(src map[string]bool) map[string]*bool { + dst := make(map[string]*bool) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// BoolValueMap converts a string map of bool pointers into a string +// map of bool values +func BoolValueMap(src map[string]*bool) map[string]bool { + dst := make(map[string]bool) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int returns a pointer to the int value passed in. +func Int(v int) *int { + return &v +} + +// IntValue returns the value of the int pointer passed in or +// 0 if the pointer is nil. +func IntValue(v *int) int { + if v != nil { + return *v + } + return 0 +} + +// IntSlice converts a slice of int values into a slice of +// int pointers +func IntSlice(src []int) []*int { + dst := make([]*int, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// IntValueSlice converts a slice of int pointers into a slice of +// int values +func IntValueSlice(src []*int) []int { + dst := make([]int, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// IntMap converts a string map of int values into a string +// map of int pointers +func IntMap(src map[string]int) map[string]*int { + dst := make(map[string]*int) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// IntValueMap converts a string map of int pointers into a string +// map of int values +func IntValueMap(src map[string]*int) map[string]int { + dst := make(map[string]int) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int64 returns a pointer to the int64 value passed in. +func Int64(v int64) *int64 { + return &v +} + +// Int64Value returns the value of the int64 pointer passed in or +// 0 if the pointer is nil. 
+func Int64Value(v *int64) int64 { + if v != nil { + return *v + } + return 0 +} + +// Int64Slice converts a slice of int64 values into a slice of +// int64 pointers +func Int64Slice(src []int64) []*int64 { + dst := make([]*int64, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int64ValueSlice converts a slice of int64 pointers into a slice of +// int64 values +func Int64ValueSlice(src []*int64) []int64 { + dst := make([]int64, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int64Map converts a string map of int64 values into a string +// map of int64 pointers +func Int64Map(src map[string]int64) map[string]*int64 { + dst := make(map[string]*int64) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int64ValueMap converts a string map of int64 pointers into a string +// map of int64 values +func Int64ValueMap(src map[string]*int64) map[string]int64 { + dst := make(map[string]int64) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Float64 returns a pointer to the float64 value passed in. +func Float64(v float64) *float64 { + return &v +} + +// Float64Value returns the value of the float64 pointer passed in or +// 0 if the pointer is nil. +func Float64Value(v *float64) float64 { + if v != nil { + return *v + } + return 0 +} + +// Float64Slice converts a slice of float64 values into a slice of +// float64 pointers +func Float64Slice(src []float64) []*float64 { + dst := make([]*float64, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Float64ValueSlice converts a slice of float64 pointers into a slice of +// float64 values +func Float64ValueSlice(src []*float64) []float64 { + dst := make([]float64, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Float64Map converts a string map of float64 values into a string +// map of float64 pointers +func Float64Map(src map[string]float64) map[string]*float64 { + dst := make(map[string]*float64) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Float64ValueMap converts a string map of float64 pointers into a string +// map of float64 values +func Float64ValueMap(src map[string]*float64) map[string]float64 { + dst := make(map[string]float64) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Time returns a pointer to the time.Time value passed in. +func Time(v time.Time) *time.Time { + return &v +} + +// TimeValue returns the value of the time.Time pointer passed in or +// time.Time{} if the pointer is nil. +func TimeValue(v *time.Time) time.Time { + if v != nil { + return *v + } + return time.Time{} +} + +// SecondsTimeValue converts an int64 pointer to a time.Time value +// representing seconds since Epoch or time.Time{} if the pointer is nil. +func SecondsTimeValue(v *int64) time.Time { + if v != nil { + return time.Unix((*v / 1000), 0) + } + return time.Time{} +} + +// MillisecondsTimeValue converts an int64 pointer to a time.Time value +// representing milliseconds sinch Epoch or time.Time{} if the pointer is nil. +func MillisecondsTimeValue(v *int64) time.Time { + if v != nil { + return time.Unix(0, (*v * 1000000)) + } + return time.Time{} +} + +// TimeUnixMilli returns a Unix timestamp in milliseconds from "January 1, 1970 UTC". 
+// The result is undefined if the Unix time cannot be represented by an int64. +// Which includes calling TimeUnixMilli on a zero Time is undefined. +// +// This utility is useful for service API's such as CloudWatch Logs which require +// their unix time values to be in milliseconds. +// +// See Go stdlib https://golang.org/pkg/time/#Time.UnixNano for more information. +func TimeUnixMilli(t time.Time) int64 { + return t.UnixNano() / int64(time.Millisecond/time.Nanosecond) +} + +// TimeSlice converts a slice of time.Time values into a slice of +// time.Time pointers +func TimeSlice(src []time.Time) []*time.Time { + dst := make([]*time.Time, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// TimeValueSlice converts a slice of time.Time pointers into a slice of +// time.Time values +func TimeValueSlice(src []*time.Time) []time.Time { + dst := make([]time.Time, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// TimeMap converts a string map of time.Time values into a string +// map of time.Time pointers +func TimeMap(src map[string]time.Time) map[string]*time.Time { + dst := make(map[string]*time.Time) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// TimeValueMap converts a string map of time.Time pointers into a string +// map of time.Time values +func TimeValueMap(src map[string]*time.Time) map[string]time.Time { + dst := make(map[string]time.Time) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go new file mode 100644 index 00000000..cfcddf3d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go @@ -0,0 +1,228 @@ +package corehandlers + +import ( + "bytes" + "fmt" + "io/ioutil" + "net/http" + "net/url" + "regexp" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/request" +) + +// Interface for matching types which also have a Len method. +type lener interface { + Len() int +} + +// BuildContentLengthHandler builds the content length of a request based on the body, +// or will use the HTTPRequest.Header's "Content-Length" if defined. If unable +// to determine request body length and no "Content-Length" was specified it will panic. +// +// The Content-Length will only be added to the request if the length of the body +// is greater than 0. If the body is empty or the current `Content-Length` +// header is <= 0, the header will also be stripped. 
+var BuildContentLengthHandler = request.NamedHandler{Name: "core.BuildContentLengthHandler", Fn: func(r *request.Request) { + var length int64 + + if slength := r.HTTPRequest.Header.Get("Content-Length"); slength != "" { + length, _ = strconv.ParseInt(slength, 10, 64) + } else { + if r.Body != nil { + var err error + length, err = aws.SeekerLen(r.Body) + if err != nil { + r.Error = awserr.New(request.ErrCodeSerialization, "failed to get request body's length", err) + return + } + } + } + + if length > 0 { + r.HTTPRequest.ContentLength = length + r.HTTPRequest.Header.Set("Content-Length", fmt.Sprintf("%d", length)) + } else { + r.HTTPRequest.ContentLength = 0 + r.HTTPRequest.Header.Del("Content-Length") + } +}} + +var reStatusCode = regexp.MustCompile(`^(\d{3})`) + +// ValidateReqSigHandler is a request handler to ensure that the request's +// signature doesn't expire before it is sent. This can happen when a request +// is built and signed significantly before it is sent. Or significant delays +// occur when retrying requests that would cause the signature to expire. +var ValidateReqSigHandler = request.NamedHandler{ + Name: "core.ValidateReqSigHandler", + Fn: func(r *request.Request) { + // Unsigned requests are not signed + if r.Config.Credentials == credentials.AnonymousCredentials { + return + } + + signedTime := r.Time + if !r.LastSignedAt.IsZero() { + signedTime = r.LastSignedAt + } + + // 10 minutes to allow for some clock skew/delays in transmission. + // Would be improved with aws/aws-sdk-go#423 + if signedTime.Add(10 * time.Minute).After(time.Now()) { + return + } + + fmt.Println("request expired, resigning") + r.Sign() + }, +} + +// SendHandler is a request handler to send service request using HTTP client. +var SendHandler = request.NamedHandler{ + Name: "core.SendHandler", + Fn: func(r *request.Request) { + sender := sendFollowRedirects + if r.DisableFollowRedirects { + sender = sendWithoutFollowRedirects + } + + if request.NoBody == r.HTTPRequest.Body { + // Strip off the request body if the NoBody reader was used as a + // place holder for a request body. This prevents the SDK from + // making requests with a request body when it would be invalid + // to do so. + // + // Use a shallow copy of the http.Request to ensure the race condition + // of transport on Body will not trigger + reqOrig, reqCopy := r.HTTPRequest, *r.HTTPRequest + reqCopy.Body = nil + r.HTTPRequest = &reqCopy + defer func() { + r.HTTPRequest = reqOrig + }() + } + + var err error + r.HTTPResponse, err = sender(r) + if err != nil { + handleSendError(r, err) + } + }, +} + +func sendFollowRedirects(r *request.Request) (*http.Response, error) { + return r.Config.HTTPClient.Do(r.HTTPRequest) +} + +func sendWithoutFollowRedirects(r *request.Request) (*http.Response, error) { + transport := r.Config.HTTPClient.Transport + if transport == nil { + transport = http.DefaultTransport + } + + return transport.RoundTrip(r.HTTPRequest) +} + +func handleSendError(r *request.Request, err error) { + // Prevent leaking if an HTTPResponse was returned. Clean up + // the body. + if r.HTTPResponse != nil { + r.HTTPResponse.Body.Close() + } + // Capture the case where url.Error is returned for error processing + // response. e.g. 301 without location header comes back as string + // error and r.HTTPResponse is nil. Other URL redirect errors will + // comeback in a similar method. 
+ if e, ok := err.(*url.Error); ok && e.Err != nil { + if s := reStatusCode.FindStringSubmatch(e.Err.Error()); s != nil { + code, _ := strconv.ParseInt(s[1], 10, 64) + r.HTTPResponse = &http.Response{ + StatusCode: int(code), + Status: http.StatusText(int(code)), + Body: ioutil.NopCloser(bytes.NewReader([]byte{})), + } + return + } + } + if r.HTTPResponse == nil { + // Add a dummy request response object to ensure the HTTPResponse + // value is consistent. + r.HTTPResponse = &http.Response{ + StatusCode: int(0), + Status: http.StatusText(int(0)), + Body: ioutil.NopCloser(bytes.NewReader([]byte{})), + } + } + // Catch all other request errors. + r.Error = awserr.New("RequestError", "send request failed", err) + r.Retryable = aws.Bool(true) // network errors are retryable + + // Override the error with a context canceled error, if that was canceled. + ctx := r.Context() + select { + case <-ctx.Done(): + r.Error = awserr.New(request.CanceledErrorCode, + "request context canceled", ctx.Err()) + r.Retryable = aws.Bool(false) + default: + } +} + +// ValidateResponseHandler is a request handler to validate service response. +var ValidateResponseHandler = request.NamedHandler{Name: "core.ValidateResponseHandler", Fn: func(r *request.Request) { + if r.HTTPResponse.StatusCode == 0 || r.HTTPResponse.StatusCode >= 300 { + // this may be replaced by an UnmarshalError handler + r.Error = awserr.New("UnknownError", "unknown error", nil) + } +}} + +// AfterRetryHandler performs final checks to determine if the request should +// be retried and how long to delay. +var AfterRetryHandler = request.NamedHandler{Name: "core.AfterRetryHandler", Fn: func(r *request.Request) { + // If one of the other handlers already set the retry state + // we don't want to override it based on the service's state + if r.Retryable == nil || aws.BoolValue(r.Config.EnforceShouldRetryCheck) { + r.Retryable = aws.Bool(r.ShouldRetry(r)) + } + + if r.WillRetry() { + r.RetryDelay = r.RetryRules(r) + + if sleepFn := r.Config.SleepDelay; sleepFn != nil { + // Support SleepDelay for backwards compatibility and testing + sleepFn(r.RetryDelay) + } else if err := aws.SleepWithContext(r.Context(), r.RetryDelay); err != nil { + r.Error = awserr.New(request.CanceledErrorCode, + "request context canceled", err) + r.Retryable = aws.Bool(false) + return + } + + // when the expired token exception occurs the credentials + // need to be expired locally so that the next request to + // get credentials will trigger a credentials refresh. + if r.IsErrorExpired() { + r.Config.Credentials.Expire() + } + + r.RetryCount++ + r.Error = nil + } +}} + +// ValidateEndpointHandler is a request handler to validate a request had the +// appropriate Region and Endpoint set. Will set r.Error if the endpoint or +// region is not valid. 
+var ValidateEndpointHandler = request.NamedHandler{Name: "core.ValidateEndpointHandler", Fn: func(r *request.Request) { + if r.ClientInfo.SigningRegion == "" && aws.StringValue(r.Config.Region) == "" { + r.Error = aws.ErrMissingRegion + } else if r.ClientInfo.Endpoint == "" { + r.Error = aws.ErrMissingEndpoint + } +}} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/param_validator.go b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/param_validator.go new file mode 100644 index 00000000..7d50b155 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/param_validator.go @@ -0,0 +1,17 @@ +package corehandlers + +import "github.com/aws/aws-sdk-go/aws/request" + +// ValidateParametersHandler is a request handler to validate the input parameters. +// Validating parameters only has meaning if done prior to the request being sent. +var ValidateParametersHandler = request.NamedHandler{Name: "core.ValidateParametersHandler", Fn: func(r *request.Request) { + if !r.ParamsFilled() { + return + } + + if v, ok := r.Params.(request.Validator); ok { + if err := v.Validate(); err != nil { + r.Error = err + } + } +}} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/user_agent.go b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/user_agent.go new file mode 100644 index 00000000..a15f496b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/user_agent.go @@ -0,0 +1,37 @@ +package corehandlers + +import ( + "os" + "runtime" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// SDKVersionUserAgentHandler is a request handler for adding the SDK Version +// to the user agent. +var SDKVersionUserAgentHandler = request.NamedHandler{ + Name: "core.SDKVersionUserAgentHandler", + Fn: request.MakeAddToUserAgentHandler(aws.SDKName, aws.SDKVersion, + runtime.Version(), runtime.GOOS, runtime.GOARCH), +} + +const execEnvVar = `AWS_EXECUTION_ENV` +const execEnvUAKey = `exec_env` + +// AddHostExecEnvUserAgentHander is a request handler appending the SDK's +// execution environment to the user agent. +// +// If the environment variable AWS_EXECUTION_ENV is set, its value will be +// appended to the user agent string. +var AddHostExecEnvUserAgentHander = request.NamedHandler{ + Name: "core.AddHostExecEnvUserAgentHander", + Fn: func(r *request.Request) { + v := os.Getenv(execEnvVar) + if len(v) == 0 { + return + } + + request.AddToUserAgent(r, execEnvUAKey+"/"+v) + }, +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go new file mode 100644 index 00000000..f298d659 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go @@ -0,0 +1,102 @@ +package credentials + +import ( + "github.com/aws/aws-sdk-go/aws/awserr" +) + +var ( + // ErrNoValidProvidersFoundInChain Is returned when there are no valid + // providers in the ChainProvider. + // + // This has been deprecated. For verbose error messaging set + // aws.Config.CredentialsChainVerboseErrors to true + // + // @readonly + ErrNoValidProvidersFoundInChain = awserr.New("NoCredentialProviders", + `no valid providers in chain. Deprecated. + For verbose messaging see aws.Config.CredentialsChainVerboseErrors`, + nil) +) + +// A ChainProvider will search for a provider which returns credentials +// and cache that provider until Retrieve is called again. 
+// +// The ChainProvider provides a way of chaining multiple providers together +// which will pick the first available using priority order of the Providers +// in the list. +// +// If none of the Providers retrieve valid credentials Value, ChainProvider's +// Retrieve() will return the error ErrNoValidProvidersFoundInChain. +// +// If a Provider is found which returns valid credentials Value ChainProvider +// will cache that Provider for all calls to IsExpired(), until Retrieve is +// called again. +// +// Example of ChainProvider to be used with an EnvProvider and EC2RoleProvider. +// In this example EnvProvider will first check if any credentials are available +// via the environment variables. If there are none ChainProvider will check +// the next Provider in the list, EC2RoleProvider in this case. If EC2RoleProvider +// does not return any credentials ChainProvider will return the error +// ErrNoValidProvidersFoundInChain +// +// creds := credentials.NewChainCredentials( +// []credentials.Provider{ +// &credentials.EnvProvider{}, +// &ec2rolecreds.EC2RoleProvider{ +// Client: ec2metadata.New(sess), +// }, +// }) +// +// // Usage of ChainCredentials with aws.Config +// svc := ec2.New(session.Must(session.NewSession(&aws.Config{ +// Credentials: creds, +// }))) +// +type ChainProvider struct { + Providers []Provider + curr Provider + VerboseErrors bool +} + +// NewChainCredentials returns a pointer to a new Credentials object +// wrapping a chain of providers. +func NewChainCredentials(providers []Provider) *Credentials { + return NewCredentials(&ChainProvider{ + Providers: append([]Provider{}, providers...), + }) +} + +// Retrieve returns the credentials value or error if no provider returned +// without error. +// +// If a provider is found it will be cached and any calls to IsExpired() +// will return the expired state of the cached provider. +func (c *ChainProvider) Retrieve() (Value, error) { + var errs []error + for _, p := range c.Providers { + creds, err := p.Retrieve() + if err == nil { + c.curr = p + return creds, nil + } + errs = append(errs, err) + } + c.curr = nil + + var err error + err = ErrNoValidProvidersFoundInChain + if c.VerboseErrors { + err = awserr.NewBatchError("NoCredentialProviders", "no valid providers in chain", errs) + } + return Value{}, err +} + +// IsExpired will returned the expired state of the currently cached provider +// if there is one. If there is no current provider, true will be returned. +func (c *ChainProvider) IsExpired() bool { + if c.curr != nil { + return c.curr.IsExpired() + } + + return true +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go new file mode 100644 index 00000000..42416fc2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go @@ -0,0 +1,246 @@ +// Package credentials provides credential retrieval and management +// +// The Credentials is the primary method of getting access to and managing +// credentials Values. Using dependency injection retrieval of the credential +// values is handled by a object which satisfies the Provider interface. +// +// By default the Credentials.Get() will cache the successful result of a +// Provider's Retrieve() until Provider.IsExpired() returns true. At which +// point Credentials will call Provider's Retrieve() to get new credential Value. +// +// The Provider is responsible for determining when credentials Value have expired. 
+// It is also important to note that Credentials will always call Retrieve the +// first time Credentials.Get() is called. +// +// Example of using the environment variable credentials. +// +// creds := credentials.NewEnvCredentials() +// +// // Retrieve the credentials value +// credValue, err := creds.Get() +// if err != nil { +// // handle error +// } +// +// Example of forcing credentials to expire and be refreshed on the next Get(). +// This may be helpful to proactively expire credentials and refresh them sooner +// than they would naturally expire on their own. +// +// creds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{}) +// creds.Expire() +// credsValue, err := creds.Get() +// // New credentials will be retrieved instead of from cache. +// +// +// Custom Provider +// +// Each Provider built into this package also provides a helper method to generate +// a Credentials pointer setup with the provider. To use a custom Provider just +// create a type which satisfies the Provider interface and pass it to the +// NewCredentials method. +// +// type MyProvider struct{} +// func (m *MyProvider) Retrieve() (Value, error) {...} +// func (m *MyProvider) IsExpired() bool {...} +// +// creds := credentials.NewCredentials(&MyProvider{}) +// credValue, err := creds.Get() +// +package credentials + +import ( + "sync" + "time" +) + +// AnonymousCredentials is an empty Credential object that can be used as +// dummy placeholder credentials for requests that do not need signed. +// +// This Credentials can be used to configure a service to not sign requests +// when making service API calls. For example, when accessing public +// s3 buckets. +// +// svc := s3.New(session.Must(session.NewSession(&aws.Config{ +// Credentials: credentials.AnonymousCredentials, +// }))) +// // Access public S3 buckets. +// +// @readonly +var AnonymousCredentials = NewStaticCredentials("", "", "") + +// A Value is the AWS credentials value for individual credential fields. +type Value struct { + // AWS Access key ID + AccessKeyID string + + // AWS Secret Access Key + SecretAccessKey string + + // AWS Session Token + SessionToken string + + // Provider used to get credentials + ProviderName string +} + +// A Provider is the interface for any component which will provide credentials +// Value. A provider is required to manage its own Expired state, and what to +// be expired means. +// +// The Provider should not need to implement its own mutexes, because +// that will be managed by Credentials. +type Provider interface { + // Retrieve returns nil if it successfully retrieved the value. + // Error is returned if the value were not obtainable, or empty. + Retrieve() (Value, error) + + // IsExpired returns if the credentials are no longer valid, and need + // to be retrieved. + IsExpired() bool +} + +// An ErrorProvider is a stub credentials provider that always returns an error +// this is used by the SDK when construction a known provider is not possible +// due to an error. +type ErrorProvider struct { + // The error to be returned from Retrieve + Err error + + // The provider name to set on the Retrieved returned Value + ProviderName string +} + +// Retrieve will always return the error that the ErrorProvider was created with. +func (p ErrorProvider) Retrieve() (Value, error) { + return Value{ProviderName: p.ProviderName}, p.Err +} + +// IsExpired will always return not expired. 
+func (p ErrorProvider) IsExpired() bool { + return false +} + +// A Expiry provides shared expiration logic to be used by credentials +// providers to implement expiry functionality. +// +// The best method to use this struct is as an anonymous field within the +// provider's struct. +// +// Example: +// type EC2RoleProvider struct { +// Expiry +// ... +// } +type Expiry struct { + // The date/time when to expire on + expiration time.Time + + // If set will be used by IsExpired to determine the current time. + // Defaults to time.Now if CurrentTime is not set. Available for testing + // to be able to mock out the current time. + CurrentTime func() time.Time +} + +// SetExpiration sets the expiration IsExpired will check when called. +// +// If window is greater than 0 the expiration time will be reduced by the +// window value. +// +// Using a window is helpful to trigger credentials to expire sooner than +// the expiration time given to ensure no requests are made with expired +// tokens. +func (e *Expiry) SetExpiration(expiration time.Time, window time.Duration) { + e.expiration = expiration + if window > 0 { + e.expiration = e.expiration.Add(-window) + } +} + +// IsExpired returns if the credentials are expired. +func (e *Expiry) IsExpired() bool { + if e.CurrentTime == nil { + e.CurrentTime = time.Now + } + return e.expiration.Before(e.CurrentTime()) +} + +// A Credentials provides synchronous safe retrieval of AWS credentials Value. +// Credentials will cache the credentials value until they expire. Once the value +// expires the next Get will attempt to retrieve valid credentials. +// +// Credentials is safe to use across multiple goroutines and will manage the +// synchronous state so the Providers do not need to implement their own +// synchronization. +// +// The first Credentials.Get() will always call Provider.Retrieve() to get the +// first instance of the credentials Value. All calls to Get() after that +// will return the cached credentials Value until IsExpired() returns true. +type Credentials struct { + creds Value + forceRefresh bool + m sync.Mutex + + provider Provider +} + +// NewCredentials returns a pointer to a new Credentials with the provider set. +func NewCredentials(provider Provider) *Credentials { + return &Credentials{ + provider: provider, + forceRefresh: true, + } +} + +// Get returns the credentials value, or error if the credentials Value failed +// to be retrieved. +// +// Will return the cached credentials Value if it has not expired. If the +// credentials Value has expired the Provider's Retrieve() will be called +// to refresh the credentials. +// +// If Credentials.Expire() was called the credentials Value will be force +// expired, and the next call to Get() will cause them to be refreshed. +func (c *Credentials) Get() (Value, error) { + c.m.Lock() + defer c.m.Unlock() + + if c.isExpired() { + creds, err := c.provider.Retrieve() + if err != nil { + return Value{}, err + } + c.creds = creds + c.forceRefresh = false + } + + return c.creds, nil +} + +// Expire expires the credentials and forces them to be retrieved on the +// next call to Get(). +// +// This will override the Provider's expired state, and force Credentials +// to call the Provider's Retrieve(). +func (c *Credentials) Expire() { + c.m.Lock() + defer c.m.Unlock() + + c.forceRefresh = true +} + +// IsExpired returns if the credentials are no longer valid, and need +// to be retrieved. +// +// If the Credentials were forced to be expired with Expire() this will +// reflect that override. 
+func (c *Credentials) IsExpired() bool { + c.m.Lock() + defer c.m.Unlock() + + return c.isExpired() +} + +// isExpired helper method wrapping the definition of expired credentials. +func (c *Credentials) isExpired() bool { + return c.forceRefresh || c.provider.IsExpired() +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go new file mode 100644 index 00000000..c3974952 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go @@ -0,0 +1,178 @@ +package ec2rolecreds + +import ( + "bufio" + "encoding/json" + "fmt" + "path" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/ec2metadata" +) + +// ProviderName provides a name of EC2Role provider +const ProviderName = "EC2RoleProvider" + +// A EC2RoleProvider retrieves credentials from the EC2 service, and keeps track if +// those credentials are expired. +// +// Example how to configure the EC2RoleProvider with custom http Client, Endpoint +// or ExpiryWindow +// +// p := &ec2rolecreds.EC2RoleProvider{ +// // Pass in a custom timeout to be used when requesting +// // IAM EC2 Role credentials. +// Client: ec2metadata.New(sess, aws.Config{ +// HTTPClient: &http.Client{Timeout: 10 * time.Second}, +// }), +// +// // Do not use early expiry of credentials. If a non zero value is +// // specified the credentials will be expired early +// ExpiryWindow: 0, +// } +type EC2RoleProvider struct { + credentials.Expiry + + // Required EC2Metadata client to use when connecting to EC2 metadata service. + Client *ec2metadata.EC2Metadata + + // ExpiryWindow will allow the credentials to trigger refreshing prior to + // the credentials actually expiring. This is beneficial so race conditions + // with expiring credentials do not cause request to fail unexpectedly + // due to ExpiredTokenException exceptions. + // + // So a ExpiryWindow of 10s would cause calls to IsExpired() to return true + // 10 seconds before the credentials are actually expired. + // + // If ExpiryWindow is 0 or less it will be ignored. + ExpiryWindow time.Duration +} + +// NewCredentials returns a pointer to a new Credentials object wrapping +// the EC2RoleProvider. Takes a ConfigProvider to create a EC2Metadata client. +// The ConfigProvider is satisfied by the session.Session type. +func NewCredentials(c client.ConfigProvider, options ...func(*EC2RoleProvider)) *credentials.Credentials { + p := &EC2RoleProvider{ + Client: ec2metadata.New(c), + } + + for _, option := range options { + option(p) + } + + return credentials.NewCredentials(p) +} + +// NewCredentialsWithClient returns a pointer to a new Credentials object wrapping +// the EC2RoleProvider. Takes a EC2Metadata client to use when connecting to EC2 +// metadata service. +func NewCredentialsWithClient(client *ec2metadata.EC2Metadata, options ...func(*EC2RoleProvider)) *credentials.Credentials { + p := &EC2RoleProvider{ + Client: client, + } + + for _, option := range options { + option(p) + } + + return credentials.NewCredentials(p) +} + +// Retrieve retrieves credentials from the EC2 service. +// Error will be returned if the request fails, or unable to extract +// the desired credentials. 
+func (m *EC2RoleProvider) Retrieve() (credentials.Value, error) { + credsList, err := requestCredList(m.Client) + if err != nil { + return credentials.Value{ProviderName: ProviderName}, err + } + + if len(credsList) == 0 { + return credentials.Value{ProviderName: ProviderName}, awserr.New("EmptyEC2RoleList", "empty EC2 Role list", nil) + } + credsName := credsList[0] + + roleCreds, err := requestCred(m.Client, credsName) + if err != nil { + return credentials.Value{ProviderName: ProviderName}, err + } + + m.SetExpiration(roleCreds.Expiration, m.ExpiryWindow) + + return credentials.Value{ + AccessKeyID: roleCreds.AccessKeyID, + SecretAccessKey: roleCreds.SecretAccessKey, + SessionToken: roleCreds.Token, + ProviderName: ProviderName, + }, nil +} + +// A ec2RoleCredRespBody provides the shape for unmarshaling credential +// request responses. +type ec2RoleCredRespBody struct { + // Success State + Expiration time.Time + AccessKeyID string + SecretAccessKey string + Token string + + // Error state + Code string + Message string +} + +const iamSecurityCredsPath = "/iam/security-credentials" + +// requestCredList requests a list of credentials from the EC2 service. +// If there are no credentials, or there is an error making or receiving the request +func requestCredList(client *ec2metadata.EC2Metadata) ([]string, error) { + resp, err := client.GetMetadata(iamSecurityCredsPath) + if err != nil { + return nil, awserr.New("EC2RoleRequestError", "no EC2 instance role found", err) + } + + credsList := []string{} + s := bufio.NewScanner(strings.NewReader(resp)) + for s.Scan() { + credsList = append(credsList, s.Text()) + } + + if err := s.Err(); err != nil { + return nil, awserr.New("SerializationError", "failed to read EC2 instance role from metadata service", err) + } + + return credsList, nil +} + +// requestCred requests the credentials for a specific credentials from the EC2 service. +// +// If the credentials cannot be found, or there is an error reading the response +// and error will be returned. +func requestCred(client *ec2metadata.EC2Metadata, credsName string) (ec2RoleCredRespBody, error) { + resp, err := client.GetMetadata(path.Join(iamSecurityCredsPath, credsName)) + if err != nil { + return ec2RoleCredRespBody{}, + awserr.New("EC2RoleRequestError", + fmt.Sprintf("failed to get %s EC2 instance role credentials", credsName), + err) + } + + respCreds := ec2RoleCredRespBody{} + if err := json.NewDecoder(strings.NewReader(resp)).Decode(&respCreds); err != nil { + return ec2RoleCredRespBody{}, + awserr.New("SerializationError", + fmt.Sprintf("failed to decode %s EC2 instance role credentials", credsName), + err) + } + + if respCreds.Code != "Success" { + // If an error code was returned something failed requesting the role. + return ec2RoleCredRespBody{}, awserr.New(respCreds.Code, respCreds.Message, nil) + } + + return respCreds, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go new file mode 100644 index 00000000..a4cec5c5 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go @@ -0,0 +1,191 @@ +// Package endpointcreds provides support for retrieving credentials from an +// arbitrary HTTP endpoint. +// +// The credentials endpoint Provider can receive both static and refreshable +// credentials that will expire. Credentials are static when an "Expiration" +// value is not provided in the endpoint's response. 
+// +// Static credentials will never expire once they have been retrieved. The format +// of the static credentials response: +// { +// "AccessKeyId" : "MUA...", +// "SecretAccessKey" : "/7PC5om....", +// } +// +// Refreshable credentials will expire within the "ExpiryWindow" of the Expiration +// value in the response. The format of the refreshable credentials response: +// { +// "AccessKeyId" : "MUA...", +// "SecretAccessKey" : "/7PC5om....", +// "Token" : "AQoDY....=", +// "Expiration" : "2016-02-25T06:03:31Z" +// } +// +// Errors should be returned in the following format and only returned with 400 +// or 500 HTTP status codes. +// { +// "code": "ErrorCode", +// "message": "Helpful error message." +// } +package endpointcreds + +import ( + "encoding/json" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/request" +) + +// ProviderName is the name of the credentials provider. +const ProviderName = `CredentialsEndpointProvider` + +// Provider satisfies the credentials.Provider interface, and is a client to +// retrieve credentials from an arbitrary endpoint. +type Provider struct { + staticCreds bool + credentials.Expiry + + // Requires a AWS Client to make HTTP requests to the endpoint with. + // the Endpoint the request will be made to is provided by the aws.Config's + // Endpoint value. + Client *client.Client + + // ExpiryWindow will allow the credentials to trigger refreshing prior to + // the credentials actually expiring. This is beneficial so race conditions + // with expiring credentials do not cause request to fail unexpectedly + // due to ExpiredTokenException exceptions. + // + // So a ExpiryWindow of 10s would cause calls to IsExpired() to return true + // 10 seconds before the credentials are actually expired. + // + // If ExpiryWindow is 0 or less it will be ignored. + ExpiryWindow time.Duration +} + +// NewProviderClient returns a credentials Provider for retrieving AWS credentials +// from arbitrary endpoint. +func NewProviderClient(cfg aws.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) credentials.Provider { + p := &Provider{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: "CredentialsEndpoint", + Endpoint: endpoint, + }, + handlers, + ), + } + + p.Client.Handlers.Unmarshal.PushBack(unmarshalHandler) + p.Client.Handlers.UnmarshalError.PushBack(unmarshalError) + p.Client.Handlers.Validate.Clear() + p.Client.Handlers.Validate.PushBack(validateEndpointHandler) + + for _, option := range options { + option(p) + } + + return p +} + +// NewCredentialsClient returns a Credentials wrapper for retrieving credentials +// from an arbitrary endpoint concurrently. The client will request the +func NewCredentialsClient(cfg aws.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) *credentials.Credentials { + return credentials.NewCredentials(NewProviderClient(cfg, handlers, endpoint, options...)) +} + +// IsExpired returns true if the credentials retrieved are expired, or not yet +// retrieved. +func (p *Provider) IsExpired() bool { + if p.staticCreds { + return false + } + return p.Expiry.IsExpired() +} + +// Retrieve will attempt to request the credentials from the endpoint the Provider +// was configured for. And error will be returned if the retrieval fails. 
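+//
+// A rough sketch of wiring the provider up via NewCredentialsClient; the
+// endpoint URL is a placeholder and cfg/handlers are assumed to be supplied
+// by the caller:
+//
+//    creds := endpointcreds.NewCredentialsClient(cfg, handlers,
+//        "http://127.0.0.1:8080/latest/credentials",
+//        func(p *endpointcreds.Provider) {
+//            p.ExpiryWindow = 30 * time.Second
+//        })
+//    v, err := creds.Get()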
+func (p *Provider) Retrieve() (credentials.Value, error) { + resp, err := p.getCredentials() + if err != nil { + return credentials.Value{ProviderName: ProviderName}, + awserr.New("CredentialsEndpointError", "failed to load credentials", err) + } + + if resp.Expiration != nil { + p.SetExpiration(*resp.Expiration, p.ExpiryWindow) + } else { + p.staticCreds = true + } + + return credentials.Value{ + AccessKeyID: resp.AccessKeyID, + SecretAccessKey: resp.SecretAccessKey, + SessionToken: resp.Token, + ProviderName: ProviderName, + }, nil +} + +type getCredentialsOutput struct { + Expiration *time.Time + AccessKeyID string + SecretAccessKey string + Token string +} + +type errorOutput struct { + Code string `json:"code"` + Message string `json:"message"` +} + +func (p *Provider) getCredentials() (*getCredentialsOutput, error) { + op := &request.Operation{ + Name: "GetCredentials", + HTTPMethod: "GET", + } + + out := &getCredentialsOutput{} + req := p.Client.NewRequest(op, nil, out) + req.HTTPRequest.Header.Set("Accept", "application/json") + + return out, req.Send() +} + +func validateEndpointHandler(r *request.Request) { + if len(r.ClientInfo.Endpoint) == 0 { + r.Error = aws.ErrMissingEndpoint + } +} + +func unmarshalHandler(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + out := r.Data.(*getCredentialsOutput) + if err := json.NewDecoder(r.HTTPResponse.Body).Decode(&out); err != nil { + r.Error = awserr.New("SerializationError", + "failed to decode endpoint credentials", + err, + ) + } +} + +func unmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + var errOut errorOutput + if err := json.NewDecoder(r.HTTPResponse.Body).Decode(&errOut); err != nil { + r.Error = awserr.New("SerializationError", + "failed to decode endpoint credentials", + err, + ) + } + + // Response body format is not consistent between metadata endpoints. + // Grab the error message as a string and include that as the source error + r.Error = awserr.New(errOut.Code, errOut.Message, nil) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go new file mode 100644 index 00000000..c14231a1 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go @@ -0,0 +1,78 @@ +package credentials + +import ( + "os" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +// EnvProviderName provides a name of Env provider +const EnvProviderName = "EnvProvider" + +var ( + // ErrAccessKeyIDNotFound is returned when the AWS Access Key ID can't be + // found in the process's environment. + // + // @readonly + ErrAccessKeyIDNotFound = awserr.New("EnvAccessKeyNotFound", "AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment", nil) + + // ErrSecretAccessKeyNotFound is returned when the AWS Secret Access Key + // can't be found in the process's environment. + // + // @readonly + ErrSecretAccessKeyNotFound = awserr.New("EnvSecretNotFound", "AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment", nil) +) + +// A EnvProvider retrieves credentials from the environment variables of the +// running process. Environment credentials never expire. +// +// Environment variables used: +// +// * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY +// +// * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY +type EnvProvider struct { + retrieved bool +} + +// NewEnvCredentials returns a pointer to a new Credentials object +// wrapping the environment variable provider. 
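The endpoint credentials provider completed above is what the SDK's defaults package wires up for container credential URIs. A minimal sketch of using it directly, assuming a hypothetical local credentials server at http://127.0.0.1:8080/creds that returns the JSON shape described in the package comment:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials/endpointcreds"
	"github.com/aws/aws-sdk-go/aws/defaults"
)

func main() {
	// Reuse the SDK's default config and handlers for the underlying client.
	def := defaults.Get()

	// The endpoint URL here is a placeholder for a real credentials endpoint.
	creds := endpointcreds.NewCredentialsClient(*def.Config, def.Handlers,
		"http://127.0.0.1:8080/creds",
		func(p *endpointcreds.Provider) {
			// Refresh 5 minutes before the reported Expiration.
			p.ExpiryWindow = 5 * time.Minute
		},
	)

	v, err := creds.Get()
	if err != nil {
		fmt.Println("endpoint credentials unavailable:", err)
		return
	}
	fmt.Println("loaded credentials via", v.ProviderName)
}
```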
+func NewEnvCredentials() *Credentials { + return NewCredentials(&EnvProvider{}) +} + +// Retrieve retrieves the keys from the environment. +func (e *EnvProvider) Retrieve() (Value, error) { + e.retrieved = false + + id := os.Getenv("AWS_ACCESS_KEY_ID") + if id == "" { + id = os.Getenv("AWS_ACCESS_KEY") + } + + secret := os.Getenv("AWS_SECRET_ACCESS_KEY") + if secret == "" { + secret = os.Getenv("AWS_SECRET_KEY") + } + + if id == "" { + return Value{ProviderName: EnvProviderName}, ErrAccessKeyIDNotFound + } + + if secret == "" { + return Value{ProviderName: EnvProviderName}, ErrSecretAccessKeyNotFound + } + + e.retrieved = true + return Value{ + AccessKeyID: id, + SecretAccessKey: secret, + SessionToken: os.Getenv("AWS_SESSION_TOKEN"), + ProviderName: EnvProviderName, + }, nil +} + +// IsExpired returns if the credentials have been retrieved. +func (e *EnvProvider) IsExpired() bool { + return !e.retrieved +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go new file mode 100644 index 00000000..51e21e0f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go @@ -0,0 +1,150 @@ +package credentials + +import ( + "fmt" + "os" + + "github.com/go-ini/ini" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/internal/shareddefaults" +) + +// SharedCredsProviderName provides a name of SharedCreds provider +const SharedCredsProviderName = "SharedCredentialsProvider" + +var ( + // ErrSharedCredentialsHomeNotFound is emitted when the user directory cannot be found. + ErrSharedCredentialsHomeNotFound = awserr.New("UserHomeNotFound", "user home directory not found.", nil) +) + +// A SharedCredentialsProvider retrieves credentials from the current user's home +// directory, and keeps track if those credentials are expired. +// +// Profile ini file example: $HOME/.aws/credentials +type SharedCredentialsProvider struct { + // Path to the shared credentials file. + // + // If empty will look for "AWS_SHARED_CREDENTIALS_FILE" env variable. If the + // env value is empty will default to current user's home directory. + // Linux/OSX: "$HOME/.aws/credentials" + // Windows: "%USERPROFILE%\.aws\credentials" + Filename string + + // AWS Profile to extract credentials from the shared credentials file. If empty + // will default to environment variable "AWS_PROFILE" or "default" if + // environment variable is also not set. + Profile string + + // retrieved states if the credentials have been successfully retrieved. + retrieved bool +} + +// NewSharedCredentials returns a pointer to a new Credentials object +// wrapping the Profile file provider. +func NewSharedCredentials(filename, profile string) *Credentials { + return NewCredentials(&SharedCredentialsProvider{ + Filename: filename, + Profile: profile, + }) +} + +// Retrieve reads and extracts the shared credentials from the current +// users home directory. +func (p *SharedCredentialsProvider) Retrieve() (Value, error) { + p.retrieved = false + + filename, err := p.filename() + if err != nil { + return Value{ProviderName: SharedCredsProviderName}, err + } + + creds, err := loadProfile(filename, p.profile()) + if err != nil { + return Value{ProviderName: SharedCredsProviderName}, err + } + + p.retrieved = true + return creds, nil +} + +// IsExpired returns if the shared credentials have expired. 
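The EnvProvider above reads the keys once per Retrieve and reports itself expired only until the first successful read. A minimal sketch, assuming the standard variables are exported in the process environment (the values below are placeholders):

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

func main() {
	// Placeholder values; normally these are set outside the process.
	os.Setenv("AWS_ACCESS_KEY_ID", "AKIDEXAMPLE")
	os.Setenv("AWS_SECRET_ACCESS_KEY", "EXAMPLESECRET")

	creds := credentials.NewEnvCredentials()
	v, err := creds.Get()
	if err != nil {
		fmt.Println("environment credentials missing:", err)
		return
	}
	fmt.Println("access key id:", v.AccessKeyID)
}
```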
+func (p *SharedCredentialsProvider) IsExpired() bool { + return !p.retrieved +} + +// loadProfiles loads from the file pointed to by shared credentials filename for profile. +// The credentials retrieved from the profile will be returned or error. Error will be +// returned if it fails to read from the file, or the data is invalid. +func loadProfile(filename, profile string) (Value, error) { + config, err := ini.Load(filename) + if err != nil { + return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsLoad", "failed to load shared credentials file", err) + } + iniProfile, err := config.GetSection(profile) + if err != nil { + return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsLoad", "failed to get profile", err) + } + + id, err := iniProfile.GetKey("aws_access_key_id") + if err != nil { + return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsAccessKey", + fmt.Sprintf("shared credentials %s in %s did not contain aws_access_key_id", profile, filename), + err) + } + + secret, err := iniProfile.GetKey("aws_secret_access_key") + if err != nil { + return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsSecret", + fmt.Sprintf("shared credentials %s in %s did not contain aws_secret_access_key", profile, filename), + nil) + } + + // Default to empty string if not found + token := iniProfile.Key("aws_session_token") + + return Value{ + AccessKeyID: id.String(), + SecretAccessKey: secret.String(), + SessionToken: token.String(), + ProviderName: SharedCredsProviderName, + }, nil +} + +// filename returns the filename to use to read AWS shared credentials. +// +// Will return an error if the user's home directory path cannot be found. +func (p *SharedCredentialsProvider) filename() (string, error) { + if len(p.Filename) != 0 { + return p.Filename, nil + } + + if p.Filename = os.Getenv("AWS_SHARED_CREDENTIALS_FILE"); len(p.Filename) != 0 { + return p.Filename, nil + } + + if home := shareddefaults.UserHomeDir(); len(home) == 0 { + // Backwards compatibility of home directly not found error being returned. + // This error is too verbose, failure when opening the file would of been + // a better error to return. + return "", ErrSharedCredentialsHomeNotFound + } + + p.Filename = shareddefaults.SharedCredentialsFilename() + + return p.Filename, nil +} + +// profile returns the AWS shared credentials profile. If empty will read +// environment variable "AWS_PROFILE". If that is not set profile will +// return "default". +func (p *SharedCredentialsProvider) profile() string { + if p.Profile == "" { + p.Profile = os.Getenv("AWS_PROFILE") + } + if p.Profile == "" { + p.Profile = "default" + } + + return p.Profile +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go new file mode 100644 index 00000000..4f5dab3f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go @@ -0,0 +1,57 @@ +package credentials + +import ( + "github.com/aws/aws-sdk-go/aws/awserr" +) + +// StaticProviderName provides a name of Static provider +const StaticProviderName = "StaticProvider" + +var ( + // ErrStaticCredentialsEmpty is emitted when static credentials are empty. + // + // @readonly + ErrStaticCredentialsEmpty = awserr.New("EmptyStaticCreds", "static credentials are empty", nil) +) + +// A StaticProvider is a set of credentials which are set programmatically, +// and will never expire. 
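loadProfile and the filename/profile resolution above are what back credentials.NewSharedCredentials. A minimal sketch of pointing the provider at an explicit file and profile (both values here are hypothetical; an empty filename falls back to AWS_SHARED_CREDENTIALS_FILE and then the home directory default):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

func main() {
	// Explicit path and profile; pass "" for either to use the defaults.
	creds := credentials.NewSharedCredentials("/tmp/example-credentials", "dev")

	v, err := creds.Get()
	if err != nil {
		fmt.Println("could not load shared credentials:", err)
		return
	}
	fmt.Println("profile credentials from", v.ProviderName)
}
```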
+type StaticProvider struct { + Value +} + +// NewStaticCredentials returns a pointer to a new Credentials object +// wrapping a static credentials value provider. +func NewStaticCredentials(id, secret, token string) *Credentials { + return NewCredentials(&StaticProvider{Value: Value{ + AccessKeyID: id, + SecretAccessKey: secret, + SessionToken: token, + }}) +} + +// NewStaticCredentialsFromCreds returns a pointer to a new Credentials object +// wrapping the static credentials value provide. Same as NewStaticCredentials +// but takes the creds Value instead of individual fields +func NewStaticCredentialsFromCreds(creds Value) *Credentials { + return NewCredentials(&StaticProvider{Value: creds}) +} + +// Retrieve returns the credentials or error if the credentials are invalid. +func (s *StaticProvider) Retrieve() (Value, error) { + if s.AccessKeyID == "" || s.SecretAccessKey == "" { + return Value{ProviderName: StaticProviderName}, ErrStaticCredentialsEmpty + } + + if len(s.Value.ProviderName) == 0 { + s.Value.ProviderName = StaticProviderName + } + return s.Value, nil +} + +// IsExpired returns if the credentials are expired. +// +// For StaticProvider, the credentials never expired. +func (s *StaticProvider) IsExpired() bool { + return false +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/assume_role_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/assume_role_provider.go new file mode 100644 index 00000000..4108e433 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/stscreds/assume_role_provider.go @@ -0,0 +1,298 @@ +/* +Package stscreds are credential Providers to retrieve STS AWS credentials. + +STS provides multiple ways to retrieve credentials which can be used when making +future AWS service API operation calls. + +The SDK will ensure that per instance of credentials.Credentials all requests +to refresh the credentials will be synchronized. But, the SDK is unable to +ensure synchronous usage of the AssumeRoleProvider if the value is shared +between multiple Credentials, Sessions or service clients. + +Assume Role + +To assume an IAM role using STS with the SDK you can create a new Credentials +with the SDKs's stscreds package. + + // Initial credentials loaded from SDK's default credential chain. Such as + // the environment, shared credentials (~/.aws/credentials), or EC2 Instance + // Role. These credentials will be used to to make the STS Assume Role API. + sess := session.Must(session.NewSession()) + + // Create the credentials from AssumeRoleProvider to assume the role + // referenced by the "myRoleARN" ARN. + creds := stscreds.NewCredentials(sess, "myRoleArn") + + // Create service client value configured for credentials + // from assumed role. + svc := s3.New(sess, &aws.Config{Credentials: creds}) + +Assume Role with static MFA Token + +To assume an IAM role with a MFA token you can either specify a MFA token code +directly or provide a function to prompt the user each time the credentials +need to refresh the role's credentials. Specifying the TokenCode should be used +for short lived operations that will not need to be refreshed, and when you do +not want to have direct control over the user provides their MFA token. + +With TokenCode the AssumeRoleProvider will be not be able to refresh the role's +credentials. + + // Create the credentials from AssumeRoleProvider to assume the role + // referenced by the "myRoleARN" ARN using the MFA token code provided. 
+ creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) { + p.SerialNumber = aws.String("myTokenSerialNumber") + p.TokenCode = aws.String("00000000") + }) + + // Create service client value configured for credentials + // from assumed role. + svc := s3.New(sess, &aws.Config{Credentials: creds}) + +Assume Role with MFA Token Provider + +To assume an IAM role with MFA for longer running tasks where the credentials +may need to be refreshed setting the TokenProvider field of AssumeRoleProvider +will allow the credential provider to prompt for new MFA token code when the +role's credentials need to be refreshed. + +The StdinTokenProvider function is available to prompt on stdin to retrieve +the MFA token code from the user. You can also implement custom prompts by +satisfing the TokenProvider function signature. + +Using StdinTokenProvider with multiple AssumeRoleProviders, or Credentials will +have undesirable results as the StdinTokenProvider will not be synchronized. A +single Credentials with an AssumeRoleProvider can be shared safely. + + // Create the credentials from AssumeRoleProvider to assume the role + // referenced by the "myRoleARN" ARN. Prompting for MFA token from stdin. + creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) { + p.SerialNumber = aws.String("myTokenSerialNumber") + p.TokenProvider = stscreds.StdinTokenProvider + }) + + // Create service client value configured for credentials + // from assumed role. + svc := s3.New(sess, &aws.Config{Credentials: creds}) + +*/ +package stscreds + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/service/sts" +) + +// StdinTokenProvider will prompt on stdout and read from stdin for a string value. +// An error is returned if reading from stdin fails. +// +// Use this function go read MFA tokens from stdin. The function makes no attempt +// to make atomic prompts from stdin across multiple gorouties. +// +// Using StdinTokenProvider with multiple AssumeRoleProviders, or Credentials will +// have undesirable results as the StdinTokenProvider will not be synchronized. A +// single Credentials with an AssumeRoleProvider can be shared safely +// +// Will wait forever until something is provided on the stdin. +func StdinTokenProvider() (string, error) { + var v string + fmt.Printf("Assume Role MFA token code: ") + _, err := fmt.Scanln(&v) + + return v, err +} + +// ProviderName provides a name of AssumeRole provider +const ProviderName = "AssumeRoleProvider" + +// AssumeRoler represents the minimal subset of the STS client API used by this provider. +type AssumeRoler interface { + AssumeRole(input *sts.AssumeRoleInput) (*sts.AssumeRoleOutput, error) +} + +// DefaultDuration is the default amount of time in minutes that the credentials +// will be valid for. +var DefaultDuration = time.Duration(15) * time.Minute + +// AssumeRoleProvider retrieves temporary credentials from the STS service, and +// keeps track of their expiration time. +// +// This credential provider will be used by the SDKs default credential change +// when shared configuration is enabled, and the shared config or shared credentials +// file configure assume role. See Session docs for how to do this. 
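The package comment above notes that any func() (string, error) can stand in for StdinTokenProvider. A minimal sketch of a custom token provider, assuming a hypothetical helper (fetchMFACode) that obtains the current one-time code instead of prompting on stdin; the role ARN and serial number are placeholders:

```go
package main

import (
	"errors"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
)

// fetchMFACode is a hypothetical helper; replace it with however the
// application obtains the current MFA code.
func fetchMFACode() (string, error) {
	return "", errors.New("not implemented")
}

func main() {
	sess := session.Must(session.NewSession())

	creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) {
		p.SerialNumber = aws.String("myTokenSerialNumber")
		// Called whenever the assumed role's credentials need refreshing.
		p.TokenProvider = fetchMFACode
	})

	_ = creds // pass to a service client via aws.Config{Credentials: creds}
}
```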
+// +// AssumeRoleProvider does not provide any synchronization and it is not safe +// to share this value across multiple Credentials, Sessions, or service clients +// without also sharing the same Credentials instance. +type AssumeRoleProvider struct { + credentials.Expiry + + // STS client to make assume role request with. + Client AssumeRoler + + // Role to be assumed. + RoleARN string + + // Session name, if you wish to reuse the credentials elsewhere. + RoleSessionName string + + // Expiry duration of the STS credentials. Defaults to 15 minutes if not set. + Duration time.Duration + + // Optional ExternalID to pass along, defaults to nil if not set. + ExternalID *string + + // The policy plain text must be 2048 bytes or shorter. However, an internal + // conversion compresses it into a packed binary format with a separate limit. + // The PackedPolicySize response element indicates by percentage how close to + // the upper size limit the policy is, with 100% equaling the maximum allowed + // size. + Policy *string + + // The identification number of the MFA device that is associated with the user + // who is making the AssumeRole call. Specify this value if the trust policy + // of the role being assumed includes a condition that requires MFA authentication. + // The value is either the serial number for a hardware device (such as GAHT12345678) + // or an Amazon Resource Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user). + SerialNumber *string + + // The value provided by the MFA device, if the trust policy of the role being + // assumed requires MFA (that is, if the policy includes a condition that tests + // for MFA). If the role being assumed requires MFA and if the TokenCode value + // is missing or expired, the AssumeRole call returns an "access denied" error. + // + // If SerialNumber is set and neither TokenCode nor TokenProvider are also + // set an error will be returned. + TokenCode *string + + // Async method of providing MFA token code for assuming an IAM role with MFA. + // The value returned by the function will be used as the TokenCode in the Retrieve + // call. See StdinTokenProvider for a provider that prompts and reads from stdin. + // + // This token provider will be called when ever the assumed role's + // credentials need to be refreshed when SerialNumber is also set and + // TokenCode is not set. + // + // If both TokenCode and TokenProvider is set, TokenProvider will be used and + // TokenCode is ignored. + TokenProvider func() (string, error) + + // ExpiryWindow will allow the credentials to trigger refreshing prior to + // the credentials actually expiring. This is beneficial so race conditions + // with expiring credentials do not cause request to fail unexpectedly + // due to ExpiredTokenException exceptions. + // + // So a ExpiryWindow of 10s would cause calls to IsExpired() to return true + // 10 seconds before the credentials are actually expired. + // + // If ExpiryWindow is 0 or less it will be ignored. + ExpiryWindow time.Duration +} + +// NewCredentials returns a pointer to a new Credentials object wrapping the +// AssumeRoleProvider. The credentials will expire every 15 minutes and the +// role will be named after a nanosecond timestamp of this operation. +// +// Takes a Config provider to create the STS client. The ConfigProvider is +// satisfied by the session.Session type. +// +// It is safe to share the returned Credentials with multiple Sessions and +// service clients. 
All access to the credentials and refreshing them +// will be synchronized. +func NewCredentials(c client.ConfigProvider, roleARN string, options ...func(*AssumeRoleProvider)) *credentials.Credentials { + p := &AssumeRoleProvider{ + Client: sts.New(c), + RoleARN: roleARN, + Duration: DefaultDuration, + } + + for _, option := range options { + option(p) + } + + return credentials.NewCredentials(p) +} + +// NewCredentialsWithClient returns a pointer to a new Credentials object wrapping the +// AssumeRoleProvider. The credentials will expire every 15 minutes and the +// role will be named after a nanosecond timestamp of this operation. +// +// Takes an AssumeRoler which can be satisfied by the STS client. +// +// It is safe to share the returned Credentials with multiple Sessions and +// service clients. All access to the credentials and refreshing them +// will be synchronized. +func NewCredentialsWithClient(svc AssumeRoler, roleARN string, options ...func(*AssumeRoleProvider)) *credentials.Credentials { + p := &AssumeRoleProvider{ + Client: svc, + RoleARN: roleARN, + Duration: DefaultDuration, + } + + for _, option := range options { + option(p) + } + + return credentials.NewCredentials(p) +} + +// Retrieve generates a new set of temporary credentials using STS. +func (p *AssumeRoleProvider) Retrieve() (credentials.Value, error) { + + // Apply defaults where parameters are not set. + if p.RoleSessionName == "" { + // Try to work out a role name that will hopefully end up unique. + p.RoleSessionName = fmt.Sprintf("%d", time.Now().UTC().UnixNano()) + } + if p.Duration == 0 { + // Expire as often as AWS permits. + p.Duration = DefaultDuration + } + input := &sts.AssumeRoleInput{ + DurationSeconds: aws.Int64(int64(p.Duration / time.Second)), + RoleArn: aws.String(p.RoleARN), + RoleSessionName: aws.String(p.RoleSessionName), + ExternalId: p.ExternalID, + } + if p.Policy != nil { + input.Policy = p.Policy + } + if p.SerialNumber != nil { + if p.TokenCode != nil { + input.SerialNumber = p.SerialNumber + input.TokenCode = p.TokenCode + } else if p.TokenProvider != nil { + input.SerialNumber = p.SerialNumber + code, err := p.TokenProvider() + if err != nil { + return credentials.Value{ProviderName: ProviderName}, err + } + input.TokenCode = aws.String(code) + } else { + return credentials.Value{ProviderName: ProviderName}, + awserr.New("AssumeRoleTokenNotAvailable", + "assume role with MFA enabled, but neither TokenCode nor TokenProvider are set", nil) + } + } + + roleOutput, err := p.Client.AssumeRole(input) + if err != nil { + return credentials.Value{ProviderName: ProviderName}, err + } + + // We will proactively generate new credentials before they expire. + p.SetExpiration(*roleOutput.Credentials.Expiration, p.ExpiryWindow) + + return credentials.Value{ + AccessKeyID: *roleOutput.Credentials.AccessKeyId, + SecretAccessKey: *roleOutput.Credentials.SecretAccessKey, + SessionToken: *roleOutput.Credentials.SessionToken, + ProviderName: ProviderName, + }, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go new file mode 100644 index 00000000..3cf1036b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go @@ -0,0 +1,194 @@ +// Package defaults is a collection of helpers to retrieve the SDK's default +// configuration and handlers. +// +// Generally this package shouldn't be used directly, but session.Session +// instead. 
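Beyond the MFA cases shown in the package comment, the Retrieve method above also honours ExternalID, Policy, and a custom Duration. A minimal sketch of assuming a role with an external ID and a longer session; the ARN, external ID, and session name are placeholders:

```go
package main

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())

	creds := stscreds.NewCredentials(sess, "arn:aws:iam::123456789012:role/example", func(p *stscreds.AssumeRoleProvider) {
		p.ExternalID = aws.String("example-external-id")
		p.Duration = 30 * time.Minute
		p.RoleSessionName = "example-session"
	})

	// Any service client can consume the assumed-role credentials.
	svc := s3.New(sess, &aws.Config{Credentials: creds})
	_ = svc
}
```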
This package is useful when you need to reset the defaults +// of a session or service client to the SDK defaults before setting +// additional parameters. +package defaults + +import ( + "fmt" + "net" + "net/http" + "net/url" + "os" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/corehandlers" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" + "github.com/aws/aws-sdk-go/aws/credentials/endpointcreds" + "github.com/aws/aws-sdk-go/aws/ec2metadata" + "github.com/aws/aws-sdk-go/aws/endpoints" + "github.com/aws/aws-sdk-go/aws/request" +) + +// A Defaults provides a collection of default values for SDK clients. +type Defaults struct { + Config *aws.Config + Handlers request.Handlers +} + +// Get returns the SDK's default values with Config and handlers pre-configured. +func Get() Defaults { + cfg := Config() + handlers := Handlers() + cfg.Credentials = CredChain(cfg, handlers) + + return Defaults{ + Config: cfg, + Handlers: handlers, + } +} + +// Config returns the default configuration without credentials. +// To retrieve a config with credentials also included use +// `defaults.Get().Config` instead. +// +// Generally you shouldn't need to use this method directly, but +// is available if you need to reset the configuration of an +// existing service client or session. +func Config() *aws.Config { + return aws.NewConfig(). + WithCredentials(credentials.AnonymousCredentials). + WithRegion(os.Getenv("AWS_REGION")). + WithHTTPClient(http.DefaultClient). + WithMaxRetries(aws.UseServiceDefaultRetries). + WithLogger(aws.NewDefaultLogger()). + WithLogLevel(aws.LogOff). + WithEndpointResolver(endpoints.DefaultResolver()) +} + +// Handlers returns the default request handlers. +// +// Generally you shouldn't need to use this method directly, but +// is available if you need to reset the request handlers of an +// existing service client or session. +func Handlers() request.Handlers { + var handlers request.Handlers + + handlers.Validate.PushBackNamed(corehandlers.ValidateEndpointHandler) + handlers.Validate.AfterEachFn = request.HandlerListStopOnError + handlers.Build.PushBackNamed(corehandlers.SDKVersionUserAgentHandler) + handlers.Build.PushBackNamed(corehandlers.AddHostExecEnvUserAgentHander) + handlers.Build.AfterEachFn = request.HandlerListStopOnError + handlers.Sign.PushBackNamed(corehandlers.BuildContentLengthHandler) + handlers.Send.PushBackNamed(corehandlers.ValidateReqSigHandler) + handlers.Send.PushBackNamed(corehandlers.SendHandler) + handlers.AfterRetry.PushBackNamed(corehandlers.AfterRetryHandler) + handlers.ValidateResponse.PushBackNamed(corehandlers.ValidateResponseHandler) + + return handlers +} + +// CredChain returns the default credential chain. +// +// Generally you shouldn't need to use this method directly, but +// is available if you need to reset the credentials of an +// existing service client or session's Config. 
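A minimal sketch of how the pieces of this defaults package fit together: Config and Handlers return the credential-free defaults, and CredChain (whose implementation follows) layers the environment, shared-file, and remote providers on top of them:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/defaults"
)

func main() {
	// Config and handlers without credentials attached.
	cfg := defaults.Config()
	handlers := defaults.Handlers()

	// Default credential chain: environment, shared file, then remote (EC2/ECS).
	creds := defaults.CredChain(cfg, handlers)

	if v, err := creds.Get(); err == nil {
		fmt.Println("resolved credentials from", v.ProviderName)
	} else {
		fmt.Println("no credentials in the default chain:", err)
	}
}
```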
+func CredChain(cfg *aws.Config, handlers request.Handlers) *credentials.Credentials { + return credentials.NewCredentials(&credentials.ChainProvider{ + VerboseErrors: aws.BoolValue(cfg.CredentialsChainVerboseErrors), + Providers: []credentials.Provider{ + &credentials.EnvProvider{}, + &credentials.SharedCredentialsProvider{Filename: "", Profile: ""}, + RemoteCredProvider(*cfg, handlers), + }, + }) +} + +const ( + httpProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_FULL_URI" + ecsCredsProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" +) + +// RemoteCredProvider returns a credentials provider for the default remote +// endpoints such as EC2 or ECS Roles. +func RemoteCredProvider(cfg aws.Config, handlers request.Handlers) credentials.Provider { + if u := os.Getenv(httpProviderEnvVar); len(u) > 0 { + return localHTTPCredProvider(cfg, handlers, u) + } + + if uri := os.Getenv(ecsCredsProviderEnvVar); len(uri) > 0 { + u := fmt.Sprintf("http://169.254.170.2%s", uri) + return httpCredProvider(cfg, handlers, u) + } + + return ec2RoleProvider(cfg, handlers) +} + +var lookupHostFn = net.LookupHost + +func isLoopbackHost(host string) (bool, error) { + ip := net.ParseIP(host) + if ip != nil { + return ip.IsLoopback(), nil + } + + // Host is not an ip, perform lookup + addrs, err := lookupHostFn(host) + if err != nil { + return false, err + } + for _, addr := range addrs { + if !net.ParseIP(addr).IsLoopback() { + return false, nil + } + } + + return true, nil +} + +func localHTTPCredProvider(cfg aws.Config, handlers request.Handlers, u string) credentials.Provider { + var errMsg string + + parsed, err := url.Parse(u) + if err != nil { + errMsg = fmt.Sprintf("invalid URL, %v", err) + } else { + host := aws.URLHostname(parsed) + if len(host) == 0 { + errMsg = "unable to parse host from local HTTP cred provider URL" + } else if isLoopback, loopbackErr := isLoopbackHost(host); loopbackErr != nil { + errMsg = fmt.Sprintf("failed to resolve host %q, %v", host, loopbackErr) + } else if !isLoopback { + errMsg = fmt.Sprintf("invalid endpoint host, %q, only loopback hosts are allowed.", host) + } + } + + if len(errMsg) > 0 { + if cfg.Logger != nil { + cfg.Logger.Log("Ignoring, HTTP credential provider", errMsg, err) + } + return credentials.ErrorProvider{ + Err: awserr.New("CredentialsEndpointError", errMsg, err), + ProviderName: endpointcreds.ProviderName, + } + } + + return httpCredProvider(cfg, handlers, u) +} + +func httpCredProvider(cfg aws.Config, handlers request.Handlers, u string) credentials.Provider { + return endpointcreds.NewProviderClient(cfg, handlers, u, + func(p *endpointcreds.Provider) { + p.ExpiryWindow = 5 * time.Minute + }, + ) +} + +func ec2RoleProvider(cfg aws.Config, handlers request.Handlers) credentials.Provider { + resolver := cfg.EndpointResolver + if resolver == nil { + resolver = endpoints.DefaultResolver() + } + + e, _ := resolver.EndpointFor(endpoints.Ec2metadataServiceID, "") + return &ec2rolecreds.EC2RoleProvider{ + Client: ec2metadata.NewClient(cfg, handlers, e.URL, e.SigningRegion), + ExpiryWindow: 5 * time.Minute, + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/defaults/shared_config.go b/vendor/github.com/aws/aws-sdk-go/aws/defaults/shared_config.go new file mode 100644 index 00000000..ca0ee1dc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/defaults/shared_config.go @@ -0,0 +1,27 @@ +package defaults + +import ( + "github.com/aws/aws-sdk-go/internal/shareddefaults" +) + +// SharedCredentialsFilename returns the SDK's default file path +// for the shared 
credentials file. +// +// Builds the shared config file path based on the OS's platform. +// +// - Linux/Unix: $HOME/.aws/credentials +// - Windows: %USERPROFILE%\.aws\credentials +func SharedCredentialsFilename() string { + return shareddefaults.SharedCredentialsFilename() +} + +// SharedConfigFilename returns the SDK's default file path for +// the shared config file. +// +// Builds the shared config file path based on the OS's platform. +// +// - Linux/Unix: $HOME/.aws/config +// - Windows: %USERPROFILE%\.aws\config +func SharedConfigFilename() string { + return shareddefaults.SharedConfigFilename() +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/doc.go b/vendor/github.com/aws/aws-sdk-go/aws/doc.go new file mode 100644 index 00000000..4fcb6161 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/doc.go @@ -0,0 +1,56 @@ +// Package aws provides the core SDK's utilities and shared types. Use this package's +// utilities to simplify setting and reading API operations parameters. +// +// Value and Pointer Conversion Utilities +// +// This package includes a helper conversion utility for each scalar type the SDK's +// API use. These utilities make getting a pointer of the scalar, and dereferencing +// a pointer easier. +// +// Each conversion utility comes in two forms. Value to Pointer and Pointer to Value. +// The Pointer to value will safely dereference the pointer and return its value. +// If the pointer was nil, the scalar's zero value will be returned. +// +// The value to pointer functions will be named after the scalar type. So get a +// *string from a string value use the "String" function. This makes it easy to +// to get pointer of a literal string value, because getting the address of a +// literal requires assigning the value to a variable first. +// +// var strPtr *string +// +// // Without the SDK's conversion functions +// str := "my string" +// strPtr = &str +// +// // With the SDK's conversion functions +// strPtr = aws.String("my string") +// +// // Convert *string to string value +// str = aws.StringValue(strPtr) +// +// In addition to scalars the aws package also includes conversion utilities for +// map and slice for commonly types used in API parameters. The map and slice +// conversion functions use similar naming pattern as the scalar conversion +// functions. +// +// var strPtrs []*string +// var strs []string = []string{"Go", "Gophers", "Go"} +// +// // Convert []string to []*string +// strPtrs = aws.StringSlice(strs) +// +// // Convert []*string to []string +// strs = aws.StringValueSlice(strPtrs) +// +// SDK Default HTTP Client +// +// The SDK will use the http.DefaultClient if a HTTP client is not provided to +// the SDK's Session, or service client constructor. This means that if the +// http.DefaultClient is modified by other components of your application the +// modifications will be picked up by the SDK as well. +// +// In some cases this might be intended, but it is a better practice to create +// a custom HTTP Client to share explicitly through your application. You can +// configure the SDK to use the custom HTTP Client by setting the HTTPClient +// value of the SDK's Config type when creating a Session or service client. 
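The package comment above recommends an explicit HTTP client rather than relying on http.DefaultClient. A minimal sketch of doing that through the Config's HTTPClient field; the timeout and region are illustrative choices:

```go
package main

import (
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Dedicated client, so other packages modifying http.DefaultClient do not
	// affect SDK requests.
	httpClient := &http.Client{Timeout: 30 * time.Second}

	sess := session.Must(session.NewSession(&aws.Config{
		HTTPClient: httpClient,
		Region:     aws.String("us-west-2"),
	}))
	_ = sess // service clients created from sess share httpClient
}
```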
+package aws diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go new file mode 100644 index 00000000..984407a5 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go @@ -0,0 +1,162 @@ +package ec2metadata + +import ( + "encoding/json" + "fmt" + "net/http" + "path" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +// GetMetadata uses the path provided to request information from the EC2 +// instance metdata service. The content will be returned as a string, or +// error if the request failed. +func (c *EC2Metadata) GetMetadata(p string) (string, error) { + op := &request.Operation{ + Name: "GetMetadata", + HTTPMethod: "GET", + HTTPPath: path.Join("/", "meta-data", p), + } + + output := &metadataOutput{} + req := c.NewRequest(op, nil, output) + + return output.Content, req.Send() +} + +// GetUserData returns the userdata that was configured for the service. If +// there is no user-data setup for the EC2 instance a "NotFoundError" error +// code will be returned. +func (c *EC2Metadata) GetUserData() (string, error) { + op := &request.Operation{ + Name: "GetUserData", + HTTPMethod: "GET", + HTTPPath: path.Join("/", "user-data"), + } + + output := &metadataOutput{} + req := c.NewRequest(op, nil, output) + req.Handlers.UnmarshalError.PushBack(func(r *request.Request) { + if r.HTTPResponse.StatusCode == http.StatusNotFound { + r.Error = awserr.New("NotFoundError", "user-data not found", r.Error) + } + }) + + return output.Content, req.Send() +} + +// GetDynamicData uses the path provided to request information from the EC2 +// instance metadata service for dynamic data. The content will be returned +// as a string, or error if the request failed. +func (c *EC2Metadata) GetDynamicData(p string) (string, error) { + op := &request.Operation{ + Name: "GetDynamicData", + HTTPMethod: "GET", + HTTPPath: path.Join("/", "dynamic", p), + } + + output := &metadataOutput{} + req := c.NewRequest(op, nil, output) + + return output.Content, req.Send() +} + +// GetInstanceIdentityDocument retrieves an identity document describing an +// instance. Error is returned if the request fails or is unable to parse +// the response. 
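A minimal sketch of calling the metadata helpers above; it only returns data when run on an EC2 instance (or against a stub of the metadata endpoint):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2metadata.New(sess)

	// Paths are relative to the meta-data root.
	id, err := svc.GetMetadata("instance-id")
	if err != nil {
		fmt.Println("metadata unavailable:", err)
		return
	}
	fmt.Println("instance id:", id)

	if userData, err := svc.GetUserData(); err == nil {
		fmt.Println("user data bytes:", len(userData))
	}
}
```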
+func (c *EC2Metadata) GetInstanceIdentityDocument() (EC2InstanceIdentityDocument, error) { + resp, err := c.GetDynamicData("instance-identity/document") + if err != nil { + return EC2InstanceIdentityDocument{}, + awserr.New("EC2MetadataRequestError", + "failed to get EC2 instance identity document", err) + } + + doc := EC2InstanceIdentityDocument{} + if err := json.NewDecoder(strings.NewReader(resp)).Decode(&doc); err != nil { + return EC2InstanceIdentityDocument{}, + awserr.New("SerializationError", + "failed to decode EC2 instance identity document", err) + } + + return doc, nil +} + +// IAMInfo retrieves IAM info from the metadata API +func (c *EC2Metadata) IAMInfo() (EC2IAMInfo, error) { + resp, err := c.GetMetadata("iam/info") + if err != nil { + return EC2IAMInfo{}, + awserr.New("EC2MetadataRequestError", + "failed to get EC2 IAM info", err) + } + + info := EC2IAMInfo{} + if err := json.NewDecoder(strings.NewReader(resp)).Decode(&info); err != nil { + return EC2IAMInfo{}, + awserr.New("SerializationError", + "failed to decode EC2 IAM info", err) + } + + if info.Code != "Success" { + errMsg := fmt.Sprintf("failed to get EC2 IAM Info (%s)", info.Code) + return EC2IAMInfo{}, + awserr.New("EC2MetadataError", errMsg, nil) + } + + return info, nil +} + +// Region returns the region the instance is running in. +func (c *EC2Metadata) Region() (string, error) { + resp, err := c.GetMetadata("placement/availability-zone") + if err != nil { + return "", err + } + + // returns region without the suffix. Eg: us-west-2a becomes us-west-2 + return resp[:len(resp)-1], nil +} + +// Available returns if the application has access to the EC2 Metadata service. +// Can be used to determine if application is running within an EC2 Instance and +// the metadata service is available. +func (c *EC2Metadata) Available() bool { + if _, err := c.GetMetadata("instance-id"); err != nil { + return false + } + + return true +} + +// An EC2IAMInfo provides the shape for unmarshaling +// an IAM info from the metadata API +type EC2IAMInfo struct { + Code string + LastUpdated time.Time + InstanceProfileArn string + InstanceProfileID string +} + +// An EC2InstanceIdentityDocument provides the shape for unmarshaling +// an instance identity document +type EC2InstanceIdentityDocument struct { + DevpayProductCodes []string `json:"devpayProductCodes"` + AvailabilityZone string `json:"availabilityZone"` + PrivateIP string `json:"privateIp"` + Version string `json:"version"` + Region string `json:"region"` + InstanceID string `json:"instanceId"` + BillingProducts []string `json:"billingProducts"` + InstanceType string `json:"instanceType"` + AccountID string `json:"accountId"` + PendingTime time.Time `json:"pendingTime"` + ImageID string `json:"imageId"` + KernelID string `json:"kernelId"` + RamdiskID string `json:"ramdiskId"` + Architecture string `json:"architecture"` +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go new file mode 100644 index 00000000..ef5f7329 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go @@ -0,0 +1,148 @@ +// Package ec2metadata provides the client for making API calls to the +// EC2 Metadata service. +// +// This package's client can be disabled completely by setting the environment +// variable "AWS_EC2_METADATA_DISABLED=true". This environment variable set to +// true instructs the SDK to disable the EC2 Metadata client. 
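The identity-document and IAM helpers above build on GetDynamicData and GetMetadata. A minimal sketch that guards on Available before asking for the region and account ID:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	svc := ec2metadata.New(session.Must(session.NewSession()))

	if !svc.Available() {
		fmt.Println("not running on EC2, or metadata service disabled")
		return
	}

	region, err := svc.Region()
	if err != nil {
		fmt.Println("region lookup failed:", err)
		return
	}

	doc, err := svc.GetInstanceIdentityDocument()
	if err != nil {
		fmt.Println("identity document lookup failed:", err)
		return
	}
	fmt.Printf("region %s, account %s, instance %s\n", region, doc.AccountID, doc.InstanceID)
}
```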
The client cannot +// be used while the environemnt variable is set to true, (case insensitive). +package ec2metadata + +import ( + "bytes" + "errors" + "io" + "net/http" + "os" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/corehandlers" + "github.com/aws/aws-sdk-go/aws/request" +) + +// ServiceName is the name of the service. +const ServiceName = "ec2metadata" +const disableServiceEnvVar = "AWS_EC2_METADATA_DISABLED" + +// A EC2Metadata is an EC2 Metadata service Client. +type EC2Metadata struct { + *client.Client +} + +// New creates a new instance of the EC2Metadata client with a session. +// This client is safe to use across multiple goroutines. +// +// +// Example: +// // Create a EC2Metadata client from just a session. +// svc := ec2metadata.New(mySession) +// +// // Create a EC2Metadata client with additional configuration +// svc := ec2metadata.New(mySession, aws.NewConfig().WithLogLevel(aws.LogDebugHTTPBody)) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *EC2Metadata { + c := p.ClientConfig(ServiceName, cfgs...) + return NewClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion) +} + +// NewClient returns a new EC2Metadata client. Should be used to create +// a client when not using a session. Generally using just New with a session +// is preferred. +// +// If an unmodified HTTP client is provided from the stdlib default, or no client +// the EC2RoleProvider's EC2Metadata HTTP client's timeout will be shortened. +// To disable this set Config.EC2MetadataDisableTimeoutOverride to false. Enabled by default. +func NewClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion string, opts ...func(*client.Client)) *EC2Metadata { + if !aws.BoolValue(cfg.EC2MetadataDisableTimeoutOverride) && httpClientZero(cfg.HTTPClient) { + // If the http client is unmodified and this feature is not disabled + // set custom timeouts for EC2Metadata requests. + cfg.HTTPClient = &http.Client{ + // use a shorter timeout than default because the metadata + // service is local if it is running, and to fail faster + // if not running on an ec2 instance. + Timeout: 5 * time.Second, + } + } + + svc := &EC2Metadata{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + Endpoint: endpoint, + APIVersion: "latest", + }, + handlers, + ), + } + + svc.Handlers.Unmarshal.PushBack(unmarshalHandler) + svc.Handlers.UnmarshalError.PushBack(unmarshalError) + svc.Handlers.Validate.Clear() + svc.Handlers.Validate.PushBack(validateEndpointHandler) + + // Disable the EC2 Metadata service if the environment variable is set. + // This shortcirctes the service's functionality to always fail to send + // requests. 
+ if strings.ToLower(os.Getenv(disableServiceEnvVar)) == "true" { + svc.Handlers.Send.SwapNamed(request.NamedHandler{ + Name: corehandlers.SendHandler.Name, + Fn: func(r *request.Request) { + r.Error = awserr.New( + request.CanceledErrorCode, + "EC2 IMDS access disabled via "+disableServiceEnvVar+" env var", + nil) + }, + }) + } + + // Add additional options to the service config + for _, option := range opts { + option(svc.Client) + } + + return svc +} + +func httpClientZero(c *http.Client) bool { + return c == nil || (c.Transport == nil && c.CheckRedirect == nil && c.Jar == nil && c.Timeout == 0) +} + +type metadataOutput struct { + Content string +} + +func unmarshalHandler(r *request.Request) { + defer r.HTTPResponse.Body.Close() + b := &bytes.Buffer{} + if _, err := io.Copy(b, r.HTTPResponse.Body); err != nil { + r.Error = awserr.New("SerializationError", "unable to unmarshal EC2 metadata respose", err) + return + } + + if data, ok := r.Data.(*metadataOutput); ok { + data.Content = b.String() + } +} + +func unmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + b := &bytes.Buffer{} + if _, err := io.Copy(b, r.HTTPResponse.Body); err != nil { + r.Error = awserr.New("SerializationError", "unable to unmarshal EC2 metadata error respose", err) + return + } + + // Response body format is not consistent between metadata endpoints. + // Grab the error message as a string and include that as the source error + r.Error = awserr.New("EC2MetadataError", "failed to make EC2Metadata request", errors.New(b.String())) +} + +func validateEndpointHandler(r *request.Request) { + if r.ClientInfo.Endpoint == "" { + r.Error = aws.ErrMissingEndpoint + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go new file mode 100644 index 00000000..74f72de0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go @@ -0,0 +1,133 @@ +package endpoints + +import ( + "encoding/json" + "fmt" + "io" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +type modelDefinition map[string]json.RawMessage + +// A DecodeModelOptions are the options for how the endpoints model definition +// are decoded. +type DecodeModelOptions struct { + SkipCustomizations bool +} + +// Set combines all of the option functions together. +func (d *DecodeModelOptions) Set(optFns ...func(*DecodeModelOptions)) { + for _, fn := range optFns { + fn(d) + } +} + +// DecodeModel unmarshals a Regions and Endpoint model definition file into +// a endpoint Resolver. If the file format is not supported, or an error occurs +// when unmarshaling the model an error will be returned. +// +// Casting the return value of this func to a EnumPartitions will +// allow you to get a list of the partitions in the order the endpoints +// will be resolved in. +// +// resolver, err := endpoints.DecodeModel(reader) +// +// partitions := resolver.(endpoints.EnumPartitions).Partitions() +// for _, p := range partitions { +// // ... inspect partitions +// } +func DecodeModel(r io.Reader, optFns ...func(*DecodeModelOptions)) (Resolver, error) { + var opts DecodeModelOptions + opts.Set(optFns...) + + // Get the version of the partition file to determine what + // unmarshaling model to use. 
+ modelDef := modelDefinition{} + if err := json.NewDecoder(r).Decode(&modelDef); err != nil { + return nil, newDecodeModelError("failed to decode endpoints model", err) + } + + var version string + if b, ok := modelDef["version"]; ok { + version = string(b) + } else { + return nil, newDecodeModelError("endpoints version not found in model", nil) + } + + if version == "3" { + return decodeV3Endpoints(modelDef, opts) + } + + return nil, newDecodeModelError( + fmt.Sprintf("endpoints version %s, not supported", version), nil) +} + +func decodeV3Endpoints(modelDef modelDefinition, opts DecodeModelOptions) (Resolver, error) { + b, ok := modelDef["partitions"] + if !ok { + return nil, newDecodeModelError("endpoints model missing partitions", nil) + } + + ps := partitions{} + if err := json.Unmarshal(b, &ps); err != nil { + return nil, newDecodeModelError("failed to decode endpoints model", err) + } + + if opts.SkipCustomizations { + return ps, nil + } + + // Customization + for i := 0; i < len(ps); i++ { + p := &ps[i] + custAddEC2Metadata(p) + custAddS3DualStack(p) + custRmIotDataService(p) + } + + return ps, nil +} + +func custAddS3DualStack(p *partition) { + if p.ID != "aws" { + return + } + + s, ok := p.Services["s3"] + if !ok { + return + } + + s.Defaults.HasDualStack = boxedTrue + s.Defaults.DualStackHostname = "{service}.dualstack.{region}.{dnsSuffix}" + + p.Services["s3"] = s +} + +func custAddEC2Metadata(p *partition) { + p.Services["ec2metadata"] = service{ + IsRegionalized: boxedFalse, + PartitionEndpoint: "aws-global", + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "169.254.169.254/latest", + Protocols: []string{"http"}, + }, + }, + } +} + +func custRmIotDataService(p *partition) { + delete(p.Services, "data.iot") +} + +type decodeModelError struct { + awsError +} + +func newDecodeModelError(msg string, err error) decodeModelError { + return decodeModelError{ + awsError: awserr.New("DecodeEndpointsModelError", msg, err), + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go new file mode 100644 index 00000000..e57c9acb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -0,0 +1,3127 @@ +// Code generated by aws/endpoints/v3model_codegen.go. DO NOT EDIT. + +package endpoints + +import ( + "regexp" +) + +// Partition identifiers +const ( + AwsPartitionID = "aws" // AWS Standard partition. + AwsCnPartitionID = "aws-cn" // AWS China partition. + AwsUsGovPartitionID = "aws-us-gov" // AWS GovCloud (US) partition. +) + +// AWS Standard partition's regions. +const ( + ApNortheast1RegionID = "ap-northeast-1" // Asia Pacific (Tokyo). + ApNortheast2RegionID = "ap-northeast-2" // Asia Pacific (Seoul). + ApSouth1RegionID = "ap-south-1" // Asia Pacific (Mumbai). + ApSoutheast1RegionID = "ap-southeast-1" // Asia Pacific (Singapore). + ApSoutheast2RegionID = "ap-southeast-2" // Asia Pacific (Sydney). + CaCentral1RegionID = "ca-central-1" // Canada (Central). + EuCentral1RegionID = "eu-central-1" // EU (Frankfurt). + EuWest1RegionID = "eu-west-1" // EU (Ireland). + EuWest2RegionID = "eu-west-2" // EU (London). + EuWest3RegionID = "eu-west-3" // EU (Paris). + SaEast1RegionID = "sa-east-1" // South America (Sao Paulo). + UsEast1RegionID = "us-east-1" // US East (N. Virginia). + UsEast2RegionID = "us-east-2" // US East (Ohio). + UsWest1RegionID = "us-west-1" // US West (N. California). + UsWest2RegionID = "us-west-2" // US West (Oregon). 
+) + +// AWS China partition's regions. +const ( + CnNorth1RegionID = "cn-north-1" // China (Beijing). + CnNorthwest1RegionID = "cn-northwest-1" // China (Ningxia). +) + +// AWS GovCloud (US) partition's regions. +const ( + UsGovWest1RegionID = "us-gov-west-1" // AWS GovCloud (US). +) + +// Service identifiers +const ( + A4bServiceID = "a4b" // A4b. + AcmServiceID = "acm" // Acm. + AcmPcaServiceID = "acm-pca" // AcmPca. + ApiPricingServiceID = "api.pricing" // ApiPricing. + ApigatewayServiceID = "apigateway" // Apigateway. + ApplicationAutoscalingServiceID = "application-autoscaling" // ApplicationAutoscaling. + Appstream2ServiceID = "appstream2" // Appstream2. + AthenaServiceID = "athena" // Athena. + AutoscalingServiceID = "autoscaling" // Autoscaling. + AutoscalingPlansServiceID = "autoscaling-plans" // AutoscalingPlans. + BatchServiceID = "batch" // Batch. + BudgetsServiceID = "budgets" // Budgets. + CeServiceID = "ce" // Ce. + Cloud9ServiceID = "cloud9" // Cloud9. + ClouddirectoryServiceID = "clouddirectory" // Clouddirectory. + CloudformationServiceID = "cloudformation" // Cloudformation. + CloudfrontServiceID = "cloudfront" // Cloudfront. + CloudhsmServiceID = "cloudhsm" // Cloudhsm. + Cloudhsmv2ServiceID = "cloudhsmv2" // Cloudhsmv2. + CloudsearchServiceID = "cloudsearch" // Cloudsearch. + CloudtrailServiceID = "cloudtrail" // Cloudtrail. + CodebuildServiceID = "codebuild" // Codebuild. + CodecommitServiceID = "codecommit" // Codecommit. + CodedeployServiceID = "codedeploy" // Codedeploy. + CodepipelineServiceID = "codepipeline" // Codepipeline. + CodestarServiceID = "codestar" // Codestar. + CognitoIdentityServiceID = "cognito-identity" // CognitoIdentity. + CognitoIdpServiceID = "cognito-idp" // CognitoIdp. + CognitoSyncServiceID = "cognito-sync" // CognitoSync. + ComprehendServiceID = "comprehend" // Comprehend. + ConfigServiceID = "config" // Config. + CurServiceID = "cur" // Cur. + DatapipelineServiceID = "datapipeline" // Datapipeline. + DaxServiceID = "dax" // Dax. + DevicefarmServiceID = "devicefarm" // Devicefarm. + DirectconnectServiceID = "directconnect" // Directconnect. + DiscoveryServiceID = "discovery" // Discovery. + DmsServiceID = "dms" // Dms. + DsServiceID = "ds" // Ds. + DynamodbServiceID = "dynamodb" // Dynamodb. + Ec2ServiceID = "ec2" // Ec2. + Ec2metadataServiceID = "ec2metadata" // Ec2metadata. + EcrServiceID = "ecr" // Ecr. + EcsServiceID = "ecs" // Ecs. + ElasticacheServiceID = "elasticache" // Elasticache. + ElasticbeanstalkServiceID = "elasticbeanstalk" // Elasticbeanstalk. + ElasticfilesystemServiceID = "elasticfilesystem" // Elasticfilesystem. + ElasticloadbalancingServiceID = "elasticloadbalancing" // Elasticloadbalancing. + ElasticmapreduceServiceID = "elasticmapreduce" // Elasticmapreduce. + ElastictranscoderServiceID = "elastictranscoder" // Elastictranscoder. + EmailServiceID = "email" // Email. + EntitlementMarketplaceServiceID = "entitlement.marketplace" // EntitlementMarketplace. + EsServiceID = "es" // Es. + EventsServiceID = "events" // Events. + FirehoseServiceID = "firehose" // Firehose. + FmsServiceID = "fms" // Fms. + GameliftServiceID = "gamelift" // Gamelift. + GlacierServiceID = "glacier" // Glacier. + GlueServiceID = "glue" // Glue. + GreengrassServiceID = "greengrass" // Greengrass. + GuarddutyServiceID = "guardduty" // Guardduty. + HealthServiceID = "health" // Health. + IamServiceID = "iam" // Iam. + ImportexportServiceID = "importexport" // Importexport. + InspectorServiceID = "inspector" // Inspector. 
+ IotServiceID = "iot" // Iot. + KinesisServiceID = "kinesis" // Kinesis. + KinesisanalyticsServiceID = "kinesisanalytics" // Kinesisanalytics. + KinesisvideoServiceID = "kinesisvideo" // Kinesisvideo. + KmsServiceID = "kms" // Kms. + LambdaServiceID = "lambda" // Lambda. + LightsailServiceID = "lightsail" // Lightsail. + LogsServiceID = "logs" // Logs. + MachinelearningServiceID = "machinelearning" // Machinelearning. + MarketplacecommerceanalyticsServiceID = "marketplacecommerceanalytics" // Marketplacecommerceanalytics. + MediaconvertServiceID = "mediaconvert" // Mediaconvert. + MedialiveServiceID = "medialive" // Medialive. + MediapackageServiceID = "mediapackage" // Mediapackage. + MediastoreServiceID = "mediastore" // Mediastore. + MeteringMarketplaceServiceID = "metering.marketplace" // MeteringMarketplace. + MghServiceID = "mgh" // Mgh. + MobileanalyticsServiceID = "mobileanalytics" // Mobileanalytics. + ModelsLexServiceID = "models.lex" // ModelsLex. + MonitoringServiceID = "monitoring" // Monitoring. + MturkRequesterServiceID = "mturk-requester" // MturkRequester. + NeptuneServiceID = "neptune" // Neptune. + OpsworksServiceID = "opsworks" // Opsworks. + OpsworksCmServiceID = "opsworks-cm" // OpsworksCm. + OrganizationsServiceID = "organizations" // Organizations. + PinpointServiceID = "pinpoint" // Pinpoint. + PollyServiceID = "polly" // Polly. + RdsServiceID = "rds" // Rds. + RedshiftServiceID = "redshift" // Redshift. + RekognitionServiceID = "rekognition" // Rekognition. + ResourceGroupsServiceID = "resource-groups" // ResourceGroups. + Route53ServiceID = "route53" // Route53. + Route53domainsServiceID = "route53domains" // Route53domains. + RuntimeLexServiceID = "runtime.lex" // RuntimeLex. + RuntimeSagemakerServiceID = "runtime.sagemaker" // RuntimeSagemaker. + S3ServiceID = "s3" // S3. + SagemakerServiceID = "sagemaker" // Sagemaker. + SdbServiceID = "sdb" // Sdb. + SecretsmanagerServiceID = "secretsmanager" // Secretsmanager. + ServerlessrepoServiceID = "serverlessrepo" // Serverlessrepo. + ServicecatalogServiceID = "servicecatalog" // Servicecatalog. + ServicediscoveryServiceID = "servicediscovery" // Servicediscovery. + ShieldServiceID = "shield" // Shield. + SmsServiceID = "sms" // Sms. + SnowballServiceID = "snowball" // Snowball. + SnsServiceID = "sns" // Sns. + SqsServiceID = "sqs" // Sqs. + SsmServiceID = "ssm" // Ssm. + StatesServiceID = "states" // States. + StoragegatewayServiceID = "storagegateway" // Storagegateway. + StreamsDynamodbServiceID = "streams.dynamodb" // StreamsDynamodb. + StsServiceID = "sts" // Sts. + SupportServiceID = "support" // Support. + SwfServiceID = "swf" // Swf. + TaggingServiceID = "tagging" // Tagging. + TranslateServiceID = "translate" // Translate. + WafServiceID = "waf" // Waf. + WafRegionalServiceID = "waf-regional" // WafRegional. + WorkdocsServiceID = "workdocs" // Workdocs. + WorkmailServiceID = "workmail" // Workmail. + WorkspacesServiceID = "workspaces" // Workspaces. + XrayServiceID = "xray" // Xray. +) + +// DefaultResolver returns an Endpoint resolver that will be able +// to resolve endpoints for: AWS Standard, AWS China, and AWS GovCloud (US). +// +// Use DefaultPartitions() to get the list of the default partitions. +func DefaultResolver() Resolver { + return defaultPartitions +} + +// DefaultPartitions returns a list of the partitions the SDK is bundled +// with. The available partitions are: AWS Standard, AWS China, and AWS GovCloud (US). 
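A minimal sketch of using the bundled model through DefaultResolver, resolving the S3 endpoint for us-west-2 with the identifiers defined above; EndpointFor is part of this package's Resolver interface, as used by the defaults package earlier in this change:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	resolver := endpoints.DefaultResolver()

	// Resolve S3 in us-west-2 using the generated identifiers.
	ep, err := resolver.EndpointFor(endpoints.S3ServiceID, endpoints.UsWest2RegionID)
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	fmt.Println("url:", ep.URL, "signing region:", ep.SigningRegion)

	// Enumerate the bundled partitions.
	for _, p := range endpoints.DefaultPartitions() {
		fmt.Println("partition:", p.ID())
	}
}
```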
+// +// partitions := endpoints.DefaultPartitions +// for _, p := range partitions { +// // ... inspect partitions +// } +func DefaultPartitions() []Partition { + return defaultPartitions.Partitions() +} + +var defaultPartitions = partitions{ + awsPartition, + awscnPartition, + awsusgovPartition, +} + +// AwsPartition returns the Resolver for AWS Standard. +func AwsPartition() Partition { + return awsPartition.Partition() +} + +var awsPartition = partition{ + ID: "aws", + Name: "AWS Standard", + DNSSuffix: "amazonaws.com", + RegionRegex: regionRegex{ + Regexp: func() *regexp.Regexp { + reg, _ := regexp.Compile("^(us|eu|ap|sa|ca)\\-\\w+\\-\\d+$") + return reg + }(), + }, + Defaults: endpoint{ + Hostname: "{service}.{region}.{dnsSuffix}", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + Regions: regions{ + "ap-northeast-1": region{ + Description: "Asia Pacific (Tokyo)", + }, + "ap-northeast-2": region{ + Description: "Asia Pacific (Seoul)", + }, + "ap-south-1": region{ + Description: "Asia Pacific (Mumbai)", + }, + "ap-southeast-1": region{ + Description: "Asia Pacific (Singapore)", + }, + "ap-southeast-2": region{ + Description: "Asia Pacific (Sydney)", + }, + "ca-central-1": region{ + Description: "Canada (Central)", + }, + "eu-central-1": region{ + Description: "EU (Frankfurt)", + }, + "eu-west-1": region{ + Description: "EU (Ireland)", + }, + "eu-west-2": region{ + Description: "EU (London)", + }, + "eu-west-3": region{ + Description: "EU (Paris)", + }, + "sa-east-1": region{ + Description: "South America (Sao Paulo)", + }, + "us-east-1": region{ + Description: "US East (N. Virginia)", + }, + "us-east-2": region{ + Description: "US East (Ohio)", + }, + "us-west-1": region{ + Description: "US West (N. California)", + }, + "us-west-2": region{ + Description: "US West (Oregon)", + }, + }, + Services: services{ + "a4b": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "acm": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "acm-pca": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "api.pricing": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "pricing", + }, + }, + Endpoints: endpoints{ + "ap-south-1": endpoint{}, + "us-east-1": endpoint{}, + }, + }, + "apigateway": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "application-autoscaling": service{ + 
Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "application-autoscaling", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "appstream2": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + CredentialScope: credentialScope{ + Service: "appstream", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "athena": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "autoscaling": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "autoscaling-plans": service{ + Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "autoscaling-plans", + }, + }, + Endpoints: endpoints{ + "ap-southeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "batch": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "budgets": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "budgets.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "ce": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "ce.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "cloud9": service{ + + Endpoints: endpoints{ + "ap-southeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "clouddirectory": service{ + + Endpoints: endpoints{ + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + 
"us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cloudformation": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cloudfront": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "cloudfront.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "cloudhsm": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cloudhsmv2": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "cloudhsm", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cloudsearch": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cloudtrail": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "codebuild": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "codebuild-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "codebuild-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "codebuild-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: 
"codebuild-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "codecommit": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "codedeploy": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "codepipeline": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "codestar": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cognito-identity": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cognito-idp": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cognito-sync": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "comprehend": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "config": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": 
endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "cur": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "datapipeline": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "dax": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "devicefarm": service{ + + Endpoints: endpoints{ + "us-west-2": endpoint{}, + }, + }, + "directconnect": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "discovery": service{ + + Endpoints: endpoints{ + "us-west-2": endpoint{}, + }, + }, + "dms": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "ds": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "local": endpoint{ + Hostname: "localhost:8000", + Protocols: []string{"http"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "ec2": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + 
"ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "ec2metadata": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "169.254.169.254/latest", + Protocols: []string{"http"}, + }, + }, + }, + "ecr": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "ecs": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elasticache": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elasticbeanstalk": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elasticfilesystem": service{ + + Endpoints: endpoints{ + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elasticloadbalancing": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elasticmapreduce": service{ + Defaults: endpoint{ + SSLCommonName: "{region}.{service}.{dnsSuffix}", + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": 
endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{ + SSLCommonName: "{service}.{region}.{dnsSuffix}", + }, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + SSLCommonName: "{service}.{region}.{dnsSuffix}", + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elastictranscoder": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "email": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "entitlement.marketplace": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "aws-marketplace", + }, + }, + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "es": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "events": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "firehose": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "fms": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "gamelift": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "glacier": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, 
+ "glue": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "greengrass": service{ + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "guardduty": service{ + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "health": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "iam": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "iam.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "importexport": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "importexport.amazonaws.com", + SignatureVersions: []string{"v2", "v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + Service: "IngestionService", + }, + }, + }, + }, + "inspector": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "iot": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "execute-api", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "kinesis": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "kinesisanalytics": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "kinesisvideo": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + 
"us-west-2": endpoint{}, + }, + }, + "kms": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "lambda": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "lightsail": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "logs": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "machinelearning": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + }, + }, + "marketplacecommerceanalytics": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "mediaconvert": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "medialive": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "mediapackage": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "mediastore": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "metering.marketplace": service{ + Defaults: endpoint{ + 
CredentialScope: credentialScope{ + Service: "aws-marketplace", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "mgh": service{ + + Endpoints: endpoints{ + "us-west-2": endpoint{}, + }, + }, + "mobileanalytics": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "models.lex": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "lex", + }, + }, + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "monitoring": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "mturk-requester": service{ + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "sandbox": endpoint{ + Hostname: "mturk-requester-sandbox.us-east-1.amazonaws.com", + }, + "us-east-1": endpoint{}, + }, + }, + "neptune": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{ + Hostname: "rds.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "us-east-1": endpoint{ + Hostname: "rds.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{ + Hostname: "rds.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-2": endpoint{ + Hostname: "rds.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "opsworks": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "opsworks-cm": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "organizations": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "organizations.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "pinpoint": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "mobiletargeting", + }, + }, + Endpoints: endpoints{ 
+ "us-east-1": endpoint{}, + }, + }, + "polly": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "rds": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + SSLCommonName: "{service}.{dnsSuffix}", + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "redshift": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "rekognition": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "resource-groups": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "route53": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "route53.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "route53domains": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "runtime.lex": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "lex", + }, + }, + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "runtime.sagemaker": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "s3": service{ + PartitionEndpoint: "us-east-1", + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"s3v4"}, + + HasDualStack: boxedTrue, + DualStackHostname: "{service}.dualstack.{region}.{dnsSuffix}", + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{ + Hostname: "s3.ap-northeast-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{ + Hostname: 
"s3.ap-southeast-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "ap-southeast-2": endpoint{ + Hostname: "s3.ap-southeast-2.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{ + Hostname: "s3.eu-west-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "s3-external-1": endpoint{ + Hostname: "s3-external-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "sa-east-1": endpoint{ + Hostname: "s3.sa-east-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "us-east-1": endpoint{ + Hostname: "s3.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{ + Hostname: "s3.us-west-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "us-west-2": endpoint{ + Hostname: "s3.us-west-2.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + }, + }, + "sagemaker": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sdb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"v2"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + Hostname: "sdb.amazonaws.com", + }, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "secretsmanager": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "serverlessrepo": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-northeast-2": endpoint{ + Protocols: []string{"https"}, + }, + "ap-south-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-southeast-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-southeast-2": endpoint{ + Protocols: []string{"https"}, + }, + "ca-central-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-central-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-west-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-west-2": endpoint{ + Protocols: []string{"https"}, + }, + "sa-east-1": endpoint{ + Protocols: []string{"https"}, + }, + "us-east-1": endpoint{ + Protocols: []string{"https"}, + }, + "us-east-2": endpoint{ + Protocols: []string{"https"}, + }, + "us-west-1": endpoint{ + Protocols: []string{"https"}, + }, + "us-west-2": endpoint{ + Protocols: []string{"https"}, + }, + }, + }, + "servicecatalog": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, 
+ "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "servicediscovery": service{ + + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "shield": service{ + IsRegionalized: boxedFalse, + Defaults: endpoint{ + SSLCommonName: "Shield.us-east-1.amazonaws.com", + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "sms": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "snowball": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sns": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sqs": service{ + Defaults: endpoint{ + SSLCommonName: "{region}.queue.{dnsSuffix}", + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{}, + "fips-us-east-2": endpoint{}, + "fips-us-west-1": endpoint{}, + "fips-us-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + SSLCommonName: "queue.{dnsSuffix}", + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "ssm": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "states": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + 
"eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "storagegateway": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "streams.dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "dynamodb", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "local": endpoint{ + Hostname: "localhost:8000", + Protocols: []string{"http"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sts": service{ + PartitionEndpoint: "aws-global", + Defaults: endpoint{ + Hostname: "sts.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{ + Hostname: "sts.ap-northeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-2", + }, + }, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "aws-global": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "sts-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "sts-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "sts-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "sts-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "support": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "swf": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "tagging": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + 
"ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "translate": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "waf": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "waf.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "waf-regional": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "workdocs": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "workmail": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "workspaces": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "xray": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + }, +} + +// AwsCnPartition returns the Resolver for AWS China. 
+func AwsCnPartition() Partition { + return awscnPartition.Partition() +} + +var awscnPartition = partition{ + ID: "aws-cn", + Name: "AWS China", + DNSSuffix: "amazonaws.com.cn", + RegionRegex: regionRegex{ + Regexp: func() *regexp.Regexp { + reg, _ := regexp.Compile("^cn\\-\\w+\\-\\d+$") + return reg + }(), + }, + Defaults: endpoint{ + Hostname: "{service}.{region}.{dnsSuffix}", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + Regions: regions{ + "cn-north-1": region{ + Description: "China (Beijing)", + }, + "cn-northwest-1": region{ + Description: "China (Ningxia)", + }, + }, + Services: services{ + "apigateway": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "application-autoscaling": service{ + Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "application-autoscaling", + }, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "autoscaling": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "cloudformation": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "cloudtrail": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "codedeploy": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "cognito-identity": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "config": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "directconnect": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "ec2": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "ec2metadata": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "169.254.169.254/latest", + Protocols: []string{"http"}, + }, + }, + }, + "ecr": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "ecs": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "elasticache": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "elasticbeanstalk": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "elasticloadbalancing": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "elasticmapreduce": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "es": service{ + + Endpoints: endpoints{ + "cn-northwest-1": endpoint{}, + }, + }, + "events": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + 
"cn-northwest-1": endpoint{}, + }, + }, + "glacier": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "iam": service{ + PartitionEndpoint: "aws-cn-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-cn-global": endpoint{ + Hostname: "iam.cn-north-1.amazonaws.com.cn", + CredentialScope: credentialScope{ + Region: "cn-north-1", + }, + }, + }, + }, + "iot": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "execute-api", + }, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "kinesis": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "lambda": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "logs": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "monitoring": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "rds": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "redshift": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "s3": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"s3v4"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "sms": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "snowball": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "sns": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "sqs": service{ + Defaults: endpoint{ + SSLCommonName: "{region}.queue.{dnsSuffix}", + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "ssm": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "storagegateway": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "streams.dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "dynamodb", + }, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "sts": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "swf": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "tagging": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + }, +} + +// AwsUsGovPartition returns the Resolver for AWS GovCloud (US). 
+func AwsUsGovPartition() Partition { + return awsusgovPartition.Partition() +} + +var awsusgovPartition = partition{ + ID: "aws-us-gov", + Name: "AWS GovCloud (US)", + DNSSuffix: "amazonaws.com", + RegionRegex: regionRegex{ + Regexp: func() *regexp.Regexp { + reg, _ := regexp.Compile("^us\\-gov\\-\\w+\\-\\d+$") + return reg + }(), + }, + Defaults: endpoint{ + Hostname: "{service}.{region}.{dnsSuffix}", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + Regions: regions{ + "us-gov-west-1": region{ + Description: "AWS GovCloud (US)", + }, + }, + Services: services{ + "acm": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "apigateway": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "autoscaling": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "cloudformation": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "cloudhsm": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "cloudhsmv2": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "cloudhsm", + }, + }, + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "cloudtrail": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "codedeploy": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "config": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "directconnect": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "dms": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "dynamodb": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "dynamodb.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, + "ec2": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "ec2metadata": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "169.254.169.254/latest", + Protocols: []string{"http"}, + }, + }, + }, + "ecr": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "ecs": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "elasticache": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "elasticbeanstalk": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "elasticloadbalancing": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "elasticmapreduce": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "es": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "events": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "glacier": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "iam": service{ + PartitionEndpoint: "aws-us-gov-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-us-gov-global": endpoint{ + Hostname: "iam.us-gov.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, + "kinesis": service{ + + Endpoints: 
endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "kms": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "lambda": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "logs": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "metering.marketplace": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "aws-marketplace", + }, + }, + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "monitoring": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "polly": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "rds": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "redshift": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "rekognition": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "s3": service{ + Defaults: endpoint{ + SignatureVersions: []string{"s3", "s3v4"}, + }, + Endpoints: endpoints{ + "fips-us-gov-west-1": endpoint{ + Hostname: "s3-fips-us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "s3.us-gov-west-1.amazonaws.com", + Protocols: []string{"http", "https"}, + }, + }, + }, + "sms": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "snowball": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "sns": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{ + Protocols: []string{"http", "https"}, + }, + }, + }, + "sqs": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{ + SSLCommonName: "{region}.queue.{dnsSuffix}", + Protocols: []string{"http", "https"}, + }, + }, + }, + "ssm": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "storagegateway": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "streams.dynamodb": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "dynamodb", + }, + }, + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "dynamodb.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, + "sts": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "swf": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "tagging": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + }, +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/doc.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/doc.go new file mode 100644 index 00000000..84316b92 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/doc.go @@ -0,0 +1,66 @@ +// Package endpoints provides the types and functionality for defining regions +// and endpoints, as well as querying those definitions. +// +// The SDK's Regions and Endpoints metadata is code generated into the endpoints +// package, and is accessible via the DefaultResolver function. This function +// returns a endpoint Resolver will search the metadata and build an associated +// endpoint if one is found. The default resolver will search all partitions +// known by the SDK. e.g AWS Standard (aws), AWS China (aws-cn), and +// AWS GovCloud (US) (aws-us-gov). +// . 
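// Editor's note: a minimal illustrative sketch, not part of the upstream package
// documentation, of resolving a single endpoint with the default resolver; the
// service and region values are examples only.
//
//    ep, err := endpoints.DefaultResolver().EndpointFor("dynamodb", "us-west-2")
//    if err != nil {
//        // handle the unresolved endpoint
//    }
//    fmt.Println(ep.URL) // e.g. https://dynamodb.us-west-2.amazonaws.com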
+// +// Enumerating Regions and Endpoint Metadata +// +// Casting the Resolver returned by DefaultResolver to a EnumPartitions interface +// will allow you to get access to the list of underlying Partitions with the +// Partitions method. This is helpful if you want to limit the SDK's endpoint +// resolving to a single partition, or enumerate regions, services, and endpoints +// in the partition. +// +// resolver := endpoints.DefaultResolver() +// partitions := resolver.(endpoints.EnumPartitions).Partitions() +// +// for _, p := range partitions { +// fmt.Println("Regions for", p.ID()) +// for id, _ := range p.Regions() { +// fmt.Println("*", id) +// } +// +// fmt.Println("Services for", p.ID()) +// for id, _ := range p.Services() { +// fmt.Println("*", id) +// } +// } +// +// Using Custom Endpoints +// +// The endpoints package also gives you the ability to use your own logic how +// endpoints are resolved. This is a great way to define a custom endpoint +// for select services, without passing that logic down through your code. +// +// If a type implements the Resolver interface it can be used to resolve +// endpoints. To use this with the SDK's Session and Config set the value +// of the type to the EndpointsResolver field of aws.Config when initializing +// the session, or service client. +// +// In addition the ResolverFunc is a wrapper for a func matching the signature +// of Resolver.EndpointFor, converting it to a type that satisfies the +// Resolver interface. +// +// +// myCustomResolver := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) { +// if service == endpoints.S3ServiceID { +// return endpoints.ResolvedEndpoint{ +// URL: "s3.custom.endpoint.com", +// SigningRegion: "custom-signing-region", +// }, nil +// } +// +// return endpoints.DefaultResolver().EndpointFor(service, region, optFns...) +// } +// +// sess := session.Must(session.NewSession(&aws.Config{ +// Region: aws.String("us-west-2"), +// EndpointResolver: endpoints.ResolverFunc(myCustomResolver), +// })) +package endpoints diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go new file mode 100644 index 00000000..e29c0951 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go @@ -0,0 +1,449 @@ +package endpoints + +import ( + "fmt" + "regexp" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +// Options provide the configuration needed to direct how the +// endpoints will be resolved. +type Options struct { + // DisableSSL forces the endpoint to be resolved as HTTP. + // instead of HTTPS if the service supports it. + DisableSSL bool + + // Sets the resolver to resolve the endpoint as a dualstack endpoint + // for the service. If dualstack support for a service is not known and + // StrictMatching is not enabled a dualstack endpoint for the service will + // be returned. This endpoint may not be valid. If StrictMatching is + // enabled only services that are known to support dualstack will return + // dualstack endpoints. + UseDualStack bool + + // Enables strict matching of services and regions resolved endpoints. + // If the partition doesn't enumerate the exact service and region an + // error will be returned. This option will prevent returning endpoints + // that look valid, but may not resolve to any real endpoint. + StrictMatching bool + + // Enables resolving a service endpoint based on the region provided if the + // service does not exist. 
The service endpoint ID will be used as the service + // domain name prefix. By default the endpoint resolver requires the service + // to be known when resolving endpoints. + // + // If resolving an endpoint on the partition list the provided region will + // be used to determine which partition's domain name pattern to the service + // endpoint ID with. If both the service and region are unkonwn and resolving + // the endpoint on partition list an UnknownEndpointError error will be returned. + // + // If resolving and endpoint on a partition specific resolver that partition's + // domain name pattern will be used with the service endpoint ID. If both + // region and service do not exist when resolving an endpoint on a specific + // partition the partition's domain pattern will be used to combine the + // endpoint and region together. + // + // This option is ignored if StrictMatching is enabled. + ResolveUnknownService bool +} + +// Set combines all of the option functions together. +func (o *Options) Set(optFns ...func(*Options)) { + for _, fn := range optFns { + fn(o) + } +} + +// DisableSSLOption sets the DisableSSL options. Can be used as a functional +// option when resolving endpoints. +func DisableSSLOption(o *Options) { + o.DisableSSL = true +} + +// UseDualStackOption sets the UseDualStack option. Can be used as a functional +// option when resolving endpoints. +func UseDualStackOption(o *Options) { + o.UseDualStack = true +} + +// StrictMatchingOption sets the StrictMatching option. Can be used as a functional +// option when resolving endpoints. +func StrictMatchingOption(o *Options) { + o.StrictMatching = true +} + +// ResolveUnknownServiceOption sets the ResolveUnknownService option. Can be used +// as a functional option when resolving endpoints. +func ResolveUnknownServiceOption(o *Options) { + o.ResolveUnknownService = true +} + +// A Resolver provides the interface for functionality to resolve endpoints. +// The build in Partition and DefaultResolver return value satisfy this interface. +type Resolver interface { + EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) +} + +// ResolverFunc is a helper utility that wraps a function so it satisfies the +// Resolver interface. This is useful when you want to add additional endpoint +// resolving logic, or stub out specific endpoints with custom values. +type ResolverFunc func(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) + +// EndpointFor wraps the ResolverFunc function to satisfy the Resolver interface. +func (fn ResolverFunc) EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return fn(service, region, opts...) +} + +var schemeRE = regexp.MustCompile("^([^:]+)://") + +// AddScheme adds the HTTP or HTTPS schemes to a endpoint URL if there is no +// scheme. If disableSSL is true HTTP will set HTTP instead of the default HTTPS. +// +// If disableSSL is set, it will only set the URL's scheme if the URL does not +// contain a scheme. +func AddScheme(endpoint string, disableSSL bool) string { + if !schemeRE.MatchString(endpoint) { + scheme := "https" + if disableSSL { + scheme = "http" + } + endpoint = fmt.Sprintf("%s://%s", scheme, endpoint) + } + + return endpoint +} + +// EnumPartitions a provides a way to retrieve the underlying partitions that +// make up the SDK's default Resolver, or any resolver decoded from a model +// file. 
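// Editor's note: illustrative sketch, not part of the upstream source. The
// functional options defined above can be passed to any EndpointFor call; for
// example, to require strict matching and an HTTP endpoint:
//
//    ep, err := endpoints.DefaultResolver().EndpointFor(
//        "s3", "us-west-2",
//        endpoints.StrictMatchingOption,
//        endpoints.DisableSSLOption,
//    )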
+// +// Use this interface with DefaultResolver and DecodeModels to get the list of +// Partitions. +type EnumPartitions interface { + Partitions() []Partition +} + +// RegionsForService returns a map of regions for the partition and service. +// If either the partition or service does not exist false will be returned +// as the second parameter. +// +// This example shows how to get the regions for DynamoDB in the AWS partition. +// rs, exists := endpoints.RegionsForService(endpoints.DefaultPartitions(), endpoints.AwsPartitionID, endpoints.DynamodbServiceID) +// +// This is equivalent to using the partition directly. +// rs := endpoints.AwsPartition().Services()[endpoints.DynamodbServiceID].Regions() +func RegionsForService(ps []Partition, partitionID, serviceID string) (map[string]Region, bool) { + for _, p := range ps { + if p.ID() != partitionID { + continue + } + if _, ok := p.p.Services[serviceID]; !ok { + break + } + + s := Service{ + id: serviceID, + p: p.p, + } + return s.Regions(), true + } + + return map[string]Region{}, false +} + +// PartitionForRegion returns the first partition which includes the region +// passed in. This includes both known regions and regions which match +// a pattern supported by the partition which may include regions that are +// not explicitly known by the partition. Use the Regions method of the +// returned Partition if explicit support is needed. +func PartitionForRegion(ps []Partition, regionID string) (Partition, bool) { + for _, p := range ps { + if _, ok := p.p.Regions[regionID]; ok || p.p.RegionRegex.MatchString(regionID) { + return p, true + } + } + + return Partition{}, false +} + +// A Partition provides the ability to enumerate the partition's regions +// and services. +type Partition struct { + id string + p *partition +} + +// ID returns the identifier of the partition. +func (p Partition) ID() string { return p.id } + +// EndpointFor attempts to resolve the endpoint based on service and region. +// See Options for information on configuring how the endpoint is resolved. +// +// If the service cannot be found in the metadata the UnknownServiceError +// error will be returned. This validation will occur regardless if +// StrictMatching is enabled. To enable resolving unknown services set the +// "ResolveUnknownService" option to true. When StrictMatching is disabled +// this option allows the partition resolver to resolve a endpoint based on +// the service endpoint ID provided. +// +// When resolving endpoints you can choose to enable StrictMatching. This will +// require the provided service and region to be known by the partition. +// If the endpoint cannot be strictly resolved an error will be returned. This +// mode is useful to ensure the endpoint resolved is valid. Without +// StrictMatching enabled the endpoint returned my look valid but may not work. +// StrictMatching requires the SDK to be updated if you want to take advantage +// of new regions and services expansions. +// +// Errors that can be returned. +// * UnknownServiceError +// * UnknownEndpointError +func (p Partition) EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return p.p.EndpointFor(service, region, opts...) +} + +// Regions returns a map of Regions indexed by their ID. This is useful for +// enumerating over the regions in a partition. 
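// Editor's note: illustrative sketch, not part of the upstream source.
// PartitionForRegion (above) can be used to find which partition a region
// belongs to before enumerating it; the region value is an example only.
//
//    if p, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), "cn-north-1"); ok {
//        fmt.Println("partition:", p.ID())
//    }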
+func (p Partition) Regions() map[string]Region { + rs := map[string]Region{} + for id, r := range p.p.Regions { + rs[id] = Region{ + id: id, + desc: r.Description, + p: p.p, + } + } + + return rs +} + +// Services returns a map of Service indexed by their ID. This is useful for +// enumerating over the services in a partition. +func (p Partition) Services() map[string]Service { + ss := map[string]Service{} + for id := range p.p.Services { + ss[id] = Service{ + id: id, + p: p.p, + } + } + + return ss +} + +// A Region provides information about a region, and ability to resolve an +// endpoint from the context of a region, given a service. +type Region struct { + id, desc string + p *partition +} + +// ID returns the region's identifier. +func (r Region) ID() string { return r.id } + +// Description returns the region's description. The region description +// is free text, it can be empty, and it may change between SDK releases. +func (r Region) Description() string { return r.desc } + +// ResolveEndpoint resolves an endpoint from the context of the region given +// a service. See Partition.EndpointFor for usage and errors that can be returned. +func (r Region) ResolveEndpoint(service string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return r.p.EndpointFor(service, r.id, opts...) +} + +// Services returns a list of all services that are known to be in this region. +func (r Region) Services() map[string]Service { + ss := map[string]Service{} + for id, s := range r.p.Services { + if _, ok := s.Endpoints[r.id]; ok { + ss[id] = Service{ + id: id, + p: r.p, + } + } + } + + return ss +} + +// A Service provides information about a service, and ability to resolve an +// endpoint from the context of a service, given a region. +type Service struct { + id string + p *partition +} + +// ID returns the identifier for the service. +func (s Service) ID() string { return s.id } + +// ResolveEndpoint resolves an endpoint from the context of a service given +// a region. See Partition.EndpointFor for usage and errors that can be returned. +func (s Service) ResolveEndpoint(region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return s.p.EndpointFor(s.id, region, opts...) +} + +// Regions returns a map of Regions that the service is present in. +// +// A region is the AWS region the service exists in. Whereas a Endpoint is +// an URL that can be resolved to a instance of a service. +func (s Service) Regions() map[string]Region { + rs := map[string]Region{} + for id := range s.p.Services[s.id].Endpoints { + if r, ok := s.p.Regions[id]; ok { + rs[id] = Region{ + id: id, + desc: r.Description, + p: s.p, + } + } + } + + return rs +} + +// Endpoints returns a map of Endpoints indexed by their ID for all known +// endpoints for a service. +// +// A region is the AWS region the service exists in. Whereas a Endpoint is +// an URL that can be resolved to a instance of a service. +func (s Service) Endpoints() map[string]Endpoint { + es := map[string]Endpoint{} + for id := range s.p.Services[s.id].Endpoints { + es[id] = Endpoint{ + id: id, + serviceID: s.id, + p: s.p, + } + } + + return es +} + +// A Endpoint provides information about endpoints, and provides the ability +// to resolve that endpoint for the service, and the region the endpoint +// represents. +type Endpoint struct { + id string + serviceID string + p *partition +} + +// ID returns the identifier for an endpoint. +func (e Endpoint) ID() string { return e.id } + +// ServiceID returns the identifier the endpoint belongs to. 
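// Editor's note: illustrative sketch, not part of the upstream source. The
// Service and Endpoint types above can be used to walk the modeled endpoints
// for one service; the service identifier is an example only, and the fmt
// package is assumed to be imported.
//
//    svc := endpoints.AwsPartition().Services()["sqs"]
//    for id := range svc.Endpoints() {
//        fmt.Println("endpoint:", id)
//    }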
+func (e Endpoint) ServiceID() string { return e.serviceID } + +// ResolveEndpoint resolves an endpoint from the context of a service and +// region the endpoint represents. See Partition.EndpointFor for usage and +// errors that can be returned. +func (e Endpoint) ResolveEndpoint(opts ...func(*Options)) (ResolvedEndpoint, error) { + return e.p.EndpointFor(e.serviceID, e.id, opts...) +} + +// A ResolvedEndpoint is an endpoint that has been resolved based on a partition +// service, and region. +type ResolvedEndpoint struct { + // The endpoint URL + URL string + + // The region that should be used for signing requests. + SigningRegion string + + // The service name that should be used for signing requests. + SigningName string + + // States that the signing name for this endpoint was derived from metadata + // passed in, but was not explicitly modeled. + SigningNameDerived bool + + // The signing method that should be used for signing requests. + SigningMethod string +} + +// So that the Error interface type can be included as an anonymous field +// in the requestError struct and not conflict with the error.Error() method. +type awsError awserr.Error + +// A EndpointNotFoundError is returned when in StrictMatching mode, and the +// endpoint for the service and region cannot be found in any of the partitions. +type EndpointNotFoundError struct { + awsError + Partition string + Service string + Region string +} + +// A UnknownServiceError is returned when the service does not resolve to an +// endpoint. Includes a list of all known services for the partition. Returned +// when a partition does not support the service. +type UnknownServiceError struct { + awsError + Partition string + Service string + Known []string +} + +// NewUnknownServiceError builds and returns UnknownServiceError. +func NewUnknownServiceError(p, s string, known []string) UnknownServiceError { + return UnknownServiceError{ + awsError: awserr.New("UnknownServiceError", + "could not resolve endpoint for unknown service", nil), + Partition: p, + Service: s, + Known: known, + } +} + +// String returns the string representation of the error. +func (e UnknownServiceError) Error() string { + extra := fmt.Sprintf("partition: %q, service: %q", + e.Partition, e.Service) + if len(e.Known) > 0 { + extra += fmt.Sprintf(", known: %v", e.Known) + } + return awserr.SprintError(e.Code(), e.Message(), extra, e.OrigErr()) +} + +// String returns the string representation of the error. +func (e UnknownServiceError) String() string { + return e.Error() +} + +// A UnknownEndpointError is returned when in StrictMatching mode and the +// service is valid, but the region does not resolve to an endpoint. Includes +// a list of all known endpoints for the service. +type UnknownEndpointError struct { + awsError + Partition string + Service string + Region string + Known []string +} + +// NewUnknownEndpointError builds and returns UnknownEndpointError. +func NewUnknownEndpointError(p, s, r string, known []string) UnknownEndpointError { + return UnknownEndpointError{ + awsError: awserr.New("UnknownEndpointError", + "could not resolve endpoint", nil), + Partition: p, + Service: s, + Region: r, + Known: known, + } +} + +// String returns the string representation of the error. 
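// Editor's note: illustrative sketch, not part of the upstream source, showing
// how a caller might distinguish the two resolver error types defined above
// when resolving against a single partition with strict matching; the service
// and region values are examples only.
//
//    _, err := endpoints.AwsPartition().EndpointFor(
//        "s3", "xx-fake-1", endpoints.StrictMatchingOption)
//    switch err.(type) {
//    case endpoints.UnknownServiceError:
//        // the service is not modeled in the partition
//    case endpoints.UnknownEndpointError:
//        // the service is known, but the region/endpoint is not
//    }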
+func (e UnknownEndpointError) Error() string { + extra := fmt.Sprintf("partition: %q, service: %q, region: %q", + e.Partition, e.Service, e.Region) + if len(e.Known) > 0 { + extra += fmt.Sprintf(", known: %v", e.Known) + } + return awserr.SprintError(e.Code(), e.Message(), extra, e.OrigErr()) +} + +// String returns the string representation of the error. +func (e UnknownEndpointError) String() string { + return e.Error() +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go new file mode 100644 index 00000000..ff6f76db --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go @@ -0,0 +1,307 @@ +package endpoints + +import ( + "fmt" + "regexp" + "strconv" + "strings" +) + +type partitions []partition + +func (ps partitions) EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + var opt Options + opt.Set(opts...) + + for i := 0; i < len(ps); i++ { + if !ps[i].canResolveEndpoint(service, region, opt.StrictMatching) { + continue + } + + return ps[i].EndpointFor(service, region, opts...) + } + + // If loose matching fallback to first partition format to use + // when resolving the endpoint. + if !opt.StrictMatching && len(ps) > 0 { + return ps[0].EndpointFor(service, region, opts...) + } + + return ResolvedEndpoint{}, NewUnknownEndpointError("all partitions", service, region, []string{}) +} + +// Partitions satisfies the EnumPartitions interface and returns a list +// of Partitions representing each partition represented in the SDK's +// endpoints model. +func (ps partitions) Partitions() []Partition { + parts := make([]Partition, 0, len(ps)) + for i := 0; i < len(ps); i++ { + parts = append(parts, ps[i].Partition()) + } + + return parts +} + +type partition struct { + ID string `json:"partition"` + Name string `json:"partitionName"` + DNSSuffix string `json:"dnsSuffix"` + RegionRegex regionRegex `json:"regionRegex"` + Defaults endpoint `json:"defaults"` + Regions regions `json:"regions"` + Services services `json:"services"` +} + +func (p partition) Partition() Partition { + return Partition{ + id: p.ID, + p: &p, + } +} + +func (p partition) canResolveEndpoint(service, region string, strictMatch bool) bool { + s, hasService := p.Services[service] + _, hasEndpoint := s.Endpoints[region] + + if hasEndpoint && hasService { + return true + } + + if strictMatch { + return false + } + + return p.RegionRegex.MatchString(region) +} + +func (p partition) EndpointFor(service, region string, opts ...func(*Options)) (resolved ResolvedEndpoint, err error) { + var opt Options + opt.Set(opts...) + + s, hasService := p.Services[service] + if !(hasService || opt.ResolveUnknownService) { + // Only return error if the resolver will not fallback to creating + // endpoint based on service endpoint ID passed in. 
+ return resolved, NewUnknownServiceError(p.ID, service, serviceList(p.Services)) + } + + e, hasEndpoint := s.endpointForRegion(region) + if !hasEndpoint && opt.StrictMatching { + return resolved, NewUnknownEndpointError(p.ID, service, region, endpointList(s.Endpoints)) + } + + defs := []endpoint{p.Defaults, s.Defaults} + return e.resolve(service, region, p.DNSSuffix, defs, opt), nil +} + +func serviceList(ss services) []string { + list := make([]string, 0, len(ss)) + for k := range ss { + list = append(list, k) + } + return list +} +func endpointList(es endpoints) []string { + list := make([]string, 0, len(es)) + for k := range es { + list = append(list, k) + } + return list +} + +type regionRegex struct { + *regexp.Regexp +} + +func (rr *regionRegex) UnmarshalJSON(b []byte) (err error) { + // Strip leading and trailing quotes + regex, err := strconv.Unquote(string(b)) + if err != nil { + return fmt.Errorf("unable to strip quotes from regex, %v", err) + } + + rr.Regexp, err = regexp.Compile(regex) + if err != nil { + return fmt.Errorf("unable to unmarshal region regex, %v", err) + } + return nil +} + +type regions map[string]region + +type region struct { + Description string `json:"description"` +} + +type services map[string]service + +type service struct { + PartitionEndpoint string `json:"partitionEndpoint"` + IsRegionalized boxedBool `json:"isRegionalized,omitempty"` + Defaults endpoint `json:"defaults"` + Endpoints endpoints `json:"endpoints"` +} + +func (s *service) endpointForRegion(region string) (endpoint, bool) { + if s.IsRegionalized == boxedFalse { + return s.Endpoints[s.PartitionEndpoint], region == s.PartitionEndpoint + } + + if e, ok := s.Endpoints[region]; ok { + return e, true + } + + // Unable to find any matching endpoint, return + // blank that will be used for generic endpoint creation. + return endpoint{}, false +} + +type endpoints map[string]endpoint + +type endpoint struct { + Hostname string `json:"hostname"` + Protocols []string `json:"protocols"` + CredentialScope credentialScope `json:"credentialScope"` + + // Custom fields not modeled + HasDualStack boxedBool `json:"-"` + DualStackHostname string `json:"-"` + + // Signature Version not used + SignatureVersions []string `json:"signatureVersions"` + + // SSLCommonName not used. 
+ SSLCommonName string `json:"sslCommonName"` +} + +const ( + defaultProtocol = "https" + defaultSigner = "v4" +) + +var ( + protocolPriority = []string{"https", "http"} + signerPriority = []string{"v4", "v2"} +) + +func getByPriority(s []string, p []string, def string) string { + if len(s) == 0 { + return def + } + + for i := 0; i < len(p); i++ { + for j := 0; j < len(s); j++ { + if s[j] == p[i] { + return s[j] + } + } + } + + return s[0] +} + +func (e endpoint) resolve(service, region, dnsSuffix string, defs []endpoint, opts Options) ResolvedEndpoint { + var merged endpoint + for _, def := range defs { + merged.mergeIn(def) + } + merged.mergeIn(e) + e = merged + + hostname := e.Hostname + + // Offset the hostname for dualstack if enabled + if opts.UseDualStack && e.HasDualStack == boxedTrue { + hostname = e.DualStackHostname + } + + u := strings.Replace(hostname, "{service}", service, 1) + u = strings.Replace(u, "{region}", region, 1) + u = strings.Replace(u, "{dnsSuffix}", dnsSuffix, 1) + + scheme := getEndpointScheme(e.Protocols, opts.DisableSSL) + u = fmt.Sprintf("%s://%s", scheme, u) + + signingRegion := e.CredentialScope.Region + if len(signingRegion) == 0 { + signingRegion = region + } + + signingName := e.CredentialScope.Service + var signingNameDerived bool + if len(signingName) == 0 { + signingName = service + signingNameDerived = true + } + + return ResolvedEndpoint{ + URL: u, + SigningRegion: signingRegion, + SigningName: signingName, + SigningNameDerived: signingNameDerived, + SigningMethod: getByPriority(e.SignatureVersions, signerPriority, defaultSigner), + } +} + +func getEndpointScheme(protocols []string, disableSSL bool) string { + if disableSSL { + return "http" + } + + return getByPriority(protocols, protocolPriority, defaultProtocol) +} + +func (e *endpoint) mergeIn(other endpoint) { + if len(other.Hostname) > 0 { + e.Hostname = other.Hostname + } + if len(other.Protocols) > 0 { + e.Protocols = other.Protocols + } + if len(other.SignatureVersions) > 0 { + e.SignatureVersions = other.SignatureVersions + } + if len(other.CredentialScope.Region) > 0 { + e.CredentialScope.Region = other.CredentialScope.Region + } + if len(other.CredentialScope.Service) > 0 { + e.CredentialScope.Service = other.CredentialScope.Service + } + if len(other.SSLCommonName) > 0 { + e.SSLCommonName = other.SSLCommonName + } + if other.HasDualStack != boxedBoolUnset { + e.HasDualStack = other.HasDualStack + } + if len(other.DualStackHostname) > 0 { + e.DualStackHostname = other.DualStackHostname + } +} + +type credentialScope struct { + Region string `json:"region"` + Service string `json:"service"` +} + +type boxedBool int + +func (b *boxedBool) UnmarshalJSON(buf []byte) error { + v, err := strconv.ParseBool(string(buf)) + if err != nil { + return err + } + + if v { + *b = boxedTrue + } else { + *b = boxedFalse + } + + return nil +} + +const ( + boxedBoolUnset boxedBool = iota + boxedFalse + boxedTrue +) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model_codegen.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model_codegen.go new file mode 100644 index 00000000..05e92df2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model_codegen.go @@ -0,0 +1,337 @@ +// +build codegen + +package endpoints + +import ( + "fmt" + "io" + "reflect" + "strings" + "text/template" + "unicode" +) + +// A CodeGenOptions are the options for code generating the endpoints into +// Go code from the endpoints model definition. 
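// Editor's note: illustrative sketch, not part of the upstream source. The
// CodeGenModel function defined below turns an endpoints JSON model into Go
// source like the generated partition tables above (and is only built under
// the codegen build tag); the input file name is an assumption for the
// example, and the os and bytes packages are assumed to be imported.
//
//    in, err := os.Open("endpoints.json")
//    if err != nil {
//        return err
//    }
//    defer in.Close()
//
//    var out bytes.Buffer
//    if err := endpoints.CodeGenModel(in, &out); err != nil {
//        return err
//    }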
+type CodeGenOptions struct { + // Options for how the model will be decoded. + DecodeModelOptions DecodeModelOptions +} + +// Set combines all of the option functions together +func (d *CodeGenOptions) Set(optFns ...func(*CodeGenOptions)) { + for _, fn := range optFns { + fn(d) + } +} + +// CodeGenModel given a endpoints model file will decode it and attempt to +// generate Go code from the model definition. Error will be returned if +// the code is unable to be generated, or decoded. +func CodeGenModel(modelFile io.Reader, outFile io.Writer, optFns ...func(*CodeGenOptions)) error { + var opts CodeGenOptions + opts.Set(optFns...) + + resolver, err := DecodeModel(modelFile, func(d *DecodeModelOptions) { + *d = opts.DecodeModelOptions + }) + if err != nil { + return err + } + + tmpl := template.Must(template.New("tmpl").Funcs(funcMap).Parse(v3Tmpl)) + if err := tmpl.ExecuteTemplate(outFile, "defaults", resolver); err != nil { + return fmt.Errorf("failed to execute template, %v", err) + } + + return nil +} + +func toSymbol(v string) string { + out := []rune{} + for _, c := range strings.Title(v) { + if !(unicode.IsNumber(c) || unicode.IsLetter(c)) { + continue + } + + out = append(out, c) + } + + return string(out) +} + +func quoteString(v string) string { + return fmt.Sprintf("%q", v) +} + +func regionConstName(p, r string) string { + return toSymbol(p) + toSymbol(r) +} + +func partitionGetter(id string) string { + return fmt.Sprintf("%sPartition", toSymbol(id)) +} + +func partitionVarName(id string) string { + return fmt.Sprintf("%sPartition", strings.ToLower(toSymbol(id))) +} + +func listPartitionNames(ps partitions) string { + names := []string{} + switch len(ps) { + case 1: + return ps[0].Name + case 2: + return fmt.Sprintf("%s and %s", ps[0].Name, ps[1].Name) + default: + for i, p := range ps { + if i == len(ps)-1 { + names = append(names, "and "+p.Name) + } else { + names = append(names, p.Name) + } + } + return strings.Join(names, ", ") + } +} + +func boxedBoolIfSet(msg string, v boxedBool) string { + switch v { + case boxedTrue: + return fmt.Sprintf(msg, "boxedTrue") + case boxedFalse: + return fmt.Sprintf(msg, "boxedFalse") + default: + return "" + } +} + +func stringIfSet(msg, v string) string { + if len(v) == 0 { + return "" + } + + return fmt.Sprintf(msg, v) +} + +func stringSliceIfSet(msg string, vs []string) string { + if len(vs) == 0 { + return "" + } + + names := []string{} + for _, v := range vs { + names = append(names, `"`+v+`"`) + } + + return fmt.Sprintf(msg, strings.Join(names, ",")) +} + +func endpointIsSet(v endpoint) bool { + return !reflect.DeepEqual(v, endpoint{}) +} + +func serviceSet(ps partitions) map[string]struct{} { + set := map[string]struct{}{} + for _, p := range ps { + for id := range p.Services { + set[id] = struct{}{} + } + } + + return set +} + +var funcMap = template.FuncMap{ + "ToSymbol": toSymbol, + "QuoteString": quoteString, + "RegionConst": regionConstName, + "PartitionGetter": partitionGetter, + "PartitionVarName": partitionVarName, + "ListPartitionNames": listPartitionNames, + "BoxedBoolIfSet": boxedBoolIfSet, + "StringIfSet": stringIfSet, + "StringSliceIfSet": stringSliceIfSet, + "EndpointIsSet": endpointIsSet, + "ServicesSet": serviceSet, +} + +const v3Tmpl = ` +{{ define "defaults" -}} +// Code generated by aws/endpoints/v3model_codegen.go. DO NOT EDIT. + +package endpoints + +import ( + "regexp" +) + + {{ template "partition consts" . }} + + {{ range $_, $partition := . 
}} + {{ template "partition region consts" $partition }} + {{ end }} + + {{ template "service consts" . }} + + {{ template "endpoint resolvers" . }} +{{- end }} + +{{ define "partition consts" }} + // Partition identifiers + const ( + {{ range $_, $p := . -}} + {{ ToSymbol $p.ID }}PartitionID = {{ QuoteString $p.ID }} // {{ $p.Name }} partition. + {{ end -}} + ) +{{- end }} + +{{ define "partition region consts" }} + // {{ .Name }} partition's regions. + const ( + {{ range $id, $region := .Regions -}} + {{ ToSymbol $id }}RegionID = {{ QuoteString $id }} // {{ $region.Description }}. + {{ end -}} + ) +{{- end }} + +{{ define "service consts" }} + // Service identifiers + const ( + {{ $serviceSet := ServicesSet . -}} + {{ range $id, $_ := $serviceSet -}} + {{ ToSymbol $id }}ServiceID = {{ QuoteString $id }} // {{ ToSymbol $id }}. + {{ end -}} + ) +{{- end }} + +{{ define "endpoint resolvers" }} + // DefaultResolver returns an Endpoint resolver that will be able + // to resolve endpoints for: {{ ListPartitionNames . }}. + // + // Use DefaultPartitions() to get the list of the default partitions. + func DefaultResolver() Resolver { + return defaultPartitions + } + + // DefaultPartitions returns a list of the partitions the SDK is bundled + // with. The available partitions are: {{ ListPartitionNames . }}. + // + // partitions := endpoints.DefaultPartitions + // for _, p := range partitions { + // // ... inspect partitions + // } + func DefaultPartitions() []Partition { + return defaultPartitions.Partitions() + } + + var defaultPartitions = partitions{ + {{ range $_, $partition := . -}} + {{ PartitionVarName $partition.ID }}, + {{ end }} + } + + {{ range $_, $partition := . -}} + {{ $name := PartitionGetter $partition.ID -}} + // {{ $name }} returns the Resolver for {{ $partition.Name }}. + func {{ $name }}() Partition { + return {{ PartitionVarName $partition.ID }}.Partition() + } + var {{ PartitionVarName $partition.ID }} = {{ template "gocode Partition" $partition }} + {{ end }} +{{ end }} + +{{ define "default partitions" }} + func DefaultPartitions() []Partition { + return []partition{ + {{ range $_, $partition := . -}} + // {{ ToSymbol $partition.ID}}Partition(), + {{ end }} + } + } +{{ end }} + +{{ define "gocode Partition" -}} +partition{ + {{ StringIfSet "ID: %q,\n" .ID -}} + {{ StringIfSet "Name: %q,\n" .Name -}} + {{ StringIfSet "DNSSuffix: %q,\n" .DNSSuffix -}} + RegionRegex: {{ template "gocode RegionRegex" .RegionRegex }}, + {{ if EndpointIsSet .Defaults -}} + Defaults: {{ template "gocode Endpoint" .Defaults }}, + {{- end }} + Regions: {{ template "gocode Regions" .Regions }}, + Services: {{ template "gocode Services" .Services }}, +} +{{- end }} + +{{ define "gocode RegionRegex" -}} +regionRegex{ + Regexp: func() *regexp.Regexp{ + reg, _ := regexp.Compile({{ QuoteString .Regexp.String }}) + return reg + }(), +} +{{- end }} + +{{ define "gocode Regions" -}} +regions{ + {{ range $id, $region := . -}} + "{{ $id }}": {{ template "gocode Region" $region }}, + {{ end -}} +} +{{- end }} + +{{ define "gocode Region" -}} +region{ + {{ StringIfSet "Description: %q,\n" .Description -}} +} +{{- end }} + +{{ define "gocode Services" -}} +services{ + {{ range $id, $service := . 
-}} + "{{ $id }}": {{ template "gocode Service" $service }}, + {{ end }} +} +{{- end }} + +{{ define "gocode Service" -}} +service{ + {{ StringIfSet "PartitionEndpoint: %q,\n" .PartitionEndpoint -}} + {{ BoxedBoolIfSet "IsRegionalized: %s,\n" .IsRegionalized -}} + {{ if EndpointIsSet .Defaults -}} + Defaults: {{ template "gocode Endpoint" .Defaults -}}, + {{- end }} + {{ if .Endpoints -}} + Endpoints: {{ template "gocode Endpoints" .Endpoints }}, + {{- end }} +} +{{- end }} + +{{ define "gocode Endpoints" -}} +endpoints{ + {{ range $id, $endpoint := . -}} + "{{ $id }}": {{ template "gocode Endpoint" $endpoint }}, + {{ end }} +} +{{- end }} + +{{ define "gocode Endpoint" -}} +endpoint{ + {{ StringIfSet "Hostname: %q,\n" .Hostname -}} + {{ StringIfSet "SSLCommonName: %q,\n" .SSLCommonName -}} + {{ StringSliceIfSet "Protocols: []string{%s},\n" .Protocols -}} + {{ StringSliceIfSet "SignatureVersions: []string{%s},\n" .SignatureVersions -}} + {{ if or .CredentialScope.Region .CredentialScope.Service -}} + CredentialScope: credentialScope{ + {{ StringIfSet "Region: %q,\n" .CredentialScope.Region -}} + {{ StringIfSet "Service: %q,\n" .CredentialScope.Service -}} + }, + {{- end }} + {{ BoxedBoolIfSet "HasDualStack: %s,\n" .HasDualStack -}} + {{ StringIfSet "DualStackHostname: %q,\n" .DualStackHostname -}} + +} +{{- end }} +` diff --git a/vendor/github.com/aws/aws-sdk-go/aws/errors.go b/vendor/github.com/aws/aws-sdk-go/aws/errors.go new file mode 100644 index 00000000..57663616 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/errors.go @@ -0,0 +1,17 @@ +package aws + +import "github.com/aws/aws-sdk-go/aws/awserr" + +var ( + // ErrMissingRegion is an error that is returned if region configuration is + // not found. + // + // @readonly + ErrMissingRegion = awserr.New("MissingRegion", "could not find region configuration", nil) + + // ErrMissingEndpoint is an error that is returned if an endpoint cannot be + // resolved for a service. + // + // @readonly + ErrMissingEndpoint = awserr.New("MissingEndpoint", "'Endpoint' configuration is required for this service", nil) +) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/jsonvalue.go b/vendor/github.com/aws/aws-sdk-go/aws/jsonvalue.go new file mode 100644 index 00000000..91a6f277 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/jsonvalue.go @@ -0,0 +1,12 @@ +package aws + +// JSONValue is a representation of a grab bag type that will be marshaled +// into a json string. This type can be used just like any other map. +// +// Example: +// +// values := aws.JSONValue{ +// "Foo": "Bar", +// } +// values["Baz"] = "Qux" +type JSONValue map[string]interface{} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/logger.go b/vendor/github.com/aws/aws-sdk-go/aws/logger.go new file mode 100644 index 00000000..3babb5ab --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/logger.go @@ -0,0 +1,112 @@ +package aws + +import ( + "log" + "os" +) + +// A LogLevelType defines the level logging should be performed at. Used to instruct +// the SDK which statements should be logged. +type LogLevelType uint + +// LogLevel returns the pointer to a LogLevel. Should be used to workaround +// not being able to take the address of a non-composite literal. +func LogLevel(l LogLevelType) *LogLevelType { + return &l +} + +// Value returns the LogLevel value or the default value LogOff if the LogLevel +// is nil. Safe to use on nil value LogLevelTypes. 
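// Editor's note: illustrative sketch, not part of the upstream source, showing
// how the log levels defined below are typically wired into a client
// configuration; the exact configuration shown is only an example.
//
//    cfg := &aws.Config{
//        LogLevel: aws.LogLevel(aws.LogDebugWithHTTPBody),
//        Logger:   aws.NewDefaultLogger(),
//    }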
+func (l *LogLevelType) Value() LogLevelType { + if l != nil { + return *l + } + return LogOff +} + +// Matches returns true if the v LogLevel is enabled by this LogLevel. Should be +// used with logging sub levels. Is safe to use on nil value LogLevelTypes. If +// LogLevel is nil, will default to LogOff comparison. +func (l *LogLevelType) Matches(v LogLevelType) bool { + c := l.Value() + return c&v == v +} + +// AtLeast returns true if this LogLevel is at least high enough to satisfies v. +// Is safe to use on nil value LogLevelTypes. If LogLevel is nil, will default +// to LogOff comparison. +func (l *LogLevelType) AtLeast(v LogLevelType) bool { + c := l.Value() + return c >= v +} + +const ( + // LogOff states that no logging should be performed by the SDK. This is the + // default state of the SDK, and should be use to disable all logging. + LogOff LogLevelType = iota * 0x1000 + + // LogDebug state that debug output should be logged by the SDK. This should + // be used to inspect request made and responses received. + LogDebug +) + +// Debug Logging Sub Levels +const ( + // LogDebugWithSigning states that the SDK should log request signing and + // presigning events. This should be used to log the signing details of + // requests for debugging. Will also enable LogDebug. + LogDebugWithSigning LogLevelType = LogDebug | (1 << iota) + + // LogDebugWithHTTPBody states the SDK should log HTTP request and response + // HTTP bodys in addition to the headers and path. This should be used to + // see the body content of requests and responses made while using the SDK + // Will also enable LogDebug. + LogDebugWithHTTPBody + + // LogDebugWithRequestRetries states the SDK should log when service requests will + // be retried. This should be used to log when you want to log when service + // requests are being retried. Will also enable LogDebug. + LogDebugWithRequestRetries + + // LogDebugWithRequestErrors states the SDK should log when service requests fail + // to build, send, validate, or unmarshal. + LogDebugWithRequestErrors +) + +// A Logger is a minimalistic interface for the SDK to log messages to. Should +// be used to provide custom logging writers for the SDK to use. +type Logger interface { + Log(...interface{}) +} + +// A LoggerFunc is a convenience type to convert a function taking a variadic +// list of arguments and wrap it so the Logger interface can be used. +// +// Example: +// s3.New(sess, &aws.Config{Logger: aws.LoggerFunc(func(args ...interface{}) { +// fmt.Fprintln(os.Stdout, args...) +// })}) +type LoggerFunc func(...interface{}) + +// Log calls the wrapped function with the arguments provided +func (f LoggerFunc) Log(args ...interface{}) { + f(args...) +} + +// NewDefaultLogger returns a Logger which will write log messages to stdout, and +// use same formatting runes as the stdlib log.Logger +func NewDefaultLogger() Logger { + return &defaultLogger{ + logger: log.New(os.Stdout, "", log.LstdFlags), + } +} + +// A defaultLogger provides a minimalistic logger satisfying the Logger interface. +type defaultLogger struct { + logger *log.Logger +} + +// Log logs the parameters to the stdlib logger. See log.Println. +func (l defaultLogger) Log(args ...interface{}) { + l.logger.Println(args...) 
+} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/connection_reset_error.go b/vendor/github.com/aws/aws-sdk-go/aws/request/connection_reset_error.go new file mode 100644 index 00000000..271da432 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/connection_reset_error.go @@ -0,0 +1,19 @@ +// +build !appengine,!plan9 + +package request + +import ( + "net" + "os" + "syscall" +) + +func isErrConnectionReset(err error) bool { + if opErr, ok := err.(*net.OpError); ok { + if sysErr, ok := opErr.Err.(*os.SyscallError); ok { + return sysErr.Err == syscall.ECONNRESET + } + } + + return false +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/connection_reset_error_other.go b/vendor/github.com/aws/aws-sdk-go/aws/request/connection_reset_error_other.go new file mode 100644 index 00000000..daf9eca4 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/connection_reset_error_other.go @@ -0,0 +1,11 @@ +// +build appengine plan9 + +package request + +import ( + "strings" +) + +func isErrConnectionReset(err error) bool { + return strings.Contains(err.Error(), "connection reset") +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go b/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go new file mode 100644 index 00000000..802ac88a --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go @@ -0,0 +1,256 @@ +package request + +import ( + "fmt" + "strings" +) + +// A Handlers provides a collection of request handlers for various +// stages of handling requests. +type Handlers struct { + Validate HandlerList + Build HandlerList + Sign HandlerList + Send HandlerList + ValidateResponse HandlerList + Unmarshal HandlerList + UnmarshalMeta HandlerList + UnmarshalError HandlerList + Retry HandlerList + AfterRetry HandlerList + Complete HandlerList +} + +// Copy returns of this handler's lists. +func (h *Handlers) Copy() Handlers { + return Handlers{ + Validate: h.Validate.copy(), + Build: h.Build.copy(), + Sign: h.Sign.copy(), + Send: h.Send.copy(), + ValidateResponse: h.ValidateResponse.copy(), + Unmarshal: h.Unmarshal.copy(), + UnmarshalError: h.UnmarshalError.copy(), + UnmarshalMeta: h.UnmarshalMeta.copy(), + Retry: h.Retry.copy(), + AfterRetry: h.AfterRetry.copy(), + Complete: h.Complete.copy(), + } +} + +// Clear removes callback functions for all handlers +func (h *Handlers) Clear() { + h.Validate.Clear() + h.Build.Clear() + h.Send.Clear() + h.Sign.Clear() + h.Unmarshal.Clear() + h.UnmarshalMeta.Clear() + h.UnmarshalError.Clear() + h.ValidateResponse.Clear() + h.Retry.Clear() + h.AfterRetry.Clear() + h.Complete.Clear() +} + +// A HandlerListRunItem represents an entry in the HandlerList which +// is being run. +type HandlerListRunItem struct { + Index int + Handler NamedHandler + Request *Request +} + +// A HandlerList manages zero or more handlers in a list. +type HandlerList struct { + list []NamedHandler + + // Called after each request handler in the list is called. If set + // and the func returns true the HandlerList will continue to iterate + // over the request handlers. If false is returned the HandlerList + // will stop iterating. + // + // Should be used if extra logic to be performed between each handler + // in the list. This can be used to terminate a list's iteration + // based on a condition such as error like, HandlerListStopOnError. + // Or for logging like HandlerListLogItem. + AfterEachFn func(item HandlerListRunItem) bool +} + +// A NamedHandler is a struct that contains a name and function callback. 
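// Editor's note: illustrative sketch, not part of the upstream source. A
// NamedHandler can be attached to any of the handler list stages above; svc is
// assumed to be an SDK service client exposing its Handlers, and the handler
// name is an example only.
//
//    svc.Handlers.Send.PushFrontNamed(request.NamedHandler{
//        Name: "mylib.LogOperation",
//        Fn: func(r *request.Request) {
//            fmt.Println("sending", r.ClientInfo.ServiceName, r.Operation.Name)
//        },
//    })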
+type NamedHandler struct { + Name string + Fn func(*Request) +} + +// copy creates a copy of the handler list. +func (l *HandlerList) copy() HandlerList { + n := HandlerList{ + AfterEachFn: l.AfterEachFn, + } + if len(l.list) == 0 { + return n + } + + n.list = append(make([]NamedHandler, 0, len(l.list)), l.list...) + return n +} + +// Clear clears the handler list. +func (l *HandlerList) Clear() { + l.list = l.list[0:0] +} + +// Len returns the number of handlers in the list. +func (l *HandlerList) Len() int { + return len(l.list) +} + +// PushBack pushes handler f to the back of the handler list. +func (l *HandlerList) PushBack(f func(*Request)) { + l.PushBackNamed(NamedHandler{"__anonymous", f}) +} + +// PushBackNamed pushes named handler f to the back of the handler list. +func (l *HandlerList) PushBackNamed(n NamedHandler) { + if cap(l.list) == 0 { + l.list = make([]NamedHandler, 0, 5) + } + l.list = append(l.list, n) +} + +// PushFront pushes handler f to the front of the handler list. +func (l *HandlerList) PushFront(f func(*Request)) { + l.PushFrontNamed(NamedHandler{"__anonymous", f}) +} + +// PushFrontNamed pushes named handler f to the front of the handler list. +func (l *HandlerList) PushFrontNamed(n NamedHandler) { + if cap(l.list) == len(l.list) { + // Allocating new list required + l.list = append([]NamedHandler{n}, l.list...) + } else { + // Enough room to prepend into list. + l.list = append(l.list, NamedHandler{}) + copy(l.list[1:], l.list) + l.list[0] = n + } +} + +// Remove removes a NamedHandler n +func (l *HandlerList) Remove(n NamedHandler) { + l.RemoveByName(n.Name) +} + +// RemoveByName removes a NamedHandler by name. +func (l *HandlerList) RemoveByName(name string) { + for i := 0; i < len(l.list); i++ { + m := l.list[i] + if m.Name == name { + // Shift array preventing creating new arrays + copy(l.list[i:], l.list[i+1:]) + l.list[len(l.list)-1] = NamedHandler{} + l.list = l.list[:len(l.list)-1] + + // decrement list so next check to length is correct + i-- + } + } +} + +// SwapNamed will swap out any existing handlers with the same name as the +// passed in NamedHandler returning true if handlers were swapped. False is +// returned otherwise. +func (l *HandlerList) SwapNamed(n NamedHandler) (swapped bool) { + for i := 0; i < len(l.list); i++ { + if l.list[i].Name == n.Name { + l.list[i].Fn = n.Fn + swapped = true + } + } + + return swapped +} + +// SetBackNamed will replace the named handler if it exists in the handler list. +// If the handler does not exist the handler will be added to the end of the list. +func (l *HandlerList) SetBackNamed(n NamedHandler) { + if !l.SwapNamed(n) { + l.PushBackNamed(n) + } +} + +// SetFrontNamed will replace the named handler if it exists in the handler list. +// If the handler does not exist the handler will be added to the beginning of +// the list. +func (l *HandlerList) SetFrontNamed(n NamedHandler) { + if !l.SwapNamed(n) { + l.PushFrontNamed(n) + } +} + +// Run executes all handlers in the list with a given request object. +func (l *HandlerList) Run(r *Request) { + for i, h := range l.list { + h.Fn(r) + item := HandlerListRunItem{ + Index: i, Handler: h, Request: r, + } + if l.AfterEachFn != nil && !l.AfterEachFn(item) { + return + } + } +} + +// HandlerListLogItem logs the request handler and the state of the +// request's Error value. Always returns true to continue iterating +// request handlers in a HandlerList. 
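// Editor's note: illustrative sketch, not part of the upstream source, showing
// the AfterEachFn hook together with the helpers defined here; req and the
// handler functions are assumptions for the example.
//
//    var l request.HandlerList
//    l.AfterEachFn = request.HandlerListStopOnError
//    l.PushBackNamed(request.NamedHandler{Name: "step.validate", Fn: validateFn})
//    l.PushBackNamed(request.NamedHandler{Name: "step.build", Fn: buildFn})
//    l.Run(req) // stops early if a handler sets req.Error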
+func HandlerListLogItem(item HandlerListRunItem) bool { + if item.Request.Config.Logger == nil { + return true + } + item.Request.Config.Logger.Log("DEBUG: RequestHandler", + item.Index, item.Handler.Name, item.Request.Error) + + return true +} + +// HandlerListStopOnError returns false to stop the HandlerList iterating +// over request handlers if Request.Error is not nil. True otherwise +// to continue iterating. +func HandlerListStopOnError(item HandlerListRunItem) bool { + return item.Request.Error == nil +} + +// WithAppendUserAgent will add a string to the user agent prefixed with a +// single white space. +func WithAppendUserAgent(s string) Option { + return func(r *Request) { + r.Handlers.Build.PushBack(func(r2 *Request) { + AddToUserAgent(r, s) + }) + } +} + +// MakeAddToUserAgentHandler will add the name/version pair to the User-Agent request +// header. If the extra parameters are provided they will be added as metadata to the +// name/version pair resulting in the following format. +// "name/version (extra0; extra1; ...)" +// The user agent part will be concatenated with this current request's user agent string. +func MakeAddToUserAgentHandler(name, version string, extra ...string) func(*Request) { + ua := fmt.Sprintf("%s/%s", name, version) + if len(extra) > 0 { + ua += fmt.Sprintf(" (%s)", strings.Join(extra, "; ")) + } + return func(r *Request) { + AddToUserAgent(r, ua) + } +} + +// MakeAddToUserAgentFreeFormHandler adds the input to the User-Agent request header. +// The input string will be concatenated with the current request's user agent string. +func MakeAddToUserAgentFreeFormHandler(s string) func(*Request) { + return func(r *Request) { + AddToUserAgent(r, s) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/http_request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/http_request.go new file mode 100644 index 00000000..79f79602 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/http_request.go @@ -0,0 +1,24 @@ +package request + +import ( + "io" + "net/http" + "net/url" +) + +func copyHTTPRequest(r *http.Request, body io.ReadCloser) *http.Request { + req := new(http.Request) + *req = *r + req.URL = &url.URL{} + *req.URL = *r.URL + req.Body = body + + req.Header = http.Header{} + for k, v := range r.Header { + for _, vv := range v { + req.Header.Add(k, vv) + } + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/offset_reader.go b/vendor/github.com/aws/aws-sdk-go/aws/request/offset_reader.go new file mode 100644 index 00000000..b0c2ef4f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/offset_reader.go @@ -0,0 +1,60 @@ +package request + +import ( + "io" + "sync" + + "github.com/aws/aws-sdk-go/internal/sdkio" +) + +// offsetReader is a thread-safe io.ReadCloser to prevent racing +// with retrying requests +type offsetReader struct { + buf io.ReadSeeker + lock sync.Mutex + closed bool +} + +func newOffsetReader(buf io.ReadSeeker, offset int64) *offsetReader { + reader := &offsetReader{} + buf.Seek(offset, sdkio.SeekStart) + + reader.buf = buf + return reader +} + +// Close will close the instance of the offset reader's access to +// the underlying io.ReadSeeker. 
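// Editor's note: illustrative sketch, not part of the upstream source, of the
// user-agent helpers defined in handlers.go above; svc is assumed to be an SDK
// service client, and the name/version values are examples only.
//
//    svc.Handlers.Build.PushBackNamed(request.NamedHandler{
//        Name: "mylib.UserAgent",
//        Fn:   request.MakeAddToUserAgentHandler("mylib", "1.2.3", "extra-info"),
//    })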
+func (o *offsetReader) Close() error { + o.lock.Lock() + defer o.lock.Unlock() + o.closed = true + return nil +} + +// Read is a thread-safe read of the underlying io.ReadSeeker +func (o *offsetReader) Read(p []byte) (int, error) { + o.lock.Lock() + defer o.lock.Unlock() + + if o.closed { + return 0, io.EOF + } + + return o.buf.Read(p) +} + +// Seek is a thread-safe seeking operation. +func (o *offsetReader) Seek(offset int64, whence int) (int64, error) { + o.lock.Lock() + defer o.lock.Unlock() + + return o.buf.Seek(offset, whence) +} + +// CloseAndCopy will return a new offsetReader with a copy of the old buffer +// and close the old buffer. +func (o *offsetReader) CloseAndCopy(offset int64) *offsetReader { + o.Close() + return newOffsetReader(o.buf, offset) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go new file mode 100644 index 00000000..69b7a01a --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go @@ -0,0 +1,654 @@ +package request + +import ( + "bytes" + "fmt" + "io" + "net" + "net/http" + "net/url" + "reflect" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/internal/sdkio" +) + +const ( + // ErrCodeSerialization is the serialization error code that is received + // during protocol unmarshaling. + ErrCodeSerialization = "SerializationError" + + // ErrCodeRead is an error that is returned during HTTP reads. + ErrCodeRead = "ReadError" + + // ErrCodeResponseTimeout is the connection timeout error that is received + // during body reads. + ErrCodeResponseTimeout = "ResponseTimeout" + + // ErrCodeInvalidPresignExpire is returned when the expire time provided to + // presign is invalid + ErrCodeInvalidPresignExpire = "InvalidPresignExpireError" + + // CanceledErrorCode is the error code that will be returned by an + // API request that was canceled. Requests given a aws.Context may + // return this error when canceled. + CanceledErrorCode = "RequestCanceled" +) + +// A Request is the service request to be made. +type Request struct { + Config aws.Config + ClientInfo metadata.ClientInfo + Handlers Handlers + + Retryer + Time time.Time + Operation *Operation + HTTPRequest *http.Request + HTTPResponse *http.Response + Body io.ReadSeeker + BodyStart int64 // offset from beginning of Body that the request body starts + Params interface{} + Error error + Data interface{} + RequestID string + RetryCount int + Retryable *bool + RetryDelay time.Duration + NotHoist bool + SignedHeaderVals http.Header + LastSignedAt time.Time + DisableFollowRedirects bool + + // A value greater than 0 instructs the request to be signed as Presigned URL + // You should not set this field directly. Instead use Request's + // Presign or PresignRequest methods. + ExpireTime time.Duration + + context aws.Context + + built bool + + // Need to persist an intermediate body between the input Body and HTTP + // request body because the HTTP Client's transport can maintain a reference + // to the HTTP request's body after the client has returned. This value is + // safe to use concurrently and wrap the input Body for each HTTP request. + safeBody *offsetReader +} + +// An Operation is the service API operation to be made. 
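// Editor's note: illustrative sketch, not part of the upstream source, showing
// how callers typically detect the CanceledErrorCode defined above; err is
// assumed to come from a request sent with a context, and the awserr package
// is assumed to be imported.
//
//    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
//        // the request's context was canceled before it completed
//    }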
+type Operation struct { + Name string + HTTPMethod string + HTTPPath string + *Paginator + + BeforePresignFn func(r *Request) error +} + +// New returns a new Request pointer for the service API +// operation and parameters. +// +// Params is any value of input parameters to be the request payload. +// Data is pointer value to an object which the request's response +// payload will be deserialized to. +func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers, + retryer Retryer, operation *Operation, params interface{}, data interface{}) *Request { + + method := operation.HTTPMethod + if method == "" { + method = "POST" + } + + httpReq, _ := http.NewRequest(method, "", nil) + + var err error + httpReq.URL, err = url.Parse(clientInfo.Endpoint + operation.HTTPPath) + if err != nil { + httpReq.URL = &url.URL{} + err = awserr.New("InvalidEndpointURL", "invalid endpoint uri", err) + } + + SanitizeHostForHeader(httpReq) + + r := &Request{ + Config: cfg, + ClientInfo: clientInfo, + Handlers: handlers.Copy(), + + Retryer: retryer, + Time: time.Now(), + ExpireTime: 0, + Operation: operation, + HTTPRequest: httpReq, + Body: nil, + Params: params, + Error: err, + Data: data, + } + r.SetBufferBody([]byte{}) + + return r +} + +// A Option is a functional option that can augment or modify a request when +// using a WithContext API operation method. +type Option func(*Request) + +// WithGetResponseHeader builds a request Option which will retrieve a single +// header value from the HTTP Response. If there are multiple values for the +// header key use WithGetResponseHeaders instead to access the http.Header +// map directly. The passed in val pointer must be non-nil. +// +// This Option can be used multiple times with a single API operation. +// +// var id2, versionID string +// svc.PutObjectWithContext(ctx, params, +// request.WithGetResponseHeader("x-amz-id-2", &id2), +// request.WithGetResponseHeader("x-amz-version-id", &versionID), +// ) +func WithGetResponseHeader(key string, val *string) Option { + return func(r *Request) { + r.Handlers.Complete.PushBack(func(req *Request) { + *val = req.HTTPResponse.Header.Get(key) + }) + } +} + +// WithGetResponseHeaders builds a request Option which will retrieve the +// headers from the HTTP response and assign them to the passed in headers +// variable. The passed in headers pointer must be non-nil. +// +// var headers http.Header +// svc.PutObjectWithContext(ctx, params, request.WithGetResponseHeaders(&headers)) +func WithGetResponseHeaders(headers *http.Header) Option { + return func(r *Request) { + r.Handlers.Complete.PushBack(func(req *Request) { + *headers = req.HTTPResponse.Header + }) + } +} + +// WithLogLevel is a request option that will set the request to use a specific +// log level when the request is made. +// +// svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebugWithHTTPBody) +func WithLogLevel(l aws.LogLevelType) Option { + return func(r *Request) { + r.Config.LogLevel = aws.LogLevel(l) + } +} + +// ApplyOptions will apply each option to the request calling them in the order +// the were provided. +func (r *Request) ApplyOptions(opts ...Option) { + for _, opt := range opts { + opt(r) + } +} + +// Context will always returns a non-nil context. If Request does not have a +// context aws.BackgroundContext will be returned. 
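// Editor's note: illustrative sketch, not part of the upstream source, of
// wiring a cancelable context into a request with SetContext (defined below);
// svc, input, and GetObjectRequest stand in for any SDK client request method
// and are assumptions for the example, with the context and time packages
// assumed to be imported.
//
//    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
//    defer cancel()
//
//    req, out := svc.GetObjectRequest(input)
//    req.SetContext(ctx)
//    if err := req.Send(); err != nil {
//        // handle error, including request.CanceledErrorCode on timeout
//    }
//    _ = out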
+func (r *Request) Context() aws.Context { + if r.context != nil { + return r.context + } + return aws.BackgroundContext() +} + +// SetContext adds a Context to the current request that can be used to cancel +// a in-flight request. The Context value must not be nil, or this method will +// panic. +// +// Unlike http.Request.WithContext, SetContext does not return a copy of the +// Request. It is not safe to use use a single Request value for multiple +// requests. A new Request should be created for each API operation request. +// +// Go 1.6 and below: +// The http.Request's Cancel field will be set to the Done() value of +// the context. This will overwrite the Cancel field's value. +// +// Go 1.7 and above: +// The http.Request.WithContext will be used to set the context on the underlying +// http.Request. This will create a shallow copy of the http.Request. The SDK +// may create sub contexts in the future for nested requests such as retries. +func (r *Request) SetContext(ctx aws.Context) { + if ctx == nil { + panic("context cannot be nil") + } + setRequestContext(r, ctx) +} + +// WillRetry returns if the request's can be retried. +func (r *Request) WillRetry() bool { + if !aws.IsReaderSeekable(r.Body) && r.HTTPRequest.Body != NoBody { + return false + } + return r.Error != nil && aws.BoolValue(r.Retryable) && r.RetryCount < r.MaxRetries() +} + +// ParamsFilled returns if the request's parameters have been populated +// and the parameters are valid. False is returned if no parameters are +// provided or invalid. +func (r *Request) ParamsFilled() bool { + return r.Params != nil && reflect.ValueOf(r.Params).Elem().IsValid() +} + +// DataFilled returns true if the request's data for response deserialization +// target has been set and is a valid. False is returned if data is not +// set, or is invalid. +func (r *Request) DataFilled() bool { + return r.Data != nil && reflect.ValueOf(r.Data).Elem().IsValid() +} + +// SetBufferBody will set the request's body bytes that will be sent to +// the service API. +func (r *Request) SetBufferBody(buf []byte) { + r.SetReaderBody(bytes.NewReader(buf)) +} + +// SetStringBody sets the body of the request to be backed by a string. +func (r *Request) SetStringBody(s string) { + r.SetReaderBody(strings.NewReader(s)) +} + +// SetReaderBody will set the request's body reader. +func (r *Request) SetReaderBody(reader io.ReadSeeker) { + r.Body = reader + r.BodyStart, _ = reader.Seek(0, sdkio.SeekCurrent) // Get the Bodies current offset. + r.ResetBody() +} + +// Presign returns the request's signed URL. Error will be returned +// if the signing fails. +// +// It is invalid to create a presigned URL with a expire duration 0 or less. An +// error is returned if expire duration is 0 or less. +func (r *Request) Presign(expire time.Duration) (string, error) { + r = r.copy() + + // Presign requires all headers be hoisted. There is no way to retrieve + // the signed headers not hoisted without this. Making the presigned URL + // useless. + r.NotHoist = false + + u, _, err := getPresignedURL(r, expire) + return u, err +} + +// PresignRequest behaves just like presign, with the addition of returning a +// set of headers that were signed. +// +// It is invalid to create a presigned URL with a expire duration 0 or less. An +// error is returned if expire duration is 0 or less. +// +// Returns the URL string for the API operation with signature in the query string, +// and the HTTP headers that were included in the signature. 
These headers must +// be included in any HTTP request made with the presigned URL. +// +// To prevent hoisting any headers to the query string set NotHoist to true on +// this Request value prior to calling PresignRequest. +func (r *Request) PresignRequest(expire time.Duration) (string, http.Header, error) { + r = r.copy() + return getPresignedURL(r, expire) +} + +// IsPresigned returns true if the request represents a presigned API url. +func (r *Request) IsPresigned() bool { + return r.ExpireTime != 0 +} + +func getPresignedURL(r *Request, expire time.Duration) (string, http.Header, error) { + if expire <= 0 { + return "", nil, awserr.New( + ErrCodeInvalidPresignExpire, + "presigned URL requires an expire duration greater than 0", + nil, + ) + } + + r.ExpireTime = expire + + if r.Operation.BeforePresignFn != nil { + if err := r.Operation.BeforePresignFn(r); err != nil { + return "", nil, err + } + } + + if err := r.Sign(); err != nil { + return "", nil, err + } + + return r.HTTPRequest.URL.String(), r.SignedHeaderVals, nil +} + +func debugLogReqError(r *Request, stage string, retrying bool, err error) { + if !r.Config.LogLevel.Matches(aws.LogDebugWithRequestErrors) { + return + } + + retryStr := "not retrying" + if retrying { + retryStr = "will retry" + } + + r.Config.Logger.Log(fmt.Sprintf("DEBUG: %s %s/%s failed, %s, error %v", + stage, r.ClientInfo.ServiceName, r.Operation.Name, retryStr, err)) +} + +// Build will build the request's object so it can be signed and sent +// to the service. Build will also validate all the request's parameters. +// Any additional build Handlers set on this request will be run +// in the order they were set. +// +// The request will only be built once. Multiple calls to build will have +// no effect. +// +// If any Validate or Build errors occur the build will stop and the error +// which occurred will be returned. +func (r *Request) Build() error { + if !r.built { + r.Handlers.Validate.Run(r) + if r.Error != nil { + debugLogReqError(r, "Validate Request", false, r.Error) + return r.Error + } + r.Handlers.Build.Run(r) + if r.Error != nil { + debugLogReqError(r, "Build Request", false, r.Error) + return r.Error + } + r.built = true + } + + return r.Error +} + +// Sign will sign the request returning error if errors are encountered. +// +// Send will build the request prior to signing. All Sign Handlers will +// be executed in the order they were set. +func (r *Request) Sign() error { + r.Build() + if r.Error != nil { + debugLogReqError(r, "Build Request", false, r.Error) + return r.Error + } + + r.Handlers.Sign.Run(r) + return r.Error +} + +func (r *Request) getNextRequestBody() (io.ReadCloser, error) { + if r.safeBody != nil { + r.safeBody.Close() + } + + r.safeBody = newOffsetReader(r.Body, r.BodyStart) + + // Go 1.8 tightened and clarified the rules code needs to use when building + // requests with the http package. Go 1.8 removed the automatic detection + // of if the Request.Body was empty, or actually had bytes in it. The SDK + // always sets the Request.Body even if it is empty and should not actually + // be sent. This is incorrect. + // + // Go 1.8 did add a http.NoBody value that the SDK can use to tell the http + // client that the request really should be sent without a body. The + // Request.Body cannot be set to nil, which is preferable, because the + // field is exported and could introduce nil pointer dereferences for users + // of the SDK if they used that field. 
+ // + // Related golang/go#18257 + l, err := aws.SeekerLen(r.Body) + if err != nil { + return nil, awserr.New(ErrCodeSerialization, "failed to compute request body size", err) + } + + var body io.ReadCloser + if l == 0 { + body = NoBody + } else if l > 0 { + body = r.safeBody + } else { + // Hack to prevent sending bodies for methods where the body + // should be ignored by the server. Sending bodies on these + // methods without an associated ContentLength will cause the + // request to socket timeout because the server does not handle + // Transfer-Encoding: chunked bodies for these methods. + // + // This would only happen if a aws.ReaderSeekerCloser was used with + // a io.Reader that was not also an io.Seeker, or did not implement + // Len() method. + switch r.Operation.HTTPMethod { + case "GET", "HEAD", "DELETE": + body = NoBody + default: + body = r.safeBody + } + } + + return body, nil +} + +// GetBody will return an io.ReadSeeker of the Request's underlying +// input body with a concurrency safe wrapper. +func (r *Request) GetBody() io.ReadSeeker { + return r.safeBody +} + +// Send will send the request returning error if errors are encountered. +// +// Send will sign the request prior to sending. All Send Handlers will +// be executed in the order they were set. +// +// Canceling a request is non-deterministic. If a request has been canceled, +// then the transport will choose, randomly, one of the state channels during +// reads or getting the connection. +// +// readLoop() and getConn(req *Request, cm connectMethod) +// https://github.com/golang/go/blob/master/src/net/http/transport.go +// +// Send will not close the request.Request's body. +func (r *Request) Send() error { + defer func() { + // Regardless of success or failure of the request trigger the Complete + // request handlers. + r.Handlers.Complete.Run(r) + }() + + for { + if aws.BoolValue(r.Retryable) { + if r.Config.LogLevel.Matches(aws.LogDebugWithRequestRetries) { + r.Config.Logger.Log(fmt.Sprintf("DEBUG: Retrying Request %s/%s, attempt %d", + r.ClientInfo.ServiceName, r.Operation.Name, r.RetryCount)) + } + + // The previous http.Request will have a reference to the r.Body + // and the HTTP Client's Transport may still be reading from + // the request's body even though the Client's Do returned. + r.HTTPRequest = copyHTTPRequest(r.HTTPRequest, nil) + r.ResetBody() + + // Closing response body to ensure that no response body is leaked + // between retry attempts. 
+ if r.HTTPResponse != nil && r.HTTPResponse.Body != nil { + r.HTTPResponse.Body.Close() + } + } + + r.Sign() + if r.Error != nil { + return r.Error + } + + r.Retryable = nil + + r.Handlers.Send.Run(r) + if r.Error != nil { + if !shouldRetryCancel(r) { + return r.Error + } + + err := r.Error + r.Handlers.Retry.Run(r) + r.Handlers.AfterRetry.Run(r) + if r.Error != nil { + debugLogReqError(r, "Send Request", false, err) + return r.Error + } + debugLogReqError(r, "Send Request", true, err) + continue + } + r.Handlers.UnmarshalMeta.Run(r) + r.Handlers.ValidateResponse.Run(r) + if r.Error != nil { + r.Handlers.UnmarshalError.Run(r) + err := r.Error + + r.Handlers.Retry.Run(r) + r.Handlers.AfterRetry.Run(r) + if r.Error != nil { + debugLogReqError(r, "Validate Response", false, err) + return r.Error + } + debugLogReqError(r, "Validate Response", true, err) + continue + } + + r.Handlers.Unmarshal.Run(r) + if r.Error != nil { + err := r.Error + r.Handlers.Retry.Run(r) + r.Handlers.AfterRetry.Run(r) + if r.Error != nil { + debugLogReqError(r, "Unmarshal Response", false, err) + return r.Error + } + debugLogReqError(r, "Unmarshal Response", true, err) + continue + } + + break + } + + return nil +} + +// copy will copy a request which will allow for local manipulation of the +// request. +func (r *Request) copy() *Request { + req := &Request{} + *req = *r + req.Handlers = r.Handlers.Copy() + op := *r.Operation + req.Operation = &op + return req +} + +// AddToUserAgent adds the string to the end of the request's current user agent. +func AddToUserAgent(r *Request, s string) { + curUA := r.HTTPRequest.Header.Get("User-Agent") + if len(curUA) > 0 { + s = curUA + " " + s + } + r.HTTPRequest.Header.Set("User-Agent", s) +} + +func shouldRetryCancel(r *Request) bool { + awsErr, ok := r.Error.(awserr.Error) + timeoutErr := false + errStr := r.Error.Error() + if ok { + if awsErr.Code() == CanceledErrorCode { + return false + } + err := awsErr.OrigErr() + netErr, netOK := err.(net.Error) + timeoutErr = netOK && netErr.Temporary() + if urlErr, ok := err.(*url.Error); !timeoutErr && ok { + errStr = urlErr.Err.Error() + } + } + + // There can be two types of canceled errors here. + // The first being a net.Error and the other being an error. + // If the request was timed out, we want to continue the retry + // process. Otherwise, return the canceled error. + return timeoutErr || + (errStr != "net/http: request canceled" && + errStr != "net/http: request canceled while waiting for connection") + +} + +// SanitizeHostForHeader removes default port from host and updates request.Host +func SanitizeHostForHeader(r *http.Request) { + host := getHost(r) + port := portOnly(host) + if port != "" && isDefaultPort(r.URL.Scheme, port) { + r.Host = stripPort(host) + } +} + +// Returns host from request +func getHost(r *http.Request) string { + if r.Host != "" { + return r.Host + } + + return r.URL.Host +} + +// Hostname returns u.Host, without any port number. +// +// If Host is an IPv6 literal with a port number, Hostname returns the +// IPv6 literal without the square brackets. IPv6 literals may include +// a zone identifier. +// +// Copied from the Go 1.8 standard library (net/url) +func stripPort(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return hostport + } + if i := strings.IndexByte(hostport, ']'); i != -1 { + return strings.TrimPrefix(hostport[:i], "[") + } + return hostport[:colon] +} + +// Port returns the port part of u.Host, without the leading colon. 
+// If u.Host doesn't contain a port, Port returns an empty string. +// +// Copied from the Go 1.8 standard library (net/url) +func portOnly(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return "" + } + if i := strings.Index(hostport, "]:"); i != -1 { + return hostport[i+len("]:"):] + } + if strings.Contains(hostport, "]") { + return "" + } + return hostport[colon+len(":"):] +} + +// Returns true if the specified URI is using the standard port +// (i.e. port 80 for HTTP URIs or 443 for HTTPS URIs) +func isDefaultPort(scheme, port string) bool { + if port == "" { + return true + } + + lowerCaseScheme := strings.ToLower(scheme) + if (lowerCaseScheme == "http" && port == "80") || (lowerCaseScheme == "https" && port == "443") { + return true + } + + return false +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go new file mode 100644 index 00000000..869b97a1 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go @@ -0,0 +1,39 @@ +// +build !go1.8 + +package request + +import "io" + +// NoBody is an io.ReadCloser with no bytes. Read always returns EOF +// and Close always returns nil. It can be used in an outgoing client +// request to explicitly signal that a request has zero bytes. +// An alternative, however, is to simply set Request.Body to nil. +// +// Copy of Go 1.8 NoBody type from net/http/http.go +type noBody struct{} + +func (noBody) Read([]byte) (int, error) { return 0, io.EOF } +func (noBody) Close() error { return nil } +func (noBody) WriteTo(io.Writer) (int64, error) { return 0, nil } + +// NoBody is an empty reader that will trigger the Go HTTP client to not include +// and body in the HTTP request. +var NoBody = noBody{} + +// ResetBody rewinds the request body back to its starting position, and +// set's the HTTP Request body reference. When the body is read prior +// to being sent in the HTTP request it will need to be rewound. +// +// ResetBody will automatically be called by the SDK's build handler, but if +// the request is being used directly ResetBody must be called before the request +// is Sent. SetStringBody, SetBufferBody, and SetReaderBody will automatically +// call ResetBody. +func (r *Request) ResetBody() { + body, err := r.getNextRequestBody() + if err != nil { + r.Error = err + return + } + + r.HTTPRequest.Body = body +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go new file mode 100644 index 00000000..c32fc69b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go @@ -0,0 +1,33 @@ +// +build go1.8 + +package request + +import ( + "net/http" +) + +// NoBody is a http.NoBody reader instructing Go HTTP client to not include +// and body in the HTTP request. +var NoBody = http.NoBody + +// ResetBody rewinds the request body back to its starting position, and +// set's the HTTP Request body reference. When the body is read prior +// to being sent in the HTTP request it will need to be rewound. +// +// ResetBody will automatically be called by the SDK's build handler, but if +// the request is being used directly ResetBody must be called before the request +// is Sent. SetStringBody, SetBufferBody, and SetReaderBody will automatically +// call ResetBody. +// +// Will also set the Go 1.8's http.Request.GetBody member to allow retrying +// PUT/POST redirects. 
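+//
+// A minimal sketch of the manual case described above (req is assumed to be
+// a previously built *Request that is about to be sent again):
+//
+//	req.ResetBody() // any failure is recorded on req.Error
+//	err := req.Send()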
+func (r *Request) ResetBody() { + body, err := r.getNextRequestBody() + if err != nil { + r.Error = err + return + } + + r.HTTPRequest.Body = body + r.HTTPRequest.GetBody = r.getNextRequestBody +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_context.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_context.go new file mode 100644 index 00000000..a7365cd1 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_context.go @@ -0,0 +1,14 @@ +// +build go1.7 + +package request + +import "github.com/aws/aws-sdk-go/aws" + +// setContext updates the Request to use the passed in context for cancellation. +// Context will also be used for request retry delay. +// +// Creates shallow copy of the http.Request with the WithContext method. +func setRequestContext(r *Request, ctx aws.Context) { + r.context = ctx + r.HTTPRequest = r.HTTPRequest.WithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_context_1_6.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_context_1_6.go new file mode 100644 index 00000000..307fa070 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_context_1_6.go @@ -0,0 +1,14 @@ +// +build !go1.7 + +package request + +import "github.com/aws/aws-sdk-go/aws" + +// setContext updates the Request to use the passed in context for cancellation. +// Context will also be used for request retry delay. +// +// Creates shallow copy of the http.Request with the WithContext method. +func setRequestContext(r *Request, ctx aws.Context) { + r.context = ctx + r.HTTPRequest.Cancel = ctx.Done() +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go new file mode 100644 index 00000000..a633ed5a --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go @@ -0,0 +1,264 @@ +package request + +import ( + "reflect" + "sync/atomic" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" +) + +// A Pagination provides paginating of SDK API operations which are paginatable. +// Generally you should not use this type directly, but use the "Pages" API +// operations method to automatically perform pagination for you. Such as, +// "S3.ListObjectsPages", and "S3.ListObjectsPagesWithContext" methods. +// +// Pagination differs from a Paginator type in that pagination is the type that +// does the pagination between API operations, and Paginator defines the +// configuration that will be used per page request. +// +// cont := true +// for p.Next() && cont { +// data := p.Page().(*s3.ListObjectsOutput) +// // process the page's data +// } +// return p.Err() +// +// See service client API operation Pages methods for examples how the SDK will +// use the Pagination type. +type Pagination struct { + // Function to return a Request value for each pagination request. + // Any configuration or handlers that need to be applied to the request + // prior to getting the next page should be done here before the request + // returned. + // + // NewRequest should always be built from the same API operations. It is + // undefined if different API operations are returned on subsequent calls. + NewRequest func() (*Request, error) + // EndPageOnSameToken, when enabled, will allow the paginator to stop on + // token that are the same as its previous tokens. 
+ EndPageOnSameToken bool + + started bool + prevTokens []interface{} + nextTokens []interface{} + + err error + curPage interface{} +} + +// HasNextPage will return true if Pagination is able to determine that the API +// operation has additional pages. False will be returned if there are no more +// pages remaining. +// +// Will always return true if Next has not been called yet. +func (p *Pagination) HasNextPage() bool { + if !p.started { + return true + } + + hasNextPage := len(p.nextTokens) != 0 + if p.EndPageOnSameToken { + return hasNextPage && !awsutil.DeepEqual(p.nextTokens, p.prevTokens) + } + return hasNextPage +} + +// Err returns the error Pagination encountered when retrieving the next page. +func (p *Pagination) Err() error { + return p.err +} + +// Page returns the current page. Page should only be called after a successful +// call to Next. It is undefined what Page will return if Page is called after +// Next returns false. +func (p *Pagination) Page() interface{} { + return p.curPage +} + +// Next will attempt to retrieve the next page for the API operation. When a page +// is retrieved true will be returned. If the page cannot be retrieved, or there +// are no more pages false will be returned. +// +// Use the Page method to retrieve the current page data. The data will need +// to be cast to the API operation's output type. +// +// Use the Err method to determine if an error occurred if Page returns false. +func (p *Pagination) Next() bool { + if !p.HasNextPage() { + return false + } + + req, err := p.NewRequest() + if err != nil { + p.err = err + return false + } + + if p.started { + for i, intok := range req.Operation.InputTokens { + awsutil.SetValueAtPath(req.Params, intok, p.nextTokens[i]) + } + } + p.started = true + + err = req.Send() + if err != nil { + p.err = err + return false + } + + p.prevTokens = p.nextTokens + p.nextTokens = req.nextPageTokens() + p.curPage = req.Data + + return true +} + +// A Paginator is the configuration data that defines how an API operation +// should be paginated. This type is used by the API service models to define +// the generated pagination config for service APIs. +// +// The Pagination type is what provides iterating between pages of an API. It +// is only used to store the token metadata the SDK should use for performing +// pagination. +type Paginator struct { + InputTokens []string + OutputTokens []string + LimitToken string + TruncationToken string +} + +// nextPageTokens returns the tokens to use when asking for the next page of data. 
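+//
+// For example (purely illustrative, not taken from any service model), with
+// a paginator configured as
+//
+//	&request.Paginator{
+//		InputTokens:  []string{"NextToken"},
+//		OutputTokens: []string{"NextToken"},
+//		LimitToken:   "MaxResults",
+//	}
+//
+// this method reads the NextToken value out of the operation's output
+// (r.Data) and returns it as the token for the next page's input.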
+func (r *Request) nextPageTokens() []interface{} { + if r.Operation.Paginator == nil { + return nil + } + if r.Operation.TruncationToken != "" { + tr, _ := awsutil.ValuesAtPath(r.Data, r.Operation.TruncationToken) + if len(tr) == 0 { + return nil + } + + switch v := tr[0].(type) { + case *bool: + if !aws.BoolValue(v) { + return nil + } + case bool: + if v == false { + return nil + } + } + } + + tokens := []interface{}{} + tokenAdded := false + for _, outToken := range r.Operation.OutputTokens { + vs, _ := awsutil.ValuesAtPath(r.Data, outToken) + if len(vs) == 0 { + tokens = append(tokens, nil) + continue + } + v := vs[0] + + switch tv := v.(type) { + case *string: + if len(aws.StringValue(tv)) == 0 { + tokens = append(tokens, nil) + continue + } + case string: + if len(tv) == 0 { + tokens = append(tokens, nil) + continue + } + } + + tokenAdded = true + tokens = append(tokens, v) + } + if !tokenAdded { + return nil + } + + return tokens +} + +// Ensure a deprecated item is only logged once instead of each time its used. +func logDeprecatedf(logger aws.Logger, flag *int32, msg string) { + if logger == nil { + return + } + if atomic.CompareAndSwapInt32(flag, 0, 1) { + logger.Log(msg) + } +} + +var ( + logDeprecatedHasNextPage int32 + logDeprecatedNextPage int32 + logDeprecatedEachPage int32 +) + +// HasNextPage returns true if this request has more pages of data available. +// +// Deprecated Use Pagination type for configurable pagination of API operations +func (r *Request) HasNextPage() bool { + logDeprecatedf(r.Config.Logger, &logDeprecatedHasNextPage, + "Request.HasNextPage deprecated. Use Pagination type for configurable pagination of API operations") + + return len(r.nextPageTokens()) > 0 +} + +// NextPage returns a new Request that can be executed to return the next +// page of result data. Call .Send() on this request to execute it. +// +// Deprecated Use Pagination type for configurable pagination of API operations +func (r *Request) NextPage() *Request { + logDeprecatedf(r.Config.Logger, &logDeprecatedNextPage, + "Request.NextPage deprecated. Use Pagination type for configurable pagination of API operations") + + tokens := r.nextPageTokens() + if len(tokens) == 0 { + return nil + } + + data := reflect.New(reflect.TypeOf(r.Data).Elem()).Interface() + nr := New(r.Config, r.ClientInfo, r.Handlers, r.Retryer, r.Operation, awsutil.CopyOf(r.Params), data) + for i, intok := range nr.Operation.InputTokens { + awsutil.SetValueAtPath(nr.Params, intok, tokens[i]) + } + return nr +} + +// EachPage iterates over each page of a paginated request object. The fn +// parameter should be a function with the following sample signature: +// +// func(page *T, lastPage bool) bool { +// return true // return false to stop iterating +// } +// +// Where "T" is the structure type matching the output structure of the given +// operation. For example, a request object generated by +// DynamoDB.ListTablesRequest() would expect to see dynamodb.ListTablesOutput +// as the structure "T". The lastPage value represents whether the page is +// the last page of data or not. The return value of this function should +// return true to keep iterating or false to stop. +// +// Deprecated Use Pagination type for configurable pagination of API operations +func (r *Request) EachPage(fn func(data interface{}, isLastPage bool) (shouldContinue bool)) error { + logDeprecatedf(r.Config.Logger, &logDeprecatedEachPage, + "Request.EachPage deprecated. 
Use Pagination type for configurable pagination of API operations") + + for page := r; page != nil; page = page.NextPage() { + if err := page.Send(); err != nil { + return err + } + if getNextPage := fn(page.Data, !page.HasNextPage()); !getNextPage { + return page.Error + } + } + + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go new file mode 100644 index 00000000..f35fef21 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go @@ -0,0 +1,161 @@ +package request + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" +) + +// Retryer is an interface to control retry logic for a given service. +// The default implementation used by most services is the client.DefaultRetryer +// structure, which contains basic retry logic using exponential backoff. +type Retryer interface { + RetryRules(*Request) time.Duration + ShouldRetry(*Request) bool + MaxRetries() int +} + +// WithRetryer sets a config Retryer value to the given Config returning it +// for chaining. +func WithRetryer(cfg *aws.Config, retryer Retryer) *aws.Config { + cfg.Retryer = retryer + return cfg +} + +// retryableCodes is a collection of service response codes which are retry-able +// without any further action. +var retryableCodes = map[string]struct{}{ + "RequestError": {}, + "RequestTimeout": {}, + ErrCodeResponseTimeout: {}, + "RequestTimeoutException": {}, // Glacier's flavor of RequestTimeout +} + +var throttleCodes = map[string]struct{}{ + "ProvisionedThroughputExceededException": {}, + "Throttling": {}, + "ThrottlingException": {}, + "RequestLimitExceeded": {}, + "RequestThrottled": {}, + "TooManyRequestsException": {}, // Lambda functions + "PriorRequestNotComplete": {}, // Route53 +} + +// credsExpiredCodes is a collection of error codes which signify the credentials +// need to be refreshed. Expired tokens require refreshing of credentials, and +// resigning before the request can be retried. +var credsExpiredCodes = map[string]struct{}{ + "ExpiredToken": {}, + "ExpiredTokenException": {}, + "RequestExpired": {}, // EC2 Only +} + +func isCodeThrottle(code string) bool { + _, ok := throttleCodes[code] + return ok +} + +func isCodeRetryable(code string) bool { + if _, ok := retryableCodes[code]; ok { + return true + } + + return isCodeExpiredCreds(code) +} + +func isCodeExpiredCreds(code string) bool { + _, ok := credsExpiredCodes[code] + return ok +} + +var validParentCodes = map[string]struct{}{ + ErrCodeSerialization: {}, + ErrCodeRead: {}, +} + +type temporaryError interface { + Temporary() bool +} + +func isNestedErrorRetryable(parentErr awserr.Error) bool { + if parentErr == nil { + return false + } + + if _, ok := validParentCodes[parentErr.Code()]; !ok { + return false + } + + err := parentErr.OrigErr() + if err == nil { + return false + } + + if aerr, ok := err.(awserr.Error); ok { + return isCodeRetryable(aerr.Code()) + } + + if t, ok := err.(temporaryError); ok { + return t.Temporary() + } + + return isErrConnectionReset(err) +} + +// IsErrorRetryable returns whether the error is retryable, based on its Code. +// Returns false if error is nil. +func IsErrorRetryable(err error) bool { + if err != nil { + if aerr, ok := err.(awserr.Error); ok { + return isCodeRetryable(aerr.Code()) || isNestedErrorRetryable(aerr) + } + } + return false +} + +// IsErrorThrottle returns whether the error is to be throttled based on its code. +// Returns false if error is nil. 
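+//
+// A sketch of how caller-side retry logic might use it (the surrounding
+// request handling is assumed):
+//
+//	if err := req.Send(); err != nil {
+//		if request.IsErrorThrottle(err) {
+//			// back off before attempting the operation again
+//		}
+//	}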
+func IsErrorThrottle(err error) bool { + if err != nil { + if aerr, ok := err.(awserr.Error); ok { + return isCodeThrottle(aerr.Code()) + } + } + return false +} + +// IsErrorExpiredCreds returns whether the error code is a credential expiry error. +// Returns false if error is nil. +func IsErrorExpiredCreds(err error) bool { + if err != nil { + if aerr, ok := err.(awserr.Error); ok { + return isCodeExpiredCreds(aerr.Code()) + } + } + return false +} + +// IsErrorRetryable returns whether the error is retryable, based on its Code. +// Returns false if the request has no Error set. +// +// Alias for the utility function IsErrorRetryable +func (r *Request) IsErrorRetryable() bool { + return IsErrorRetryable(r.Error) +} + +// IsErrorThrottle returns whether the error is to be throttled based on its code. +// Returns false if the request has no Error set +// +// Alias for the utility function IsErrorThrottle +func (r *Request) IsErrorThrottle() bool { + return IsErrorThrottle(r.Error) +} + +// IsErrorExpired returns whether the error code is a credential expiry error. +// Returns false if the request has no Error set. +// +// Alias for the utility function IsErrorExpiredCreds +func (r *Request) IsErrorExpired() bool { + return IsErrorExpiredCreds(r.Error) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/timeout_read_closer.go b/vendor/github.com/aws/aws-sdk-go/aws/request/timeout_read_closer.go new file mode 100644 index 00000000..09a44eb9 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/timeout_read_closer.go @@ -0,0 +1,94 @@ +package request + +import ( + "io" + "time" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +var timeoutErr = awserr.New( + ErrCodeResponseTimeout, + "read on body has reached the timeout limit", + nil, +) + +type readResult struct { + n int + err error +} + +// timeoutReadCloser will handle body reads that take too long. +// We will return a ErrReadTimeout error if a timeout occurs. +type timeoutReadCloser struct { + reader io.ReadCloser + duration time.Duration +} + +// Read will spin off a goroutine to call the reader's Read method. We will +// select on the timer's channel or the read's channel. Whoever completes first +// will be returned. +func (r *timeoutReadCloser) Read(b []byte) (int, error) { + timer := time.NewTimer(r.duration) + c := make(chan readResult, 1) + + go func() { + n, err := r.reader.Read(b) + timer.Stop() + c <- readResult{n: n, err: err} + }() + + select { + case data := <-c: + return data.n, data.err + case <-timer.C: + return 0, timeoutErr + } +} + +func (r *timeoutReadCloser) Close() error { + return r.reader.Close() +} + +const ( + // HandlerResponseTimeout is what we use to signify the name of the + // response timeout handler. + HandlerResponseTimeout = "ResponseTimeoutHandler" +) + +// adaptToResponseTimeoutError is a handler that will replace any top level error +// to a ErrCodeResponseTimeout, if its child is that. +func adaptToResponseTimeoutError(req *Request) { + if err, ok := req.Error.(awserr.Error); ok { + aerr, ok := err.OrigErr().(awserr.Error) + if ok && aerr.Code() == ErrCodeResponseTimeout { + req.Error = aerr + } + } +} + +// WithResponseReadTimeout is a request option that will wrap the body in a timeout read closer. +// This will allow for per read timeouts. If a timeout occurred, we will return the +// ErrCodeResponseTimeout. 
+// +// svc.PutObjectWithContext(ctx, params, request.WithTimeoutReadCloser(30 * time.Second) +func WithResponseReadTimeout(duration time.Duration) Option { + return func(r *Request) { + + var timeoutHandler = NamedHandler{ + HandlerResponseTimeout, + func(req *Request) { + req.HTTPResponse.Body = &timeoutReadCloser{ + reader: req.HTTPResponse.Body, + duration: duration, + } + }} + + // remove the handler so we are not stomping over any new durations. + r.Handlers.Send.RemoveByName(HandlerResponseTimeout) + r.Handlers.Send.PushBackNamed(timeoutHandler) + + r.Handlers.Unmarshal.PushBack(adaptToResponseTimeoutError) + r.Handlers.UnmarshalError.PushBack(adaptToResponseTimeoutError) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go b/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go new file mode 100644 index 00000000..40124622 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go @@ -0,0 +1,234 @@ +package request + +import ( + "bytes" + "fmt" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +const ( + // InvalidParameterErrCode is the error code for invalid parameters errors + InvalidParameterErrCode = "InvalidParameter" + // ParamRequiredErrCode is the error code for required parameter errors + ParamRequiredErrCode = "ParamRequiredError" + // ParamMinValueErrCode is the error code for fields with too low of a + // number value. + ParamMinValueErrCode = "ParamMinValueError" + // ParamMinLenErrCode is the error code for fields without enough elements. + ParamMinLenErrCode = "ParamMinLenError" +) + +// Validator provides a way for types to perform validation logic on their +// input values that external code can use to determine if a type's values +// are valid. +type Validator interface { + Validate() error +} + +// An ErrInvalidParams provides wrapping of invalid parameter errors found when +// validating API operation input parameters. +type ErrInvalidParams struct { + // Context is the base context of the invalid parameter group. + Context string + errs []ErrInvalidParam +} + +// Add adds a new invalid parameter error to the collection of invalid +// parameters. The context of the invalid parameter will be updated to reflect +// this collection. +func (e *ErrInvalidParams) Add(err ErrInvalidParam) { + err.SetContext(e.Context) + e.errs = append(e.errs, err) +} + +// AddNested adds the invalid parameter errors from another ErrInvalidParams +// value into this collection. The nested errors will have their nested context +// updated and base context to reflect the merging. +// +// Use for nested validations errors. +func (e *ErrInvalidParams) AddNested(nestedCtx string, nested ErrInvalidParams) { + for _, err := range nested.errs { + err.SetContext(e.Context) + err.AddNestedContext(nestedCtx) + e.errs = append(e.errs, err) + } +} + +// Len returns the number of invalid parameter errors +func (e ErrInvalidParams) Len() int { + return len(e.errs) +} + +// Code returns the code of the error +func (e ErrInvalidParams) Code() string { + return InvalidParameterErrCode +} + +// Message returns the message of the error +func (e ErrInvalidParams) Message() string { + return fmt.Sprintf("%d validation error(s) found.", len(e.errs)) +} + +// Error returns the string formatted form of the invalid parameters. 
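+//
+// A sketch of the kind of Validate method that produces this error, mirroring
+// what generated clients do (the input type and field name are illustrative):
+//
+//	func (s *MyInput) Validate() error {
+//		invalidParams := request.ErrInvalidParams{Context: "MyInput"}
+//		if s.Name == nil {
+//			invalidParams.Add(request.NewErrParamRequired("Name"))
+//		}
+//		if invalidParams.Len() > 0 {
+//			return invalidParams
+//		}
+//		return nil
+//	}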
+func (e ErrInvalidParams) Error() string { + w := &bytes.Buffer{} + fmt.Fprintf(w, "%s: %s\n", e.Code(), e.Message()) + + for _, err := range e.errs { + fmt.Fprintf(w, "- %s\n", err.Message()) + } + + return w.String() +} + +// OrigErr returns the invalid parameters as a awserr.BatchedErrors value +func (e ErrInvalidParams) OrigErr() error { + return awserr.NewBatchError( + InvalidParameterErrCode, e.Message(), e.OrigErrs()) +} + +// OrigErrs returns a slice of the invalid parameters +func (e ErrInvalidParams) OrigErrs() []error { + errs := make([]error, len(e.errs)) + for i := 0; i < len(errs); i++ { + errs[i] = e.errs[i] + } + + return errs +} + +// An ErrInvalidParam represents an invalid parameter error type. +type ErrInvalidParam interface { + awserr.Error + + // Field name the error occurred on. + Field() string + + // SetContext updates the context of the error. + SetContext(string) + + // AddNestedContext updates the error's context to include a nested level. + AddNestedContext(string) +} + +type errInvalidParam struct { + context string + nestedContext string + field string + code string + msg string +} + +// Code returns the error code for the type of invalid parameter. +func (e *errInvalidParam) Code() string { + return e.code +} + +// Message returns the reason the parameter was invalid, and its context. +func (e *errInvalidParam) Message() string { + return fmt.Sprintf("%s, %s.", e.msg, e.Field()) +} + +// Error returns the string version of the invalid parameter error. +func (e *errInvalidParam) Error() string { + return fmt.Sprintf("%s: %s", e.code, e.Message()) +} + +// OrigErr returns nil, Implemented for awserr.Error interface. +func (e *errInvalidParam) OrigErr() error { + return nil +} + +// Field Returns the field and context the error occurred. +func (e *errInvalidParam) Field() string { + field := e.context + if len(field) > 0 { + field += "." + } + if len(e.nestedContext) > 0 { + field += fmt.Sprintf("%s.", e.nestedContext) + } + field += e.field + + return field +} + +// SetContext updates the base context of the error. +func (e *errInvalidParam) SetContext(ctx string) { + e.context = ctx +} + +// AddNestedContext prepends a context to the field's path. +func (e *errInvalidParam) AddNestedContext(ctx string) { + if len(e.nestedContext) == 0 { + e.nestedContext = ctx + } else { + e.nestedContext = fmt.Sprintf("%s.%s", ctx, e.nestedContext) + } + +} + +// An ErrParamRequired represents an required parameter error. +type ErrParamRequired struct { + errInvalidParam +} + +// NewErrParamRequired creates a new required parameter error. +func NewErrParamRequired(field string) *ErrParamRequired { + return &ErrParamRequired{ + errInvalidParam{ + code: ParamRequiredErrCode, + field: field, + msg: fmt.Sprintf("missing required field"), + }, + } +} + +// An ErrParamMinValue represents a minimum value parameter error. +type ErrParamMinValue struct { + errInvalidParam + min float64 +} + +// NewErrParamMinValue creates a new minimum value parameter error. +func NewErrParamMinValue(field string, min float64) *ErrParamMinValue { + return &ErrParamMinValue{ + errInvalidParam: errInvalidParam{ + code: ParamMinValueErrCode, + field: field, + msg: fmt.Sprintf("minimum field value of %v", min), + }, + min: min, + } +} + +// MinValue returns the field's require minimum value. +// +// float64 is returned for both int and float min values. +func (e *ErrParamMinValue) MinValue() float64 { + return e.min +} + +// An ErrParamMinLen represents a minimum length parameter error. 
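+//
+// For example (the field name and minimum are illustrative), a Validate
+// method would record a too-short string field with:
+//
+//	invalidParams.Add(request.NewErrParamMinLen("Name", 1))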
+type ErrParamMinLen struct { + errInvalidParam + min int +} + +// NewErrParamMinLen creates a new minimum length parameter error. +func NewErrParamMinLen(field string, min int) *ErrParamMinLen { + return &ErrParamMinLen{ + errInvalidParam: errInvalidParam{ + code: ParamMinLenErrCode, + field: field, + msg: fmt.Sprintf("minimum field size of %v", min), + }, + min: min, + } +} + +// MinLen returns the field's required minimum length. +func (e *ErrParamMinLen) MinLen() int { + return e.min +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/waiter.go b/vendor/github.com/aws/aws-sdk-go/aws/request/waiter.go new file mode 100644 index 00000000..4601f883 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/waiter.go @@ -0,0 +1,295 @@ +package request + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/awsutil" +) + +// WaiterResourceNotReadyErrorCode is the error code returned by a waiter when +// the waiter's max attempts have been exhausted. +const WaiterResourceNotReadyErrorCode = "ResourceNotReady" + +// A WaiterOption is a function that will update the Waiter value's fields to +// configure the waiter. +type WaiterOption func(*Waiter) + +// WithWaiterMaxAttempts returns the maximum number of times the waiter should +// attempt to check the resource for the target state. +func WithWaiterMaxAttempts(max int) WaiterOption { + return func(w *Waiter) { + w.MaxAttempts = max + } +} + +// WaiterDelay will return a delay the waiter should pause between attempts to +// check the resource state. The passed in attempt is the number of times the +// Waiter has checked the resource state. +// +// Attempt is the number of attempts the Waiter has made checking the resource +// state. +type WaiterDelay func(attempt int) time.Duration + +// ConstantWaiterDelay returns a WaiterDelay that will always return a constant +// delay the waiter should use between attempts. It ignores the number of +// attempts made. +func ConstantWaiterDelay(delay time.Duration) WaiterDelay { + return func(attempt int) time.Duration { + return delay + } +} + +// WithWaiterDelay will set the Waiter to use the WaiterDelay passed in. +func WithWaiterDelay(delayer WaiterDelay) WaiterOption { + return func(w *Waiter) { + w.Delay = delayer + } +} + +// WithWaiterLogger returns a waiter option to set the logger a waiter +// should use to log warnings and errors to. +func WithWaiterLogger(logger aws.Logger) WaiterOption { + return func(w *Waiter) { + w.Logger = logger + } +} + +// WithWaiterRequestOptions returns a waiter option setting the request +// options for each request the waiter makes. Appends to waiter's request +// options already set. +func WithWaiterRequestOptions(opts ...Option) WaiterOption { + return func(w *Waiter) { + w.RequestOptions = append(w.RequestOptions, opts...) + } +} + +// A Waiter provides the functionality to perform a blocking call which will +// wait for a resource state to be satisfied by a service. +// +// This type should not be used directly. The API operations provided in the +// service packages prefixed with "WaitUntil" should be used instead. +type Waiter struct { + Name string + Acceptors []WaiterAcceptor + Logger aws.Logger + + MaxAttempts int + Delay WaiterDelay + + RequestOptions []Option + NewRequest func([]Option) (*Request, error) + SleepWithContext func(aws.Context, time.Duration) error +} + +// ApplyOptions updates the waiter with the list of waiter options provided. 
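+//
+// A sketch of tuning a waiter before it runs (w would normally come from a
+// generated WaitUntil* method; the values below are arbitrary examples):
+//
+//	w.ApplyOptions(
+//		request.WithWaiterMaxAttempts(10),
+//		request.WithWaiterDelay(request.ConstantWaiterDelay(5*time.Second)),
+//	)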
+func (w *Waiter) ApplyOptions(opts ...WaiterOption) { + for _, fn := range opts { + fn(w) + } +} + +// WaiterState are states the waiter uses based on WaiterAcceptor definitions +// to identify if the resource state the waiter is waiting on has occurred. +type WaiterState int + +// String returns the string representation of the waiter state. +func (s WaiterState) String() string { + switch s { + case SuccessWaiterState: + return "success" + case FailureWaiterState: + return "failure" + case RetryWaiterState: + return "retry" + default: + return "unknown waiter state" + } +} + +// States the waiter acceptors will use to identify target resource states. +const ( + SuccessWaiterState WaiterState = iota // waiter successful + FailureWaiterState // waiter failed + RetryWaiterState // waiter needs to be retried +) + +// WaiterMatchMode is the mode that the waiter will use to match the WaiterAcceptor +// definition's Expected attribute. +type WaiterMatchMode int + +// Modes the waiter will use when inspecting API response to identify target +// resource states. +const ( + PathAllWaiterMatch WaiterMatchMode = iota // match on all paths + PathWaiterMatch // match on specific path + PathAnyWaiterMatch // match on any path + PathListWaiterMatch // match on list of paths + StatusWaiterMatch // match on status code + ErrorWaiterMatch // match on error +) + +// String returns the string representation of the waiter match mode. +func (m WaiterMatchMode) String() string { + switch m { + case PathAllWaiterMatch: + return "pathAll" + case PathWaiterMatch: + return "path" + case PathAnyWaiterMatch: + return "pathAny" + case PathListWaiterMatch: + return "pathList" + case StatusWaiterMatch: + return "status" + case ErrorWaiterMatch: + return "error" + default: + return "unknown waiter match mode" + } +} + +// WaitWithContext will make requests for the API operation using NewRequest to +// build API requests. The request's response will be compared against the +// Waiter's Acceptors to determine the successful state of the resource the +// waiter is inspecting. +// +// The passed in context must not be nil. If it is nil a panic will occur. The +// Context will be used to cancel the waiter's pending requests and retry delays. +// Use aws.BackgroundContext if no context is available. +// +// The waiter will continue until the target state defined by the Acceptors, +// or the max attempts expires. +// +// Will return the WaiterResourceNotReadyErrorCode error code if the waiter's +// retryer ShouldRetry returns false. This normally will happen when the max +// wait attempts expires. +func (w Waiter) WaitWithContext(ctx aws.Context) error { + + for attempt := 1; ; attempt++ { + req, err := w.NewRequest(w.RequestOptions) + if err != nil { + waiterLogf(w.Logger, "unable to create request %v", err) + return err + } + req.Handlers.Build.PushBack(MakeAddToUserAgentFreeFormHandler("Waiter")) + err = req.Send() + + // See if any of the acceptors match the request's response, or error + for _, a := range w.Acceptors { + if matched, matchErr := a.match(w.Name, w.Logger, req, err); matched { + return matchErr + } + } + + // The Waiter should only check the resource state MaxAttempts times + // This is here instead of in the for loop above to prevent delaying + // unnecessary when the waiter will not retry. 
+ if attempt == w.MaxAttempts { + break + } + + // Delay to wait before inspecting the resource again + delay := w.Delay(attempt) + if sleepFn := req.Config.SleepDelay; sleepFn != nil { + // Support SleepDelay for backwards compatibility and testing + sleepFn(delay) + } else { + sleepCtxFn := w.SleepWithContext + if sleepCtxFn == nil { + sleepCtxFn = aws.SleepWithContext + } + + if err := sleepCtxFn(ctx, delay); err != nil { + return awserr.New(CanceledErrorCode, "waiter context canceled", err) + } + } + } + + return awserr.New(WaiterResourceNotReadyErrorCode, "exceeded wait attempts", nil) +} + +// A WaiterAcceptor provides the information needed to wait for an API operation +// to complete. +type WaiterAcceptor struct { + State WaiterState + Matcher WaiterMatchMode + Argument string + Expected interface{} +} + +// match returns if the acceptor found a match with the passed in request +// or error. True is returned if the acceptor made a match, error is returned +// if there was an error attempting to perform the match. +func (a *WaiterAcceptor) match(name string, l aws.Logger, req *Request, err error) (bool, error) { + result := false + var vals []interface{} + + switch a.Matcher { + case PathAllWaiterMatch, PathWaiterMatch: + // Require all matches to be equal for result to match + vals, _ = awsutil.ValuesAtPath(req.Data, a.Argument) + if len(vals) == 0 { + break + } + result = true + for _, val := range vals { + if !awsutil.DeepEqual(val, a.Expected) { + result = false + break + } + } + case PathAnyWaiterMatch: + // Only a single match needs to equal for the result to match + vals, _ = awsutil.ValuesAtPath(req.Data, a.Argument) + for _, val := range vals { + if awsutil.DeepEqual(val, a.Expected) { + result = true + break + } + } + case PathListWaiterMatch: + // ignored matcher + case StatusWaiterMatch: + s := a.Expected.(int) + result = s == req.HTTPResponse.StatusCode + case ErrorWaiterMatch: + if aerr, ok := err.(awserr.Error); ok { + result = aerr.Code() == a.Expected.(string) + } + default: + waiterLogf(l, "WARNING: Waiter %s encountered unexpected matcher: %s", + name, a.Matcher) + } + + if !result { + // If there was no matching result found there is nothing more to do + // for this response, retry the request. + return false, nil + } + + switch a.State { + case SuccessWaiterState: + // waiter completed + return true, nil + case FailureWaiterState: + // Waiter failure state triggered + return true, awserr.New(WaiterResourceNotReadyErrorCode, + "failed waiting for successful resource state", err) + case RetryWaiterState: + // clear the error and retry the operation + return false, nil + default: + waiterLogf(l, "WARNING: Waiter %s encountered unexpected state: %s", + name, a.State) + return false, nil + } +} + +func waiterLogf(logger aws.Logger, msg string, args ...interface{}) { + if logger != nil { + logger.Log(fmt.Sprintf(msg, args...)) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go b/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go new file mode 100644 index 00000000..ea7b886f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go @@ -0,0 +1,273 @@ +/* +Package session provides configuration for the SDK's service clients. + +Sessions can be shared across all service clients that share the same base +configuration. The Session is built from the SDK's default configuration and +request handlers. 
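+
+For example, a single Session value can back any number of service clients
+(the s3 and sqs clients below are only illustrative):
+
+	sess := session.Must(session.NewSession())
+
+	s3Svc := s3.New(sess)
+	sqsSvc := sqs.New(sess)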
+ +Sessions should be cached when possible, because creating a new Session will +load all configuration values from the environment, and config files each time +the Session is created. Sharing the Session value across all of your service +clients will ensure the configuration is loaded the fewest number of times possible. + +Concurrency + +Sessions are safe to use concurrently as long as the Session is not being +modified. The SDK will not modify the Session once the Session has been created. +Creating service clients concurrently from a shared Session is safe. + +Sessions from Shared Config + +Sessions can be created using the method above that will only load the +additional config if the AWS_SDK_LOAD_CONFIG environment variable is set. +Alternatively you can explicitly create a Session with shared config enabled. +To do this you can use NewSessionWithOptions to configure how the Session will +be created. Using the NewSessionWithOptions with SharedConfigState set to +SharedConfigEnable will create the session as if the AWS_SDK_LOAD_CONFIG +environment variable was set. + +Creating Sessions + +When creating Sessions optional aws.Config values can be passed in that will +override the default, or loaded config values the Session is being created +with. This allows you to provide additional, or case based, configuration +as needed. + +By default NewSession will only load credentials from the shared credentials +file (~/.aws/credentials). If the AWS_SDK_LOAD_CONFIG environment variable is +set to a truthy value the Session will be created from the configuration +values from the shared config (~/.aws/config) and shared credentials +(~/.aws/credentials) files. See the section Sessions from Shared Config for +more information. + +Create a Session with the default config and request handlers. With credentials +region, and profile loaded from the environment and shared config automatically. +Requires the AWS_PROFILE to be set, or "default" is used. + + // Create Session + sess := session.Must(session.NewSession()) + + // Create a Session with a custom region + sess := session.Must(session.NewSession(&aws.Config{ + Region: aws.String("us-east-1"), + })) + + // Create a S3 client instance from a session + sess := session.Must(session.NewSession()) + + svc := s3.New(sess) + +Create Session With Option Overrides + +In addition to NewSession, Sessions can be created using NewSessionWithOptions. +This func allows you to control and override how the Session will be created +through code instead of being driven by environment variables only. + +Use NewSessionWithOptions when you want to provide the config profile, or +override the shared config state (AWS_SDK_LOAD_CONFIG). + + // Equivalent to session.NewSession() + sess := session.Must(session.NewSessionWithOptions(session.Options{ + // Options + })) + + // Specify profile to load for the session's config + sess := session.Must(session.NewSessionWithOptions(session.Options{ + Profile: "profile_name", + })) + + // Specify profile for config and region for requests + sess := session.Must(session.NewSessionWithOptions(session.Options{ + Config: aws.Config{Region: aws.String("us-east-1")}, + Profile: "profile_name", + })) + + // Force enable Shared Config support + sess := session.Must(session.NewSessionWithOptions(session.Options{ + SharedConfigState: session.SharedConfigEnable, + })) + +Adding Handlers + +You can add handlers to a session for processing HTTP requests. All service +clients that use the session inherit the handlers. 
For example, the following +handler logs every request and its payload made by a service client: + + // Create a session, and add additional handlers for all service + // clients created with the Session to inherit. Adds logging handler. + sess := session.Must(session.NewSession()) + + sess.Handlers.Send.PushFront(func(r *request.Request) { + // Log every request made and its payload + logger.Println("Request: %s/%s, Payload: %s", + r.ClientInfo.ServiceName, r.Operation, r.Params) + }) + +Deprecated "New" function + +The New session function has been deprecated because it does not provide good +way to return errors that occur when loading the configuration files and values. +Because of this, NewSession was created so errors can be retrieved when +creating a session fails. + +Shared Config Fields + +By default the SDK will only load the shared credentials file's (~/.aws/credentials) +credentials values, and all other config is provided by the environment variables, +SDK defaults, and user provided aws.Config values. + +If the AWS_SDK_LOAD_CONFIG environment variable is set, or SharedConfigEnable +option is used to create the Session the full shared config values will be +loaded. This includes credentials, region, and support for assume role. In +addition the Session will load its configuration from both the shared config +file (~/.aws/config) and shared credentials file (~/.aws/credentials). Both +files have the same format. + +If both config files are present the configuration from both files will be +read. The Session will be created from configuration values from the shared +credentials file (~/.aws/credentials) over those in the shared config file (~/.aws/config). + +Credentials are the values the SDK should use for authenticating requests with +AWS Services. They arfrom a configuration file will need to include both +aws_access_key_id and aws_secret_access_key must be provided together in the +same file to be considered valid. The values will be ignored if not a complete +group. aws_session_token is an optional field that can be provided if both of +the other two fields are also provided. + + aws_access_key_id = AKID + aws_secret_access_key = SECRET + aws_session_token = TOKEN + +Assume Role values allow you to configure the SDK to assume an IAM role using +a set of credentials provided in a config file via the source_profile field. +Both "role_arn" and "source_profile" are required. The SDK supports assuming +a role with MFA token if the session option AssumeRoleTokenProvider +is set. + + role_arn = arn:aws:iam:::role/ + source_profile = profile_with_creds + external_id = 1234 + mfa_serial = + role_session_name = session_name + +Region is the region the SDK should use for looking up AWS service endpoints +and signing requests. + + region = us-east-1 + +Assume Role with MFA token + +To create a session with support for assuming an IAM role with MFA set the +session option AssumeRoleTokenProvider to a function that will prompt for the +MFA token code when the SDK assumes the role and refreshes the role's credentials. +This allows you to configure the SDK via the shared config to assumea role +with MFA tokens. + +In order for the SDK to assume a role with MFA the SharedConfigState +session option must be set to SharedConfigEnable, or AWS_SDK_LOAD_CONFIG +environment variable set. + +The shared configuration instructs the SDK to assume an IAM role with MFA +when the mfa_serial configuration field is set in the shared config +(~/.aws/config) or shared credentials (~/.aws/credentials) file. 
+
+If mfa_serial is set in the configuration and the AssumeRoleTokenProvider
+session option is not set, an error will be returned when creating the session.
+
+	sess := session.Must(session.NewSessionWithOptions(session.Options{
+		AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
+	}))
+
+	// Create service client value configured for credentials
+	// from assumed role.
+	svc := s3.New(sess)
+
+To set up assume role outside of a session see the stscreds.AssumeRoleProvider
+documentation.
+
+Environment Variables
+
+When a Session is created several environment variables can be set to adjust
+how the SDK functions, and what configuration data it loads when creating
+Sessions. All environment values are optional, but some values like credentials
+require multiple of the values to be set or the partial values will be ignored.
+All environment variable values are strings unless otherwise noted.
+
+Environment configuration values. If set, both Access Key ID and Secret Access
+Key must be provided. Session Token can optionally also be provided, but it is
+not required.
+
+	# Access Key ID
+	AWS_ACCESS_KEY_ID=AKID
+	AWS_ACCESS_KEY=AKID # only read if AWS_ACCESS_KEY_ID is not set.
+
+	# Secret Access Key
+	AWS_SECRET_ACCESS_KEY=SECRET
+	AWS_SECRET_KEY=SECRET # only read if AWS_SECRET_ACCESS_KEY is not set.
+
+	# Session Token
+	AWS_SESSION_TOKEN=TOKEN
+
+Region value will instruct the SDK where to make service API requests to. If it
+is not provided in the environment the region must be provided before a service
+client request is made.
+
+	AWS_REGION=us-east-1
+
+	# AWS_DEFAULT_REGION is only read if AWS_SDK_LOAD_CONFIG is also set,
+	# and AWS_REGION is not also set.
+	AWS_DEFAULT_REGION=us-east-1
+
+Profile name the SDK should use when loading shared config from the
+configuration files. If not provided "default" will be used as the profile name.
+
+	AWS_PROFILE=my_profile
+
+	# AWS_DEFAULT_PROFILE is only read if AWS_SDK_LOAD_CONFIG is also set,
+	# and AWS_PROFILE is not also set.
+	AWS_DEFAULT_PROFILE=my_profile
+
+SDK load config instructs the SDK to load the shared config in addition to
+shared credentials. This also expands the configuration loaded so the shared
+credentials will have parity with the shared config file. This also enables
+Region and Profile support for the AWS_DEFAULT_REGION and AWS_DEFAULT_PROFILE
+env values as well.
+
+	AWS_SDK_LOAD_CONFIG=1
+
+Shared credentials file path can be set to instruct the SDK to use an alternative
+file for the shared credentials. If not set the file will be loaded from
+$HOME/.aws/credentials on Linux/Unix based systems, and
+%USERPROFILE%\.aws\credentials on Windows.
+
+	AWS_SHARED_CREDENTIALS_FILE=$HOME/my_shared_credentials
+
+Shared config file path can be set to instruct the SDK to use an alternative
+file for the shared config. If not set the file will be loaded from
+$HOME/.aws/config on Linux/Unix based systems, and
+%USERPROFILE%\.aws\config on Windows.
+
+	AWS_CONFIG_FILE=$HOME/my_shared_config
+
+Path to a custom Certificate Authority (CA) bundle PEM file that the SDK
+will use instead of the default system's root CA bundle. Use this only
+if you want to replace the CA bundle the SDK uses for TLS requests.
+
+	AWS_CA_BUNDLE=$HOME/my_custom_ca_bundle
+
+Enabling this option will attempt to merge the Transport into the SDK's HTTP
+client. If the client's Transport is not an http.Transport an error will be
+returned.
If the Transport's TLS config is set this option will cause the SDK +to overwrite the Transport's TLS config's RootCAs value. If the CA bundle file +contains multiple certificates all of them will be loaded. + +The Session option CustomCABundle is also available when creating sessions +to also enable this feature. CustomCABundle session option field has priority +over the AWS_CA_BUNDLE environment variable, and will be used if both are set. + +Setting a custom HTTPClient in the aws.Config options will override this setting. +To use this option and custom HTTP client, the HTTP client needs to be provided +when creating the session. Not the service client. +*/ +package session diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go new file mode 100644 index 00000000..12b45217 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go @@ -0,0 +1,199 @@ +package session + +import ( + "os" + "strconv" + + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/defaults" +) + +// EnvProviderName provides a name of the provider when config is loaded from environment. +const EnvProviderName = "EnvConfigCredentials" + +// envConfig is a collection of environment values the SDK will read +// setup config from. All environment values are optional. But some values +// such as credentials require multiple values to be complete or the values +// will be ignored. +type envConfig struct { + // Environment configuration values. If set both Access Key ID and Secret Access + // Key must be provided. Session Token and optionally also be provided, but is + // not required. + // + // # Access Key ID + // AWS_ACCESS_KEY_ID=AKID + // AWS_ACCESS_KEY=AKID # only read if AWS_ACCESS_KEY_ID is not set. + // + // # Secret Access Key + // AWS_SECRET_ACCESS_KEY=SECRET + // AWS_SECRET_KEY=SECRET=SECRET # only read if AWS_SECRET_ACCESS_KEY is not set. + // + // # Session Token + // AWS_SESSION_TOKEN=TOKEN + Creds credentials.Value + + // Region value will instruct the SDK where to make service API requests to. If is + // not provided in the environment the region must be provided before a service + // client request is made. + // + // AWS_REGION=us-east-1 + // + // # AWS_DEFAULT_REGION is only read if AWS_SDK_LOAD_CONFIG is also set, + // # and AWS_REGION is not also set. + // AWS_DEFAULT_REGION=us-east-1 + Region string + + // Profile name the SDK should load use when loading shared configuration from the + // shared configuration files. If not provided "default" will be used as the + // profile name. + // + // AWS_PROFILE=my_profile + // + // # AWS_DEFAULT_PROFILE is only read if AWS_SDK_LOAD_CONFIG is also set, + // # and AWS_PROFILE is not also set. + // AWS_DEFAULT_PROFILE=my_profile + Profile string + + // SDK load config instructs the SDK to load the shared config in addition to + // shared credentials. This also expands the configuration loaded from the shared + // credentials to have parity with the shared config file. This also enables + // Region and Profile support for the AWS_DEFAULT_REGION and AWS_DEFAULT_PROFILE + // env values as well. + // + // AWS_SDK_LOAD_CONFIG=1 + EnableSharedConfig bool + + // Shared credentials file path can be set to instruct the SDK to use an alternate + // file for the shared credentials. If not set the file will be loaded from + // $HOME/.aws/credentials on Linux/Unix based systems, and + // %USERPROFILE%\.aws\credentials on Windows. 
+ // + // AWS_SHARED_CREDENTIALS_FILE=$HOME/my_shared_credentials + SharedCredentialsFile string + + // Shared config file path can be set to instruct the SDK to use an alternate + // file for the shared config. If not set the file will be loaded from + // $HOME/.aws/config on Linux/Unix based systems, and + // %USERPROFILE%\.aws\config on Windows. + // + // AWS_CONFIG_FILE=$HOME/my_shared_config + SharedConfigFile string + + // Sets the path to a custom Credentials Authroity (CA) Bundle PEM file + // that the SDK will use instead of the system's root CA bundle. + // Only use this if you want to configure the SDK to use a custom set + // of CAs. + // + // Enabling this option will attempt to merge the Transport + // into the SDK's HTTP client. If the client's Transport is + // not a http.Transport an error will be returned. If the + // Transport's TLS config is set this option will cause the + // SDK to overwrite the Transport's TLS config's RootCAs value. + // + // Setting a custom HTTPClient in the aws.Config options will override this setting. + // To use this option and custom HTTP client, the HTTP client needs to be provided + // when creating the session. Not the service client. + // + // AWS_CA_BUNDLE=$HOME/my_custom_ca_bundle + CustomCABundle string +} + +var ( + credAccessEnvKey = []string{ + "AWS_ACCESS_KEY_ID", + "AWS_ACCESS_KEY", + } + credSecretEnvKey = []string{ + "AWS_SECRET_ACCESS_KEY", + "AWS_SECRET_KEY", + } + credSessionEnvKey = []string{ + "AWS_SESSION_TOKEN", + } + + regionEnvKeys = []string{ + "AWS_REGION", + "AWS_DEFAULT_REGION", // Only read if AWS_SDK_LOAD_CONFIG is also set + } + profileEnvKeys = []string{ + "AWS_PROFILE", + "AWS_DEFAULT_PROFILE", // Only read if AWS_SDK_LOAD_CONFIG is also set + } + sharedCredsFileEnvKey = []string{ + "AWS_SHARED_CREDENTIALS_FILE", + } + sharedConfigFileEnvKey = []string{ + "AWS_CONFIG_FILE", + } +) + +// loadEnvConfig retrieves the SDK's environment configuration. +// See `envConfig` for the values that will be retrieved. +// +// If the environment variable `AWS_SDK_LOAD_CONFIG` is set to a truthy value +// the shared SDK config will be loaded in addition to the SDK's specific +// configuration values. +func loadEnvConfig() envConfig { + enableSharedConfig, _ := strconv.ParseBool(os.Getenv("AWS_SDK_LOAD_CONFIG")) + return envConfigLoad(enableSharedConfig) +} + +// loadEnvSharedConfig retrieves the SDK's environment configuration, and the +// SDK shared config. See `envConfig` for the values that will be retrieved. +// +// Loads the shared configuration in addition to the SDK's specific configuration. +// This will load the same values as `loadEnvConfig` if the `AWS_SDK_LOAD_CONFIG` +// environment variable is set. 
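//
// Editorial sketch (not part of the vendored file): because setFromEnvVal
// stops at the first non-empty key and regionEnvKeys lists AWS_REGION before
// AWS_DEFAULT_REGION, an environment such as
//
//	AWS_SDK_LOAD_CONFIG=1 AWS_REGION=us-west-2 AWS_DEFAULT_REGION=eu-west-1
//
// resolves Region to "us-west-2"; AWS_DEFAULT_REGION is only consulted when
// AWS_REGION is unset and shared config loading is enabled.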
+func loadSharedEnvConfig() envConfig { + return envConfigLoad(true) +} + +func envConfigLoad(enableSharedConfig bool) envConfig { + cfg := envConfig{} + + cfg.EnableSharedConfig = enableSharedConfig + + setFromEnvVal(&cfg.Creds.AccessKeyID, credAccessEnvKey) + setFromEnvVal(&cfg.Creds.SecretAccessKey, credSecretEnvKey) + setFromEnvVal(&cfg.Creds.SessionToken, credSessionEnvKey) + + // Require logical grouping of credentials + if len(cfg.Creds.AccessKeyID) == 0 || len(cfg.Creds.SecretAccessKey) == 0 { + cfg.Creds = credentials.Value{} + } else { + cfg.Creds.ProviderName = EnvProviderName + } + + regionKeys := regionEnvKeys + profileKeys := profileEnvKeys + if !cfg.EnableSharedConfig { + regionKeys = regionKeys[:1] + profileKeys = profileKeys[:1] + } + + setFromEnvVal(&cfg.Region, regionKeys) + setFromEnvVal(&cfg.Profile, profileKeys) + + setFromEnvVal(&cfg.SharedCredentialsFile, sharedCredsFileEnvKey) + setFromEnvVal(&cfg.SharedConfigFile, sharedConfigFileEnvKey) + + if len(cfg.SharedCredentialsFile) == 0 { + cfg.SharedCredentialsFile = defaults.SharedCredentialsFilename() + } + if len(cfg.SharedConfigFile) == 0 { + cfg.SharedConfigFile = defaults.SharedConfigFilename() + } + + cfg.CustomCABundle = os.Getenv("AWS_CA_BUNDLE") + + return cfg +} + +func setFromEnvVal(dst *string, keys []string) { + for _, k := range keys { + if v := os.Getenv(k); len(v) > 0 { + *dst = v + break + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go new file mode 100644 index 00000000..259b5c0f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go @@ -0,0 +1,606 @@ +package session + +import ( + "crypto/tls" + "crypto/x509" + "fmt" + "io" + "io/ioutil" + "net/http" + "os" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/corehandlers" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/stscreds" + "github.com/aws/aws-sdk-go/aws/defaults" + "github.com/aws/aws-sdk-go/aws/endpoints" + "github.com/aws/aws-sdk-go/aws/request" +) + +// A Session provides a central location to create service clients from and +// store configurations and request handlers for those services. +// +// Sessions are safe to create service clients concurrently, but it is not safe +// to mutate the Session concurrently. +// +// The Session satisfies the service client's client.ConfigProvider. +type Session struct { + Config *aws.Config + Handlers request.Handlers +} + +// New creates a new instance of the handlers merging in the provided configs +// on top of the SDK's default configurations. Once the Session is created it +// can be mutated to modify the Config or Handlers. The Session is safe to be +// read concurrently, but it should not be written to concurrently. +// +// If the AWS_SDK_LOAD_CONFIG environment is set to a truthy value, the New +// method could now encounter an error when loading the configuration. When +// The environment variable is set, and an error occurs, New will return a +// session that will fail all requests reporting the error that occurred while +// loading the session. Use NewSession to get the error when creating the +// session. +// +// If the AWS_SDK_LOAD_CONFIG environment variable is set to a truthy value +// the shared config file (~/.aws/config) will also be loaded, in addition to +// the shared credentials file (~/.aws/credentials). 
Values set in both the +// shared config, and shared credentials will be taken from the shared +// credentials file. +// +// Deprecated: Use NewSession functions to create sessions instead. NewSession +// has the same functionality as New except an error can be returned when the +// func is called instead of waiting to receive an error until a request is made. +func New(cfgs ...*aws.Config) *Session { + // load initial config from environment + envCfg := loadEnvConfig() + + if envCfg.EnableSharedConfig { + var cfg aws.Config + cfg.MergeIn(cfgs...) + s, err := NewSessionWithOptions(Options{ + Config: cfg, + SharedConfigState: SharedConfigEnable, + }) + if err != nil { + // Old session.New expected all errors to be discovered when + // a request is made, and would report the errors then. This + // needs to be replicated if an error occurs while creating + // the session. + msg := "failed to create session with AWS_SDK_LOAD_CONFIG enabled. " + + "Use session.NewSession to handle errors occurring during session creation." + + // Session creation failed, need to report the error and prevent + // any requests from succeeding. + s = &Session{Config: defaults.Config()} + s.Config.MergeIn(cfgs...) + s.Config.Logger.Log("ERROR:", msg, "Error:", err) + s.Handlers.Validate.PushBack(func(r *request.Request) { + r.Error = err + }) + } + return s + } + + return deprecatedNewSession(cfgs...) +} + +// NewSession returns a new Session created from SDK defaults, config files, +// environment, and user provided config files. Once the Session is created +// it can be mutated to modify the Config or Handlers. The Session is safe to +// be read concurrently, but it should not be written to concurrently. +// +// If the AWS_SDK_LOAD_CONFIG environment variable is set to a truthy value +// the shared config file (~/.aws/config) will also be loaded in addition to +// the shared credentials file (~/.aws/credentials). Values set in both the +// shared config, and shared credentials will be taken from the shared +// credentials file. Enabling the Shared Config will also allow the Session +// to be built with retrieving credentials with AssumeRole set in the config. +// +// See the NewSessionWithOptions func for information on how to override or +// control through code how the Session will be created. Such as specifying the +// config profile, and controlling if shared config is enabled or not. +func NewSession(cfgs ...*aws.Config) (*Session, error) { + opts := Options{} + opts.Config.MergeIn(cfgs...) + + return NewSessionWithOptions(opts) +} + +// SharedConfigState provides the ability to optionally override the state +// of the session's creation based on the shared config being enabled or +// disabled. +type SharedConfigState int + +const ( + // SharedConfigStateFromEnv does not override any state of the + // AWS_SDK_LOAD_CONFIG env var. It is the default value of the + // SharedConfigState type. + SharedConfigStateFromEnv SharedConfigState = iota + + // SharedConfigDisable overrides the AWS_SDK_LOAD_CONFIG env var value + // and disables the shared config functionality. + SharedConfigDisable + + // SharedConfigEnable overrides the AWS_SDK_LOAD_CONFIG env var value + // and enables the shared config functionality. + SharedConfigEnable +) + +// Options provides the means to control how a Session is created and what +// configuration values will be loaded. +// +type Options struct { + // Provides config values for the SDK to use when creating service clients + // and making API requests to services. 
Any value set in with this field + // will override the associated value provided by the SDK defaults, + // environment or config files where relevant. + // + // If not set, configuration values from from SDK defaults, environment, + // config will be used. + Config aws.Config + + // Overrides the config profile the Session should be created from. If not + // set the value of the environment variable will be loaded (AWS_PROFILE, + // or AWS_DEFAULT_PROFILE if the Shared Config is enabled). + // + // If not set and environment variables are not set the "default" + // (DefaultSharedConfigProfile) will be used as the profile to load the + // session config from. + Profile string + + // Instructs how the Session will be created based on the AWS_SDK_LOAD_CONFIG + // environment variable. By default a Session will be created using the + // value provided by the AWS_SDK_LOAD_CONFIG environment variable. + // + // Setting this value to SharedConfigEnable or SharedConfigDisable + // will allow you to override the AWS_SDK_LOAD_CONFIG environment variable + // and enable or disable the shared config functionality. + SharedConfigState SharedConfigState + + // Ordered list of files the session will load configuration from. + // It will override environment variable AWS_SHARED_CREDENTIALS_FILE, AWS_CONFIG_FILE. + SharedConfigFiles []string + + // When the SDK's shared config is configured to assume a role with MFA + // this option is required in order to provide the mechanism that will + // retrieve the MFA token. There is no default value for this field. If + // it is not set an error will be returned when creating the session. + // + // This token provider will be called when ever the assumed role's + // credentials need to be refreshed. Within the context of service clients + // all sharing the same session the SDK will ensure calls to the token + // provider are atomic. When sharing a token provider across multiple + // sessions additional synchronization logic is needed to ensure the + // token providers do not introduce race conditions. It is recommend to + // share the session where possible. + // + // stscreds.StdinTokenProvider is a basic implementation that will prompt + // from stdin for the MFA token code. + // + // This field is only used if the shared configuration is enabled, and + // the config enables assume role wit MFA via the mfa_serial field. + AssumeRoleTokenProvider func() (string, error) + + // Reader for a custom Credentials Authority (CA) bundle in PEM format that + // the SDK will use instead of the default system's root CA bundle. Use this + // only if you want to replace the CA bundle the SDK uses for TLS requests. + // + // Enabling this option will attempt to merge the Transport into the SDK's HTTP + // client. If the client's Transport is not a http.Transport an error will be + // returned. If the Transport's TLS config is set this option will cause the SDK + // to overwrite the Transport's TLS config's RootCAs value. If the CA + // bundle reader contains multiple certificates all of them will be loaded. + // + // The Session option CustomCABundle is also available when creating sessions + // to also enable this feature. CustomCABundle session option field has priority + // over the AWS_CA_BUNDLE environment variable, and will be used if both are set. + CustomCABundle io.Reader +} + +// NewSessionWithOptions returns a new Session created from SDK defaults, config files, +// environment, and user provided config files. 
This func uses the Options +// values to configure how the Session is created. +// +// If the AWS_SDK_LOAD_CONFIG environment variable is set to a truthy value +// the shared config file (~/.aws/config) will also be loaded in addition to +// the shared credentials file (~/.aws/credentials). Values set in both the +// shared config, and shared credentials will be taken from the shared +// credentials file. Enabling the Shared Config will also allow the Session +// to be built with retrieving credentials with AssumeRole set in the config. +// +// // Equivalent to session.New +// sess := session.Must(session.NewSessionWithOptions(session.Options{})) +// +// // Specify profile to load for the session's config +// sess := session.Must(session.NewSessionWithOptions(session.Options{ +// Profile: "profile_name", +// })) +// +// // Specify profile for config and region for requests +// sess := session.Must(session.NewSessionWithOptions(session.Options{ +// Config: aws.Config{Region: aws.String("us-east-1")}, +// Profile: "profile_name", +// })) +// +// // Force enable Shared Config support +// sess := session.Must(session.NewSessionWithOptions(session.Options{ +// SharedConfigState: session.SharedConfigEnable, +// })) +func NewSessionWithOptions(opts Options) (*Session, error) { + var envCfg envConfig + if opts.SharedConfigState == SharedConfigEnable { + envCfg = loadSharedEnvConfig() + } else { + envCfg = loadEnvConfig() + } + + if len(opts.Profile) > 0 { + envCfg.Profile = opts.Profile + } + + switch opts.SharedConfigState { + case SharedConfigDisable: + envCfg.EnableSharedConfig = false + case SharedConfigEnable: + envCfg.EnableSharedConfig = true + } + + // Only use AWS_CA_BUNDLE if session option is not provided. + if len(envCfg.CustomCABundle) != 0 && opts.CustomCABundle == nil { + f, err := os.Open(envCfg.CustomCABundle) + if err != nil { + return nil, awserr.New("LoadCustomCABundleError", + "failed to open custom CA bundle PEM file", err) + } + defer f.Close() + opts.CustomCABundle = f + } + + return newSession(opts, envCfg, &opts.Config) +} + +// Must is a helper function to ensure the Session is valid and there was no +// error when calling a NewSession function. +// +// This helper is intended to be used in variable initialization to load the +// Session and configuration at startup. Such as: +// +// var sess = session.Must(session.NewSession()) +func Must(sess *Session, err error) *Session { + if err != nil { + panic(err) + } + + return sess +} + +func deprecatedNewSession(cfgs ...*aws.Config) *Session { + cfg := defaults.Config() + handlers := defaults.Handlers() + + // Apply the passed in configs so the configuration can be applied to the + // default credential chain + cfg.MergeIn(cfgs...) + if cfg.EndpointResolver == nil { + // An endpoint resolver is required for a session to be able to provide + // endpoints for service client configurations. + cfg.EndpointResolver = endpoints.DefaultResolver() + } + cfg.Credentials = defaults.CredChain(cfg, handlers) + + // Reapply any passed in configs to override credentials if set + cfg.MergeIn(cfgs...) + + s := &Session{ + Config: cfg, + Handlers: handlers, + } + + initHandlers(s) + + return s +} + +func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session, error) { + cfg := defaults.Config() + handlers := defaults.Handlers() + + // Get a merged version of the user provided config to determine if + // credentials were. + userCfg := &aws.Config{} + userCfg.MergeIn(cfgs...) 
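	// (Editorial note, not part of the vendored file) The caller's configs are
	// merged into userCfg up front so that mergeConfigSrcs below can tell
	// whether credentials and region were set explicitly by the caller before
	// falling back to the environment and the shared config files.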
+ + // Ordered config files will be loaded in with later files overwriting + // previous config file values. + var cfgFiles []string + if opts.SharedConfigFiles != nil { + cfgFiles = opts.SharedConfigFiles + } else { + cfgFiles = []string{envCfg.SharedConfigFile, envCfg.SharedCredentialsFile} + if !envCfg.EnableSharedConfig { + // The shared config file (~/.aws/config) is only loaded if instructed + // to load via the envConfig.EnableSharedConfig (AWS_SDK_LOAD_CONFIG). + cfgFiles = cfgFiles[1:] + } + } + + // Load additional config from file(s) + sharedCfg, err := loadSharedConfig(envCfg.Profile, cfgFiles) + if err != nil { + return nil, err + } + + if err := mergeConfigSrcs(cfg, userCfg, envCfg, sharedCfg, handlers, opts); err != nil { + return nil, err + } + + s := &Session{ + Config: cfg, + Handlers: handlers, + } + + initHandlers(s) + + // Setup HTTP client with custom cert bundle if enabled + if opts.CustomCABundle != nil { + if err := loadCustomCABundle(s, opts.CustomCABundle); err != nil { + return nil, err + } + } + + return s, nil +} + +func loadCustomCABundle(s *Session, bundle io.Reader) error { + var t *http.Transport + switch v := s.Config.HTTPClient.Transport.(type) { + case *http.Transport: + t = v + default: + if s.Config.HTTPClient.Transport != nil { + return awserr.New("LoadCustomCABundleError", + "unable to load custom CA bundle, HTTPClient's transport unsupported type", nil) + } + } + if t == nil { + t = &http.Transport{} + } + + p, err := loadCertPool(bundle) + if err != nil { + return err + } + if t.TLSClientConfig == nil { + t.TLSClientConfig = &tls.Config{} + } + t.TLSClientConfig.RootCAs = p + + s.Config.HTTPClient.Transport = t + + return nil +} + +func loadCertPool(r io.Reader) (*x509.CertPool, error) { + b, err := ioutil.ReadAll(r) + if err != nil { + return nil, awserr.New("LoadCustomCABundleError", + "failed to read custom CA bundle PEM file", err) + } + + p := x509.NewCertPool() + if !p.AppendCertsFromPEM(b) { + return nil, awserr.New("LoadCustomCABundleError", + "failed to load custom CA bundle PEM file", err) + } + + return p, nil +} + +func mergeConfigSrcs(cfg, userCfg *aws.Config, envCfg envConfig, sharedCfg sharedConfig, handlers request.Handlers, sessOpts Options) error { + // Merge in user provided configuration + cfg.MergeIn(userCfg) + + // Region if not already set by user + if len(aws.StringValue(cfg.Region)) == 0 { + if len(envCfg.Region) > 0 { + cfg.WithRegion(envCfg.Region) + } else if envCfg.EnableSharedConfig && len(sharedCfg.Region) > 0 { + cfg.WithRegion(sharedCfg.Region) + } + } + + // Configure credentials if not already set + if cfg.Credentials == credentials.AnonymousCredentials && userCfg.Credentials == nil { + if len(envCfg.Creds.AccessKeyID) > 0 { + cfg.Credentials = credentials.NewStaticCredentialsFromCreds( + envCfg.Creds, + ) + } else if envCfg.EnableSharedConfig && len(sharedCfg.AssumeRole.RoleARN) > 0 && sharedCfg.AssumeRoleSource != nil { + cfgCp := *cfg + cfgCp.Credentials = credentials.NewStaticCredentialsFromCreds( + sharedCfg.AssumeRoleSource.Creds, + ) + if len(sharedCfg.AssumeRole.MFASerial) > 0 && sessOpts.AssumeRoleTokenProvider == nil { + // AssumeRole Token provider is required if doing Assume Role + // with MFA. 
+ return AssumeRoleTokenProviderNotSetError{} + } + cfg.Credentials = stscreds.NewCredentials( + &Session{ + Config: &cfgCp, + Handlers: handlers.Copy(), + }, + sharedCfg.AssumeRole.RoleARN, + func(opt *stscreds.AssumeRoleProvider) { + opt.RoleSessionName = sharedCfg.AssumeRole.RoleSessionName + + // Assume role with external ID + if len(sharedCfg.AssumeRole.ExternalID) > 0 { + opt.ExternalID = aws.String(sharedCfg.AssumeRole.ExternalID) + } + + // Assume role with MFA + if len(sharedCfg.AssumeRole.MFASerial) > 0 { + opt.SerialNumber = aws.String(sharedCfg.AssumeRole.MFASerial) + opt.TokenProvider = sessOpts.AssumeRoleTokenProvider + } + }, + ) + } else if len(sharedCfg.Creds.AccessKeyID) > 0 { + cfg.Credentials = credentials.NewStaticCredentialsFromCreds( + sharedCfg.Creds, + ) + } else { + // Fallback to default credentials provider, include mock errors + // for the credential chain so user can identify why credentials + // failed to be retrieved. + cfg.Credentials = credentials.NewCredentials(&credentials.ChainProvider{ + VerboseErrors: aws.BoolValue(cfg.CredentialsChainVerboseErrors), + Providers: []credentials.Provider{ + &credProviderError{Err: awserr.New("EnvAccessKeyNotFound", "failed to find credentials in the environment.", nil)}, + &credProviderError{Err: awserr.New("SharedCredsLoad", fmt.Sprintf("failed to load profile, %s.", envCfg.Profile), nil)}, + defaults.RemoteCredProvider(*cfg, handlers), + }, + }) + } + } + + return nil +} + +// AssumeRoleTokenProviderNotSetError is an error returned when creating a session when the +// MFAToken option is not set when shared config is configured load assume a +// role with an MFA token. +type AssumeRoleTokenProviderNotSetError struct{} + +// Code is the short id of the error. +func (e AssumeRoleTokenProviderNotSetError) Code() string { + return "AssumeRoleTokenProviderNotSetError" +} + +// Message is the description of the error +func (e AssumeRoleTokenProviderNotSetError) Message() string { + return fmt.Sprintf("assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.") +} + +// OrigErr is the underlying error that caused the failure. +func (e AssumeRoleTokenProviderNotSetError) OrigErr() error { + return nil +} + +// Error satisfies the error interface. +func (e AssumeRoleTokenProviderNotSetError) Error() string { + return awserr.SprintError(e.Code(), e.Message(), "", nil) +} + +type credProviderError struct { + Err error +} + +var emptyCreds = credentials.Value{} + +func (c credProviderError) Retrieve() (credentials.Value, error) { + return credentials.Value{}, c.Err +} +func (c credProviderError) IsExpired() bool { + return true +} + +func initHandlers(s *Session) { + // Add the Validate parameter handler if it is not disabled. + s.Handlers.Validate.Remove(corehandlers.ValidateParametersHandler) + if !aws.BoolValue(s.Config.DisableParamValidation) { + s.Handlers.Validate.PushBackNamed(corehandlers.ValidateParametersHandler) + } +} + +// Copy creates and returns a copy of the current Session, coping the config +// and handlers. If any additional configs are provided they will be merged +// on top of the Session's copied config. +// +// // Create a copy of the current Session, configured for the us-west-2 region. 
+// sess.Copy(&aws.Config{Region: aws.String("us-west-2")}) +func (s *Session) Copy(cfgs ...*aws.Config) *Session { + newSession := &Session{ + Config: s.Config.Copy(cfgs...), + Handlers: s.Handlers.Copy(), + } + + initHandlers(newSession) + + return newSession +} + +// ClientConfig satisfies the client.ConfigProvider interface and is used to +// configure the service client instances. Passing the Session to the service +// client's constructor (New) will use this method to configure the client. +func (s *Session) ClientConfig(serviceName string, cfgs ...*aws.Config) client.Config { + // Backwards compatibility, the error will be eaten if user calls ClientConfig + // directly. All SDK services will use ClientconfigWithError. + cfg, _ := s.clientConfigWithErr(serviceName, cfgs...) + + return cfg +} + +func (s *Session) clientConfigWithErr(serviceName string, cfgs ...*aws.Config) (client.Config, error) { + s = s.Copy(cfgs...) + + var resolved endpoints.ResolvedEndpoint + var err error + + region := aws.StringValue(s.Config.Region) + + if endpoint := aws.StringValue(s.Config.Endpoint); len(endpoint) != 0 { + resolved.URL = endpoints.AddScheme(endpoint, aws.BoolValue(s.Config.DisableSSL)) + resolved.SigningRegion = region + } else { + resolved, err = s.Config.EndpointResolver.EndpointFor( + serviceName, region, + func(opt *endpoints.Options) { + opt.DisableSSL = aws.BoolValue(s.Config.DisableSSL) + opt.UseDualStack = aws.BoolValue(s.Config.UseDualStack) + + // Support the condition where the service is modeled but its + // endpoint metadata is not available. + opt.ResolveUnknownService = true + }, + ) + } + + return client.Config{ + Config: s.Config, + Handlers: s.Handlers, + Endpoint: resolved.URL, + SigningRegion: resolved.SigningRegion, + SigningNameDerived: resolved.SigningNameDerived, + SigningName: resolved.SigningName, + }, err +} + +// ClientConfigNoResolveEndpoint is the same as ClientConfig with the exception +// that the EndpointResolver will not be used to resolve the endpoint. The only +// endpoint set must come from the aws.Config.Endpoint field. +func (s *Session) ClientConfigNoResolveEndpoint(cfgs ...*aws.Config) client.Config { + s = s.Copy(cfgs...) 
+ + var resolved endpoints.ResolvedEndpoint + + region := aws.StringValue(s.Config.Region) + + if ep := aws.StringValue(s.Config.Endpoint); len(ep) > 0 { + resolved.URL = endpoints.AddScheme(ep, aws.BoolValue(s.Config.DisableSSL)) + resolved.SigningRegion = region + } + + return client.Config{ + Config: s.Config, + Handlers: s.Handlers, + Endpoint: resolved.URL, + SigningRegion: resolved.SigningRegion, + SigningNameDerived: resolved.SigningNameDerived, + SigningName: resolved.SigningName, + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go new file mode 100644 index 00000000..09c8e5bc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go @@ -0,0 +1,295 @@ +package session + +import ( + "fmt" + "io/ioutil" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/go-ini/ini" +) + +const ( + // Static Credentials group + accessKeyIDKey = `aws_access_key_id` // group required + secretAccessKey = `aws_secret_access_key` // group required + sessionTokenKey = `aws_session_token` // optional + + // Assume Role Credentials group + roleArnKey = `role_arn` // group required + sourceProfileKey = `source_profile` // group required + externalIDKey = `external_id` // optional + mfaSerialKey = `mfa_serial` // optional + roleSessionNameKey = `role_session_name` // optional + + // Additional Config fields + regionKey = `region` + + // DefaultSharedConfigProfile is the default profile to be used when + // loading configuration from the config files if another profile name + // is not provided. + DefaultSharedConfigProfile = `default` +) + +type assumeRoleConfig struct { + RoleARN string + SourceProfile string + ExternalID string + MFASerial string + RoleSessionName string +} + +// sharedConfig represents the configuration fields of the SDK config files. +type sharedConfig struct { + // Credentials values from the config file. Both aws_access_key_id + // and aws_secret_access_key must be provided together in the same file + // to be considered valid. The values will be ignored if not a complete group. + // aws_session_token is an optional field that can be provided if both of the + // other two fields are also provided. + // + // aws_access_key_id + // aws_secret_access_key + // aws_session_token + Creds credentials.Value + + AssumeRole assumeRoleConfig + AssumeRoleSource *sharedConfig + + // Region is the region the SDK should use for looking up AWS service endpoints + // and signing requests. + // + // region + Region string +} + +type sharedConfigFile struct { + Filename string + IniData *ini.File +} + +// loadSharedConfig retrieves the configuration from the list of files +// using the profile provided. The order the files are listed will determine +// precedence. Values in subsequent files will overwrite values defined in +// earlier files. +// +// For example, given two files A and B. Both define credentials. If the order +// of the files are A then B, B's credential values will be used instead of A's. +// +// See sharedConfig.setFromFile for information how the config files +// will be loaded. 
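//
// Editorial sketch (not part of the vendored file) of how the session package
// invokes this loader, with the credentials file listed last so that its
// values take precedence:
//
//	cfg, err := loadSharedConfig("default", []string{
//		"/home/user/.aws/config",      // hypothetical path
//		"/home/user/.aws/credentials", // hypothetical path
//	})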
+func loadSharedConfig(profile string, filenames []string) (sharedConfig, error) { + if len(profile) == 0 { + profile = DefaultSharedConfigProfile + } + + files, err := loadSharedConfigIniFiles(filenames) + if err != nil { + return sharedConfig{}, err + } + + cfg := sharedConfig{} + if err = cfg.setFromIniFiles(profile, files); err != nil { + return sharedConfig{}, err + } + + if len(cfg.AssumeRole.SourceProfile) > 0 { + if err := cfg.setAssumeRoleSource(profile, files); err != nil { + return sharedConfig{}, err + } + } + + return cfg, nil +} + +func loadSharedConfigIniFiles(filenames []string) ([]sharedConfigFile, error) { + files := make([]sharedConfigFile, 0, len(filenames)) + + for _, filename := range filenames { + b, err := ioutil.ReadFile(filename) + if err != nil { + // Skip files which can't be opened and read for whatever reason + continue + } + + f, err := ini.Load(b) + if err != nil { + return nil, SharedConfigLoadError{Filename: filename, Err: err} + } + + files = append(files, sharedConfigFile{ + Filename: filename, IniData: f, + }) + } + + return files, nil +} + +func (cfg *sharedConfig) setAssumeRoleSource(origProfile string, files []sharedConfigFile) error { + var assumeRoleSrc sharedConfig + + // Multiple level assume role chains are not support + if cfg.AssumeRole.SourceProfile == origProfile { + assumeRoleSrc = *cfg + assumeRoleSrc.AssumeRole = assumeRoleConfig{} + } else { + err := assumeRoleSrc.setFromIniFiles(cfg.AssumeRole.SourceProfile, files) + if err != nil { + return err + } + } + + if len(assumeRoleSrc.Creds.AccessKeyID) == 0 { + return SharedConfigAssumeRoleError{RoleARN: cfg.AssumeRole.RoleARN} + } + + cfg.AssumeRoleSource = &assumeRoleSrc + + return nil +} + +func (cfg *sharedConfig) setFromIniFiles(profile string, files []sharedConfigFile) error { + // Trim files from the list that don't exist. + for _, f := range files { + if err := cfg.setFromIniFile(profile, f); err != nil { + if _, ok := err.(SharedConfigProfileNotExistsError); ok { + // Ignore proviles missings + continue + } + return err + } + } + + return nil +} + +// setFromFile loads the configuration from the file using +// the profile provided. A sharedConfig pointer type value is used so that +// multiple config file loadings can be chained. +// +// Only loads complete logically grouped values, and will not set fields in cfg +// for incomplete grouped values in the config. Such as credentials. For example +// if a config file only includes aws_access_key_id but no aws_secret_access_key +// the aws_access_key_id will be ignored. 
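//
// Editorial illustration (hypothetical profile name): a profile named "dev"
// is looked up first as a bare "[dev]" section and then, if that is missing,
// as the "[profile dev]" form used by the shared config file:
//
//	[profile dev]
//	region = us-east-1
//	role_arn = arn:aws:iam::123456789012:role/example
//	source_profile = dev-base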
+func (cfg *sharedConfig) setFromIniFile(profile string, file sharedConfigFile) error { + section, err := file.IniData.GetSection(profile) + if err != nil { + // Fallback to to alternate profile name: profile + section, err = file.IniData.GetSection(fmt.Sprintf("profile %s", profile)) + if err != nil { + return SharedConfigProfileNotExistsError{Profile: profile, Err: err} + } + } + + // Shared Credentials + akid := section.Key(accessKeyIDKey).String() + secret := section.Key(secretAccessKey).String() + if len(akid) > 0 && len(secret) > 0 { + cfg.Creds = credentials.Value{ + AccessKeyID: akid, + SecretAccessKey: secret, + SessionToken: section.Key(sessionTokenKey).String(), + ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", file.Filename), + } + } + + // Assume Role + roleArn := section.Key(roleArnKey).String() + srcProfile := section.Key(sourceProfileKey).String() + if len(roleArn) > 0 && len(srcProfile) > 0 { + cfg.AssumeRole = assumeRoleConfig{ + RoleARN: roleArn, + SourceProfile: srcProfile, + ExternalID: section.Key(externalIDKey).String(), + MFASerial: section.Key(mfaSerialKey).String(), + RoleSessionName: section.Key(roleSessionNameKey).String(), + } + } + + // Region + if v := section.Key(regionKey).String(); len(v) > 0 { + cfg.Region = v + } + + return nil +} + +// SharedConfigLoadError is an error for the shared config file failed to load. +type SharedConfigLoadError struct { + Filename string + Err error +} + +// Code is the short id of the error. +func (e SharedConfigLoadError) Code() string { + return "SharedConfigLoadError" +} + +// Message is the description of the error +func (e SharedConfigLoadError) Message() string { + return fmt.Sprintf("failed to load config file, %s", e.Filename) +} + +// OrigErr is the underlying error that caused the failure. +func (e SharedConfigLoadError) OrigErr() error { + return e.Err +} + +// Error satisfies the error interface. +func (e SharedConfigLoadError) Error() string { + return awserr.SprintError(e.Code(), e.Message(), "", e.Err) +} + +// SharedConfigProfileNotExistsError is an error for the shared config when +// the profile was not find in the config file. +type SharedConfigProfileNotExistsError struct { + Profile string + Err error +} + +// Code is the short id of the error. +func (e SharedConfigProfileNotExistsError) Code() string { + return "SharedConfigProfileNotExistsError" +} + +// Message is the description of the error +func (e SharedConfigProfileNotExistsError) Message() string { + return fmt.Sprintf("failed to get profile, %s", e.Profile) +} + +// OrigErr is the underlying error that caused the failure. +func (e SharedConfigProfileNotExistsError) OrigErr() error { + return e.Err +} + +// Error satisfies the error interface. +func (e SharedConfigProfileNotExistsError) Error() string { + return awserr.SprintError(e.Code(), e.Message(), "", e.Err) +} + +// SharedConfigAssumeRoleError is an error for the shared config when the +// profile contains assume role information, but that information is invalid +// or not complete. +type SharedConfigAssumeRoleError struct { + RoleARN string +} + +// Code is the short id of the error. +func (e SharedConfigAssumeRoleError) Code() string { + return "SharedConfigAssumeRoleError" +} + +// Message is the description of the error +func (e SharedConfigAssumeRoleError) Message() string { + return fmt.Sprintf("failed to load assume role for %s, source profile has no shared credentials", + e.RoleARN) +} + +// OrigErr is the underlying error that caused the failure. 
+func (e SharedConfigAssumeRoleError) OrigErr() error { + return nil +} + +// Error satisfies the error interface. +func (e SharedConfigAssumeRoleError) Error() string { + return awserr.SprintError(e.Code(), e.Message(), "", nil) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/header_rules.go b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/header_rules.go new file mode 100644 index 00000000..244c86da --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/header_rules.go @@ -0,0 +1,82 @@ +package v4 + +import ( + "net/http" + "strings" +) + +// validator houses a set of rule needed for validation of a +// string value +type rules []rule + +// rule interface allows for more flexible rules and just simply +// checks whether or not a value adheres to that rule +type rule interface { + IsValid(value string) bool +} + +// IsValid will iterate through all rules and see if any rules +// apply to the value and supports nested rules +func (r rules) IsValid(value string) bool { + for _, rule := range r { + if rule.IsValid(value) { + return true + } + } + return false +} + +// mapRule generic rule for maps +type mapRule map[string]struct{} + +// IsValid for the map rule satisfies whether it exists in the map +func (m mapRule) IsValid(value string) bool { + _, ok := m[value] + return ok +} + +// whitelist is a generic rule for whitelisting +type whitelist struct { + rule +} + +// IsValid for whitelist checks if the value is within the whitelist +func (w whitelist) IsValid(value string) bool { + return w.rule.IsValid(value) +} + +// blacklist is a generic rule for blacklisting +type blacklist struct { + rule +} + +// IsValid for whitelist checks if the value is within the whitelist +func (b blacklist) IsValid(value string) bool { + return !b.rule.IsValid(value) +} + +type patterns []string + +// IsValid for patterns checks each pattern and returns if a match has +// been found +func (p patterns) IsValid(value string) bool { + for _, pattern := range p { + if strings.HasPrefix(http.CanonicalHeaderKey(value), pattern) { + return true + } + } + return false +} + +// inclusiveRules rules allow for rules to depend on one another +type inclusiveRules []rule + +// IsValid will return true if all rules are true +func (r inclusiveRules) IsValid(value string) bool { + for _, rule := range r { + if !rule.IsValid(value) { + return false + } + } + return true +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/options.go b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/options.go new file mode 100644 index 00000000..6aa2ed24 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/options.go @@ -0,0 +1,7 @@ +package v4 + +// WithUnsignedPayload will enable and set the UnsignedPayload field to +// true of the signer. 
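//
// Editorial usage sketch (not part of the vendored file; "creds" is an assumed
// *credentials.Credentials value): pass it as an option to NewSigner so the
// payload hash is left unsigned:
//
//	signer := v4.NewSigner(creds, v4.WithUnsignedPayload)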
+func WithUnsignedPayload(v4 *Signer) { + v4.UnsignedPayload = true +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go new file mode 100644 index 00000000..bd082e9d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go @@ -0,0 +1,24 @@ +// +build go1.5 + +package v4 + +import ( + "net/url" + "strings" +) + +func getURIPath(u *url.URL) string { + var uri string + + if len(u.Opaque) > 0 { + uri = "/" + strings.Join(strings.Split(u.Opaque, "/")[3:], "/") + } else { + uri = u.EscapedPath() + } + + if len(uri) == 0 { + uri = "/" + } + + return uri +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go new file mode 100644 index 00000000..6e463761 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go @@ -0,0 +1,779 @@ +// Package v4 implements signing for AWS V4 signer +// +// Provides request signing for request that need to be signed with +// AWS V4 Signatures. +// +// Standalone Signer +// +// Generally using the signer outside of the SDK should not require any additional +// logic when using Go v1.5 or higher. The signer does this by taking advantage +// of the URL.EscapedPath method. If your request URI requires additional escaping +// you many need to use the URL.Opaque to define what the raw URI should be sent +// to the service as. +// +// The signer will first check the URL.Opaque field, and use its value if set. +// The signer does require the URL.Opaque field to be set in the form of: +// +// "///" +// +// // e.g. +// "//example.com/some/path" +// +// The leading "//" and hostname are required or the URL.Opaque escaping will +// not work correctly. +// +// If URL.Opaque is not set the signer will fallback to the URL.EscapedPath() +// method and using the returned value. If you're using Go v1.4 you must set +// URL.Opaque if the URI path needs escaping. If URL.Opaque is not set with +// Go v1.5 the signer will fallback to URL.Path. +// +// AWS v4 signature validation requires that the canonical string's URI path +// element must be the URI escaped form of the HTTP request's path. +// http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html +// +// The Go HTTP client will perform escaping automatically on the request. Some +// of these escaping may cause signature validation errors because the HTTP +// request differs from the URI path or query that the signature was generated. +// https://golang.org/pkg/net/url/#URL.EscapedPath +// +// Because of this, it is recommended that when using the signer outside of the +// SDK that explicitly escaping the request prior to being signed is preferable, +// and will help prevent signature validation errors. This can be done by setting +// the URL.Opaque or URL.RawPath. The SDK will use URL.Opaque first and then +// call URL.EscapedPath() if Opaque is not set. +// +// If signing a request intended for HTTP2 server, and you're using Go 1.6.2 +// through 1.7.4 you should use the URL.RawPath as the pre-escaped form of the +// request URL. https://github.com/golang/go/issues/16847 points to a bug in +// Go pre 1.8 that fails to make HTTP2 requests using absolute URL in the HTTP +// message. URL.Opaque generally will force Go to make requests with absolute URL. +// URL.RawPath does not do this, but RawPath must be a valid escaping of Path +// or url.EscapedPath will ignore the RawPath escaping. 
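//
// As an editorial sketch (the endpoint, service name, and credentials below
// are hypothetical), standalone signing with a pre-escaped path looks roughly
// like:
//
//	creds := credentials.NewStaticCredentials("AKID", "SECRET", "")
//	signer := v4.NewSigner(creds)
//	req, _ := http.NewRequest("GET", "https://example.us-east-1.amazonaws.com/some%20path", nil)
//	req.URL.Opaque = "//example.us-east-1.amazonaws.com/some%20path"
//	_, err := signer.Sign(req, nil, "servicename", "us-east-1", time.Now())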
+// +// Test `TestStandaloneSign` provides a complete example of using the signer +// outside of the SDK and pre-escaping the URI path. +package v4 + +import ( + "crypto/hmac" + "crypto/sha256" + "encoding/hex" + "fmt" + "io" + "io/ioutil" + "net/http" + "net/url" + "sort" + "strconv" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/sdkio" + "github.com/aws/aws-sdk-go/private/protocol/rest" +) + +const ( + authHeaderPrefix = "AWS4-HMAC-SHA256" + timeFormat = "20060102T150405Z" + shortTimeFormat = "20060102" + + // emptyStringSHA256 is a SHA256 of an empty string + emptyStringSHA256 = `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855` +) + +var ignoredHeaders = rules{ + blacklist{ + mapRule{ + "Authorization": struct{}{}, + "User-Agent": struct{}{}, + "X-Amzn-Trace-Id": struct{}{}, + }, + }, +} + +// requiredSignedHeaders is a whitelist for build canonical headers. +var requiredSignedHeaders = rules{ + whitelist{ + mapRule{ + "Cache-Control": struct{}{}, + "Content-Disposition": struct{}{}, + "Content-Encoding": struct{}{}, + "Content-Language": struct{}{}, + "Content-Md5": struct{}{}, + "Content-Type": struct{}{}, + "Expires": struct{}{}, + "If-Match": struct{}{}, + "If-Modified-Since": struct{}{}, + "If-None-Match": struct{}{}, + "If-Unmodified-Since": struct{}{}, + "Range": struct{}{}, + "X-Amz-Acl": struct{}{}, + "X-Amz-Copy-Source": struct{}{}, + "X-Amz-Copy-Source-If-Match": struct{}{}, + "X-Amz-Copy-Source-If-Modified-Since": struct{}{}, + "X-Amz-Copy-Source-If-None-Match": struct{}{}, + "X-Amz-Copy-Source-If-Unmodified-Since": struct{}{}, + "X-Amz-Copy-Source-Range": struct{}{}, + "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Algorithm": struct{}{}, + "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key": struct{}{}, + "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key-Md5": struct{}{}, + "X-Amz-Grant-Full-control": struct{}{}, + "X-Amz-Grant-Read": struct{}{}, + "X-Amz-Grant-Read-Acp": struct{}{}, + "X-Amz-Grant-Write": struct{}{}, + "X-Amz-Grant-Write-Acp": struct{}{}, + "X-Amz-Metadata-Directive": struct{}{}, + "X-Amz-Mfa": struct{}{}, + "X-Amz-Request-Payer": struct{}{}, + "X-Amz-Server-Side-Encryption": struct{}{}, + "X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id": struct{}{}, + "X-Amz-Server-Side-Encryption-Customer-Algorithm": struct{}{}, + "X-Amz-Server-Side-Encryption-Customer-Key": struct{}{}, + "X-Amz-Server-Side-Encryption-Customer-Key-Md5": struct{}{}, + "X-Amz-Storage-Class": struct{}{}, + "X-Amz-Website-Redirect-Location": struct{}{}, + }, + }, + patterns{"X-Amz-Meta-"}, +} + +// allowedHoisting is a whitelist for build query headers. The boolean value +// represents whether or not it is a pattern. +var allowedQueryHoisting = inclusiveRules{ + blacklist{requiredSignedHeaders}, + patterns{"X-Amz-"}, +} + +// Signer applies AWS v4 signing to given request. Use this to sign requests +// that need to be signed with AWS V4 Signatures. +type Signer struct { + // The authentication credentials the request will be signed against. + // This value must be set to sign requests. + Credentials *credentials.Credentials + + // Sets the log level the signer should use when reporting information to + // the logger. If the logger is nil nothing will be logged. See + // aws.LogLevelType for more information on available logging levels + // + // By default nothing will be logged. 
+ Debug aws.LogLevelType + + // The logger loging information will be written to. If there the logger + // is nil, nothing will be logged. + Logger aws.Logger + + // Disables the Signer's moving HTTP header key/value pairs from the HTTP + // request header to the request's query string. This is most commonly used + // with pre-signed requests preventing headers from being added to the + // request's query string. + DisableHeaderHoisting bool + + // Disables the automatic escaping of the URI path of the request for the + // siganture's canonical string's path. For services that do not need additional + // escaping then use this to disable the signer escaping the path. + // + // S3 is an example of a service that does not need additional escaping. + // + // http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html + DisableURIPathEscaping bool + + // Disales the automatical setting of the HTTP request's Body field with the + // io.ReadSeeker passed in to the signer. This is useful if you're using a + // custom wrapper around the body for the io.ReadSeeker and want to preserve + // the Body value on the Request.Body. + // + // This does run the risk of signing a request with a body that will not be + // sent in the request. Need to ensure that the underlying data of the Body + // values are the same. + DisableRequestBodyOverwrite bool + + // currentTimeFn returns the time value which represents the current time. + // This value should only be used for testing. If it is nil the default + // time.Now will be used. + currentTimeFn func() time.Time + + // UnsignedPayload will prevent signing of the payload. This will only + // work for services that have support for this. + UnsignedPayload bool +} + +// NewSigner returns a Signer pointer configured with the credentials and optional +// option values provided. If not options are provided the Signer will use its +// default configuration. +func NewSigner(credentials *credentials.Credentials, options ...func(*Signer)) *Signer { + v4 := &Signer{ + Credentials: credentials, + } + + for _, option := range options { + option(v4) + } + + return v4 +} + +type signingCtx struct { + ServiceName string + Region string + Request *http.Request + Body io.ReadSeeker + Query url.Values + Time time.Time + ExpireTime time.Duration + SignedHeaderVals http.Header + + DisableURIPathEscaping bool + + credValues credentials.Value + isPresign bool + formattedTime string + formattedShortTime string + unsignedPayload bool + + bodyDigest string + signedHeaders string + canonicalHeaders string + canonicalString string + credentialString string + stringToSign string + signature string + authorization string +} + +// Sign signs AWS v4 requests with the provided body, service name, region the +// request is made to, and time the request is signed at. The signTime allows +// you to specify that a request is signed for the future, and cannot be +// used until then. +// +// Returns a list of HTTP headers that were included in the signature or an +// error if signing the request failed. Generally for signed requests this value +// is not needed as the full request context will be captured by the http.Request +// value. It is included for reference though. +// +// Sign will set the request's Body to be the `body` parameter passed in. If +// the body is not already an io.ReadCloser, it will be wrapped within one. If +// a `nil` body parameter passed to Sign, the request's Body field will be +// also set to nil. 
Its important to note that this functionality will not +// change the request's ContentLength of the request. +// +// Sign differs from Presign in that it will sign the request using HTTP +// header values. This type of signing is intended for http.Request values that +// will not be shared, or are shared in a way the header values on the request +// will not be lost. +// +// The requests body is an io.ReadSeeker so the SHA256 of the body can be +// generated. To bypass the signer computing the hash you can set the +// "X-Amz-Content-Sha256" header with a precomputed value. The signer will +// only compute the hash if the request header value is empty. +func (v4 Signer) Sign(r *http.Request, body io.ReadSeeker, service, region string, signTime time.Time) (http.Header, error) { + return v4.signWithBody(r, body, service, region, 0, false, signTime) +} + +// Presign signs AWS v4 requests with the provided body, service name, region +// the request is made to, and time the request is signed at. The signTime +// allows you to specify that a request is signed for the future, and cannot +// be used until then. +// +// Returns a list of HTTP headers that were included in the signature or an +// error if signing the request failed. For presigned requests these headers +// and their values must be included on the HTTP request when it is made. This +// is helpful to know what header values need to be shared with the party the +// presigned request will be distributed to. +// +// Presign differs from Sign in that it will sign the request using query string +// instead of header values. This allows you to share the Presigned Request's +// URL with third parties, or distribute it throughout your system with minimal +// dependencies. +// +// Presign also takes an exp value which is the duration the +// signed request will be valid after the signing time. This is allows you to +// set when the request will expire. +// +// The requests body is an io.ReadSeeker so the SHA256 of the body can be +// generated. To bypass the signer computing the hash you can set the +// "X-Amz-Content-Sha256" header with a precomputed value. The signer will +// only compute the hash if the request header value is empty. +// +// Presigning a S3 request will not compute the body's SHA256 hash by default. +// This is done due to the general use case for S3 presigned URLs is to share +// PUT/GET capabilities. If you would like to include the body's SHA256 in the +// presigned request's signature you can set the "X-Amz-Content-Sha256" +// HTTP header and that will be included in the request's signature. 
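//
// Editorial usage sketch (bucket URL and expiry are hypothetical; "signer" is
// an already constructed *Signer):
//
//	req, _ := http.NewRequest("GET", "https://examplebucket.s3.amazonaws.com/object-key", nil)
//	hdrs, err := signer.Presign(req, nil, "s3", "us-east-1", 15*time.Minute, time.Now())
//	// req.URL now carries the signature in its query string; hdrs lists any
//	// headers that must be sent with the request when it is used.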
+func (v4 Signer) Presign(r *http.Request, body io.ReadSeeker, service, region string, exp time.Duration, signTime time.Time) (http.Header, error) { + return v4.signWithBody(r, body, service, region, exp, true, signTime) +} + +func (v4 Signer) signWithBody(r *http.Request, body io.ReadSeeker, service, region string, exp time.Duration, isPresign bool, signTime time.Time) (http.Header, error) { + currentTimeFn := v4.currentTimeFn + if currentTimeFn == nil { + currentTimeFn = time.Now + } + + ctx := &signingCtx{ + Request: r, + Body: body, + Query: r.URL.Query(), + Time: signTime, + ExpireTime: exp, + isPresign: isPresign, + ServiceName: service, + Region: region, + DisableURIPathEscaping: v4.DisableURIPathEscaping, + unsignedPayload: v4.UnsignedPayload, + } + + for key := range ctx.Query { + sort.Strings(ctx.Query[key]) + } + + if ctx.isRequestSigned() { + ctx.Time = currentTimeFn() + ctx.handlePresignRemoval() + } + + var err error + ctx.credValues, err = v4.Credentials.Get() + if err != nil { + return http.Header{}, err + } + + ctx.sanitizeHostForHeader() + ctx.assignAmzQueryValues() + if err := ctx.build(v4.DisableHeaderHoisting); err != nil { + return nil, err + } + + // If the request is not presigned the body should be attached to it. This + // prevents the confusion of wanting to send a signed request without + // the body the request was signed for attached. + if !(v4.DisableRequestBodyOverwrite || ctx.isPresign) { + var reader io.ReadCloser + if body != nil { + var ok bool + if reader, ok = body.(io.ReadCloser); !ok { + reader = ioutil.NopCloser(body) + } + } + r.Body = reader + } + + if v4.Debug.Matches(aws.LogDebugWithSigning) { + v4.logSigningInfo(ctx) + } + + return ctx.SignedHeaderVals, nil +} + +func (ctx *signingCtx) sanitizeHostForHeader() { + request.SanitizeHostForHeader(ctx.Request) +} + +func (ctx *signingCtx) handlePresignRemoval() { + if !ctx.isPresign { + return + } + + // The credentials have expired for this request. The current signing + // is invalid, and needs to be request because the request will fail. + ctx.removePresign() + + // Update the request's query string to ensure the values stays in + // sync in the case retrieving the new credentials fails. + ctx.Request.URL.RawQuery = ctx.Query.Encode() +} + +func (ctx *signingCtx) assignAmzQueryValues() { + if ctx.isPresign { + ctx.Query.Set("X-Amz-Algorithm", authHeaderPrefix) + if ctx.credValues.SessionToken != "" { + ctx.Query.Set("X-Amz-Security-Token", ctx.credValues.SessionToken) + } else { + ctx.Query.Del("X-Amz-Security-Token") + } + + return + } + + if ctx.credValues.SessionToken != "" { + ctx.Request.Header.Set("X-Amz-Security-Token", ctx.credValues.SessionToken) + } +} + +// SignRequestHandler is a named request handler the SDK will use to sign +// service client request with using the V4 signature. +var SignRequestHandler = request.NamedHandler{ + Name: "v4.SignRequestHandler", Fn: SignSDKRequest, +} + +// SignSDKRequest signs an AWS request with the V4 signature. This +// request handler should only be used with the SDK's built in service client's +// API operation requests. +// +// This function should not be used on its on its own, but in conjunction with +// an AWS service client's API operation call. To sign a standalone request +// not created by a service client's API operation method use the "Sign" or +// "Presign" functions of the "Signer" type. +// +// If the credentials of the request's config are set to +// credentials.AnonymousCredentials the request will not be signed. 
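Before the SDK's own request handlers below, a minimal usage sketch of the standalone Signer API defined above (NewSigner and Presign). The import alias, bucket URL, and static credentials are illustrative placeholders, not part of the vendored source.

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

func main() {
	// Placeholder static credentials; any credentials.Credentials provider works.
	creds := credentials.NewStaticCredentials("AKID", "SECRET", "")
	signer := v4.NewSigner(creds)

	// Hypothetical S3 object URL to presign for a GET.
	req, err := http.NewRequest("GET", "https://examplebucket.s3.amazonaws.com/object-key", nil)
	if err != nil {
		panic(err)
	}

	// Presign for 15 minutes; the signature and credential scope are placed in
	// the query string instead of the Authorization header.
	if _, err := signer.Presign(req, nil, "s3", "us-east-1", 15*time.Minute, time.Now()); err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
}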
+func SignSDKRequest(req *request.Request) { + signSDKRequestWithCurrTime(req, time.Now) +} + +// BuildNamedHandler will build a generic handler for signing. +func BuildNamedHandler(name string, opts ...func(*Signer)) request.NamedHandler { + return request.NamedHandler{ + Name: name, + Fn: func(req *request.Request) { + signSDKRequestWithCurrTime(req, time.Now, opts...) + }, + } +} + +func signSDKRequestWithCurrTime(req *request.Request, curTimeFn func() time.Time, opts ...func(*Signer)) { + // If the request does not need to be signed ignore the signing of the + // request if the AnonymousCredentials object is used. + if req.Config.Credentials == credentials.AnonymousCredentials { + return + } + + region := req.ClientInfo.SigningRegion + if region == "" { + region = aws.StringValue(req.Config.Region) + } + + name := req.ClientInfo.SigningName + if name == "" { + name = req.ClientInfo.ServiceName + } + + v4 := NewSigner(req.Config.Credentials, func(v4 *Signer) { + v4.Debug = req.Config.LogLevel.Value() + v4.Logger = req.Config.Logger + v4.DisableHeaderHoisting = req.NotHoist + v4.currentTimeFn = curTimeFn + if name == "s3" { + // S3 service should not have any escaping applied + v4.DisableURIPathEscaping = true + } + // Prevents setting the HTTPRequest's Body. Since the Body could be + // wrapped in a custom io.Closer that we do not want to be stompped + // on top of by the signer. + v4.DisableRequestBodyOverwrite = true + }) + + for _, opt := range opts { + opt(v4) + } + + signingTime := req.Time + if !req.LastSignedAt.IsZero() { + signingTime = req.LastSignedAt + } + + signedHeaders, err := v4.signWithBody(req.HTTPRequest, req.GetBody(), + name, region, req.ExpireTime, req.ExpireTime > 0, signingTime, + ) + if err != nil { + req.Error = err + req.SignedHeaderVals = nil + return + } + + req.SignedHeaderVals = signedHeaders + req.LastSignedAt = curTimeFn() +} + +const logSignInfoMsg = `DEBUG: Request Signature: +---[ CANONICAL STRING ]----------------------------- +%s +---[ STRING TO SIGN ]-------------------------------- +%s%s +-----------------------------------------------------` +const logSignedURLMsg = ` +---[ SIGNED URL ]------------------------------------ +%s` + +func (v4 *Signer) logSigningInfo(ctx *signingCtx) { + signedURLMsg := "" + if ctx.isPresign { + signedURLMsg = fmt.Sprintf(logSignedURLMsg, ctx.Request.URL.String()) + } + msg := fmt.Sprintf(logSignInfoMsg, ctx.canonicalString, ctx.stringToSign, signedURLMsg) + v4.Logger.Log(msg) +} + +func (ctx *signingCtx) build(disableHeaderHoisting bool) error { + ctx.buildTime() // no depends + ctx.buildCredentialString() // no depends + + if err := ctx.buildBodyDigest(); err != nil { + return err + } + + unsignedHeaders := ctx.Request.Header + if ctx.isPresign { + if !disableHeaderHoisting { + urlValues := url.Values{} + urlValues, unsignedHeaders = buildQuery(allowedQueryHoisting, unsignedHeaders) // no depends + for k := range urlValues { + ctx.Query[k] = urlValues[k] + } + } + } + + ctx.buildCanonicalHeaders(ignoredHeaders, unsignedHeaders) + ctx.buildCanonicalString() // depends on canon headers / signed headers + ctx.buildStringToSign() // depends on canon string + ctx.buildSignature() // depends on string to sign + + if ctx.isPresign { + ctx.Request.URL.RawQuery += "&X-Amz-Signature=" + ctx.signature + } else { + parts := []string{ + authHeaderPrefix + " Credential=" + ctx.credValues.AccessKeyID + "/" + ctx.credentialString, + "SignedHeaders=" + ctx.signedHeaders, + "Signature=" + ctx.signature, + } + 
ctx.Request.Header.Set("Authorization", strings.Join(parts, ", ")) + } + + return nil +} + +func (ctx *signingCtx) buildTime() { + ctx.formattedTime = ctx.Time.UTC().Format(timeFormat) + ctx.formattedShortTime = ctx.Time.UTC().Format(shortTimeFormat) + + if ctx.isPresign { + duration := int64(ctx.ExpireTime / time.Second) + ctx.Query.Set("X-Amz-Date", ctx.formattedTime) + ctx.Query.Set("X-Amz-Expires", strconv.FormatInt(duration, 10)) + } else { + ctx.Request.Header.Set("X-Amz-Date", ctx.formattedTime) + } +} + +func (ctx *signingCtx) buildCredentialString() { + ctx.credentialString = strings.Join([]string{ + ctx.formattedShortTime, + ctx.Region, + ctx.ServiceName, + "aws4_request", + }, "/") + + if ctx.isPresign { + ctx.Query.Set("X-Amz-Credential", ctx.credValues.AccessKeyID+"/"+ctx.credentialString) + } +} + +func buildQuery(r rule, header http.Header) (url.Values, http.Header) { + query := url.Values{} + unsignedHeaders := http.Header{} + for k, h := range header { + if r.IsValid(k) { + query[k] = h + } else { + unsignedHeaders[k] = h + } + } + + return query, unsignedHeaders +} +func (ctx *signingCtx) buildCanonicalHeaders(r rule, header http.Header) { + var headers []string + headers = append(headers, "host") + for k, v := range header { + canonicalKey := http.CanonicalHeaderKey(k) + if !r.IsValid(canonicalKey) { + continue // ignored header + } + if ctx.SignedHeaderVals == nil { + ctx.SignedHeaderVals = make(http.Header) + } + + lowerCaseKey := strings.ToLower(k) + if _, ok := ctx.SignedHeaderVals[lowerCaseKey]; ok { + // include additional values + ctx.SignedHeaderVals[lowerCaseKey] = append(ctx.SignedHeaderVals[lowerCaseKey], v...) + continue + } + + headers = append(headers, lowerCaseKey) + ctx.SignedHeaderVals[lowerCaseKey] = v + } + sort.Strings(headers) + + ctx.signedHeaders = strings.Join(headers, ";") + + if ctx.isPresign { + ctx.Query.Set("X-Amz-SignedHeaders", ctx.signedHeaders) + } + + headerValues := make([]string, len(headers)) + for i, k := range headers { + if k == "host" { + if ctx.Request.Host != "" { + headerValues[i] = "host:" + ctx.Request.Host + } else { + headerValues[i] = "host:" + ctx.Request.URL.Host + } + } else { + headerValues[i] = k + ":" + + strings.Join(ctx.SignedHeaderVals[k], ",") + } + } + stripExcessSpaces(headerValues) + ctx.canonicalHeaders = strings.Join(headerValues, "\n") +} + +func (ctx *signingCtx) buildCanonicalString() { + ctx.Request.URL.RawQuery = strings.Replace(ctx.Query.Encode(), "+", "%20", -1) + + uri := getURIPath(ctx.Request.URL) + + if !ctx.DisableURIPathEscaping { + uri = rest.EscapePath(uri, false) + } + + ctx.canonicalString = strings.Join([]string{ + ctx.Request.Method, + uri, + ctx.Request.URL.RawQuery, + ctx.canonicalHeaders + "\n", + ctx.signedHeaders, + ctx.bodyDigest, + }, "\n") +} + +func (ctx *signingCtx) buildStringToSign() { + ctx.stringToSign = strings.Join([]string{ + authHeaderPrefix, + ctx.formattedTime, + ctx.credentialString, + hex.EncodeToString(makeSha256([]byte(ctx.canonicalString))), + }, "\n") +} + +func (ctx *signingCtx) buildSignature() { + secret := ctx.credValues.SecretAccessKey + date := makeHmac([]byte("AWS4"+secret), []byte(ctx.formattedShortTime)) + region := makeHmac(date, []byte(ctx.Region)) + service := makeHmac(region, []byte(ctx.ServiceName)) + credentials := makeHmac(service, []byte("aws4_request")) + signature := makeHmac(credentials, []byte(ctx.stringToSign)) + ctx.signature = hex.EncodeToString(signature) +} + +func (ctx *signingCtx) buildBodyDigest() error { + hash := 
ctx.Request.Header.Get("X-Amz-Content-Sha256") + if hash == "" { + if ctx.unsignedPayload || (ctx.isPresign && ctx.ServiceName == "s3") { + hash = "UNSIGNED-PAYLOAD" + } else if ctx.Body == nil { + hash = emptyStringSHA256 + } else { + if !aws.IsReaderSeekable(ctx.Body) { + return fmt.Errorf("cannot use unseekable request body %T, for signed request with body", ctx.Body) + } + hash = hex.EncodeToString(makeSha256Reader(ctx.Body)) + } + if ctx.unsignedPayload || ctx.ServiceName == "s3" || ctx.ServiceName == "glacier" { + ctx.Request.Header.Set("X-Amz-Content-Sha256", hash) + } + } + ctx.bodyDigest = hash + + return nil +} + +// isRequestSigned returns if the request is currently signed or presigned +func (ctx *signingCtx) isRequestSigned() bool { + if ctx.isPresign && ctx.Query.Get("X-Amz-Signature") != "" { + return true + } + if ctx.Request.Header.Get("Authorization") != "" { + return true + } + + return false +} + +// unsign removes signing flags for both signed and presigned requests. +func (ctx *signingCtx) removePresign() { + ctx.Query.Del("X-Amz-Algorithm") + ctx.Query.Del("X-Amz-Signature") + ctx.Query.Del("X-Amz-Security-Token") + ctx.Query.Del("X-Amz-Date") + ctx.Query.Del("X-Amz-Expires") + ctx.Query.Del("X-Amz-Credential") + ctx.Query.Del("X-Amz-SignedHeaders") +} + +func makeHmac(key []byte, data []byte) []byte { + hash := hmac.New(sha256.New, key) + hash.Write(data) + return hash.Sum(nil) +} + +func makeSha256(data []byte) []byte { + hash := sha256.New() + hash.Write(data) + return hash.Sum(nil) +} + +func makeSha256Reader(reader io.ReadSeeker) []byte { + hash := sha256.New() + start, _ := reader.Seek(0, sdkio.SeekCurrent) + defer reader.Seek(start, sdkio.SeekStart) + + io.Copy(hash, reader) + return hash.Sum(nil) +} + +const doubleSpace = " " + +// stripExcessSpaces will rewrite the passed in slice's string values to not +// contain muliple side-by-side spaces. +func stripExcessSpaces(vals []string) { + var j, k, l, m, spaces int + for i, str := range vals { + // Trim trailing spaces + for j = len(str) - 1; j >= 0 && str[j] == ' '; j-- { + } + + // Trim leading spaces + for k = 0; k < j && str[k] == ' '; k++ { + } + str = str[k : j+1] + + // Strip multiple spaces. + j = strings.Index(str, doubleSpace) + if j < 0 { + vals[i] = str + continue + } + + buf := []byte(str) + for k, m, l = j, j, len(buf); k < l; k++ { + if buf[k] == ' ' { + if spaces == 0 { + // First space. + buf[m] = buf[k] + m++ + } + spaces++ + } else { + // End of multiple spaces. + spaces = 0 + buf[m] = buf[k] + m++ + } + } + + vals[i] = string(buf[:m]) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/types.go b/vendor/github.com/aws/aws-sdk-go/aws/types.go new file mode 100644 index 00000000..8b6f2342 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/types.go @@ -0,0 +1,201 @@ +package aws + +import ( + "io" + "sync" + + "github.com/aws/aws-sdk-go/internal/sdkio" +) + +// ReadSeekCloser wraps a io.Reader returning a ReaderSeekerCloser. Should +// only be used with an io.Reader that is also an io.Seeker. Doing so may +// cause request signature errors, or request body's not sent for GET, HEAD +// and DELETE HTTP methods. +// +// Deprecated: Should only be used with io.ReadSeeker. If using for +// S3 PutObject to stream content use s3manager.Uploader instead. 
+func ReadSeekCloser(r io.Reader) ReaderSeekerCloser { + return ReaderSeekerCloser{r} +} + +// ReaderSeekerCloser represents a reader that can also delegate io.Seeker and +// io.Closer interfaces to the underlying object if they are available. +type ReaderSeekerCloser struct { + r io.Reader +} + +// IsReaderSeekable returns if the underlying reader type can be seeked. A +// io.Reader might not actually be seekable if it is the ReaderSeekerCloser +// type. +func IsReaderSeekable(r io.Reader) bool { + switch v := r.(type) { + case ReaderSeekerCloser: + return v.IsSeeker() + case *ReaderSeekerCloser: + return v.IsSeeker() + case io.ReadSeeker: + return true + default: + return false + } +} + +// Read reads from the reader up to size of p. The number of bytes read, and +// error if it occurred will be returned. +// +// If the reader is not an io.Reader zero bytes read, and nil error will be returned. +// +// Performs the same functionality as io.Reader Read +func (r ReaderSeekerCloser) Read(p []byte) (int, error) { + switch t := r.r.(type) { + case io.Reader: + return t.Read(p) + } + return 0, nil +} + +// Seek sets the offset for the next Read to offset, interpreted according to +// whence: 0 means relative to the origin of the file, 1 means relative to the +// current offset, and 2 means relative to the end. Seek returns the new offset +// and an error, if any. +// +// If the ReaderSeekerCloser is not an io.Seeker nothing will be done. +func (r ReaderSeekerCloser) Seek(offset int64, whence int) (int64, error) { + switch t := r.r.(type) { + case io.Seeker: + return t.Seek(offset, whence) + } + return int64(0), nil +} + +// IsSeeker returns if the underlying reader is also a seeker. +func (r ReaderSeekerCloser) IsSeeker() bool { + _, ok := r.r.(io.Seeker) + return ok +} + +// HasLen returns the length of the underlying reader if the value implements +// the Len() int method. +func (r ReaderSeekerCloser) HasLen() (int, bool) { + type lenner interface { + Len() int + } + + if lr, ok := r.r.(lenner); ok { + return lr.Len(), true + } + + return 0, false +} + +// GetLen returns the length of the bytes remaining in the underlying reader. +// Checks first for Len(), then io.Seeker to determine the size of the +// underlying reader. +// +// Will return -1 if the length cannot be determined. +func (r ReaderSeekerCloser) GetLen() (int64, error) { + if l, ok := r.HasLen(); ok { + return int64(l), nil + } + + if s, ok := r.r.(io.Seeker); ok { + return seekerLen(s) + } + + return -1, nil +} + +// SeekerLen attempts to get the number of bytes remaining at the seeker's +// current position. Returns the number of bytes remaining or error. +func SeekerLen(s io.Seeker) (int64, error) { + // Determine if the seeker is actually seekable. ReaderSeekerCloser + // hides the fact that a io.Readers might not actually be seekable. + switch v := s.(type) { + case ReaderSeekerCloser: + return v.GetLen() + case *ReaderSeekerCloser: + return v.GetLen() + } + + return seekerLen(s) +} + +func seekerLen(s io.Seeker) (int64, error) { + curOffset, err := s.Seek(0, sdkio.SeekCurrent) + if err != nil { + return 0, err + } + + endOffset, err := s.Seek(0, sdkio.SeekEnd) + if err != nil { + return 0, err + } + + _, err = s.Seek(curOffset, sdkio.SeekStart) + if err != nil { + return 0, err + } + + return endOffset - curOffset, nil +} + +// Close closes the ReaderSeekerCloser. +// +// If the ReaderSeekerCloser is not an io.Closer nothing will be done. 
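A short sketch of how the ReaderSeekerCloser helpers above behave; the readers used here are only illustrative.

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	// bytes.Reader implements io.Seeker, so the wrapper stays seekable and its
	// remaining length can be reported.
	body := aws.ReadSeekCloser(bytes.NewReader([]byte("hello world")))
	fmt.Println(aws.IsReaderSeekable(body)) // true

	n, _ := aws.SeekerLen(body)
	fmt.Println(n) // 11

	// io.LimitReader yields a plain io.Reader, so the wrapper reports it as not
	// seekable; the v4 signer's buildBodyDigest rejects such bodies.
	limited := aws.ReadSeekCloser(io.LimitReader(strings.NewReader("abc"), 3))
	fmt.Println(aws.IsReaderSeekable(limited)) // false
}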
+func (r ReaderSeekerCloser) Close() error { + switch t := r.r.(type) { + case io.Closer: + return t.Close() + } + return nil +} + +// A WriteAtBuffer provides a in memory buffer supporting the io.WriterAt interface +// Can be used with the s3manager.Downloader to download content to a buffer +// in memory. Safe to use concurrently. +type WriteAtBuffer struct { + buf []byte + m sync.Mutex + + // GrowthCoeff defines the growth rate of the internal buffer. By + // default, the growth rate is 1, where expanding the internal + // buffer will allocate only enough capacity to fit the new expected + // length. + GrowthCoeff float64 +} + +// NewWriteAtBuffer creates a WriteAtBuffer with an internal buffer +// provided by buf. +func NewWriteAtBuffer(buf []byte) *WriteAtBuffer { + return &WriteAtBuffer{buf: buf} +} + +// WriteAt writes a slice of bytes to a buffer starting at the position provided +// The number of bytes written will be returned, or error. Can overwrite previous +// written slices if the write ats overlap. +func (b *WriteAtBuffer) WriteAt(p []byte, pos int64) (n int, err error) { + pLen := len(p) + expLen := pos + int64(pLen) + b.m.Lock() + defer b.m.Unlock() + if int64(len(b.buf)) < expLen { + if int64(cap(b.buf)) < expLen { + if b.GrowthCoeff < 1 { + b.GrowthCoeff = 1 + } + newBuf := make([]byte, expLen, int64(b.GrowthCoeff*float64(expLen))) + copy(newBuf, b.buf) + b.buf = newBuf + } + b.buf = b.buf[:expLen] + } + copy(b.buf[pos:], p) + return pLen, nil +} + +// Bytes returns a slice of bytes written to the buffer. +func (b *WriteAtBuffer) Bytes() []byte { + b.m.Lock() + defer b.m.Unlock() + return b.buf +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/url.go b/vendor/github.com/aws/aws-sdk-go/aws/url.go new file mode 100644 index 00000000..6192b245 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/url.go @@ -0,0 +1,12 @@ +// +build go1.8 + +package aws + +import "net/url" + +// URLHostname will extract the Hostname without port from the URL value. +// +// Wrapper of net/url#URL.Hostname for backwards Go version compatibility. +func URLHostname(url *url.URL) string { + return url.Hostname() +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/url_1_7.go b/vendor/github.com/aws/aws-sdk-go/aws/url_1_7.go new file mode 100644 index 00000000..0210d272 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/url_1_7.go @@ -0,0 +1,29 @@ +// +build !go1.8 + +package aws + +import ( + "net/url" + "strings" +) + +// URLHostname will extract the Hostname without port from the URL value. +// +// Copy of Go 1.8's net/url#URL.Hostname functionality. +func URLHostname(url *url.URL) string { + return stripPort(url.Host) + +} + +// stripPort is copy of Go 1.8 url#URL.Hostname functionality. +// https://golang.org/src/net/url/url.go +func stripPort(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return hostport + } + if i := strings.IndexByte(hostport, ']'); i != -1 { + return strings.TrimPrefix(hostport[:i], "[") + } + return hostport[:colon] +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go new file mode 100644 index 00000000..8720affb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go @@ -0,0 +1,8 @@ +// Package aws provides core functionality for making requests to AWS services. 
+package aws + +// SDKName is the name of this AWS SDK +const SDKName = "aws-sdk-go" + +// SDKVersion is the version of this SDK +const SDKVersion = "1.13.59" diff --git a/vendor/github.com/aws/aws-sdk-go/awstesting/mock/mock.go b/vendor/github.com/aws/aws-sdk-go/awstesting/mock/mock.go new file mode 100644 index 00000000..1bc9290d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/awstesting/mock/mock.go @@ -0,0 +1,45 @@ +package mock + +import ( + "net/http" + "net/http/httptest" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/session" +) + +// Session is a mock session which is used to hit the mock server +var Session = func() *session.Session { + // server is the mock server that simply writes a 200 status back to the client + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + })) + + return session.Must(session.NewSession(&aws.Config{ + DisableSSL: aws.Bool(true), + Endpoint: aws.String(server.URL), + })) +}() + +// NewMockClient creates and initializes a client that will connect to the +// mock server +func NewMockClient(cfgs ...*aws.Config) *client.Client { + c := Session.ClientConfig("Mock", cfgs...) + + svc := client.New( + *c.Config, + metadata.ClientInfo{ + ServiceName: "Mock", + SigningRegion: c.SigningRegion, + Endpoint: c.Endpoint, + APIVersion: "2015-12-08", + JSONVersion: "1.1", + TargetPrefix: "MockServer", + }, + c.Handlers, + ) + + return svc +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkio/io_go1.6.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkio/io_go1.6.go new file mode 100644 index 00000000..5aa9137e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkio/io_go1.6.go @@ -0,0 +1,10 @@ +// +build !go1.7 + +package sdkio + +// Copy of Go 1.7 io package's Seeker constants. 
+const ( + SeekStart = 0 // seek relative to the origin of the file + SeekCurrent = 1 // seek relative to the current offset + SeekEnd = 2 // seek relative to the end +) diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkio/io_go1.7.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkio/io_go1.7.go new file mode 100644 index 00000000..e5f00561 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkio/io_go1.7.go @@ -0,0 +1,12 @@ +// +build go1.7 + +package sdkio + +import "io" + +// Alias for Go 1.7 io package Seeker constants +const ( + SeekStart = io.SeekStart // seek relative to the origin of the file + SeekCurrent = io.SeekCurrent // seek relative to the current offset + SeekEnd = io.SeekEnd // seek relative to the end +) diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/locked_source.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/locked_source.go new file mode 100644 index 00000000..0c9802d8 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkrand/locked_source.go @@ -0,0 +1,29 @@ +package sdkrand + +import ( + "math/rand" + "sync" + "time" +) + +// lockedSource is a thread-safe implementation of rand.Source +type lockedSource struct { + lk sync.Mutex + src rand.Source +} + +func (r *lockedSource) Int63() (n int64) { + r.lk.Lock() + n = r.src.Int63() + r.lk.Unlock() + return +} + +func (r *lockedSource) Seed(seed int64) { + r.lk.Lock() + r.src.Seed(seed) + r.lk.Unlock() +} + +// SeededRand is a new RNG using a thread safe implementation of rand.Source +var SeededRand = rand.New(&lockedSource{src: rand.NewSource(time.Now().UnixNano())}) diff --git a/vendor/github.com/aws/aws-sdk-go/internal/shareddefaults/shared_config.go b/vendor/github.com/aws/aws-sdk-go/internal/shareddefaults/shared_config.go new file mode 100644 index 00000000..ebcbc2b4 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/shareddefaults/shared_config.go @@ -0,0 +1,40 @@ +package shareddefaults + +import ( + "os" + "path/filepath" + "runtime" +) + +// SharedCredentialsFilename returns the SDK's default file path +// for the shared credentials file. +// +// Builds the shared config file path based on the OS's platform. +// +// - Linux/Unix: $HOME/.aws/credentials +// - Windows: %USERPROFILE%\.aws\credentials +func SharedCredentialsFilename() string { + return filepath.Join(UserHomeDir(), ".aws", "credentials") +} + +// SharedConfigFilename returns the SDK's default file path for +// the shared config file. +// +// Builds the shared config file path based on the OS's platform. +// +// - Linux/Unix: $HOME/.aws/config +// - Windows: %USERPROFILE%\.aws\config +func SharedConfigFilename() string { + return filepath.Join(UserHomeDir(), ".aws", "config") +} + +// UserHomeDir returns the home directory for the user the process is +// running under. +func UserHomeDir() string { + if runtime.GOOS == "windows" { // Windows + return os.Getenv("USERPROFILE") + } + + // *nix + return os.Getenv("HOME") +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/idempotency.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/idempotency.go new file mode 100644 index 00000000..53831dff --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/idempotency.go @@ -0,0 +1,75 @@ +package protocol + +import ( + "crypto/rand" + "fmt" + "reflect" +) + +// RandReader is the random reader the protocol package will use to read +// random bytes from. This is exported for testing, and should not be used. 
+var RandReader = rand.Reader + +const idempotencyTokenFillTag = `idempotencyToken` + +// CanSetIdempotencyToken returns true if the struct field should be +// automatically populated with a Idempotency token. +// +// Only *string and string type fields that are tagged with idempotencyToken +// which are not already set can be auto filled. +func CanSetIdempotencyToken(v reflect.Value, f reflect.StructField) bool { + switch u := v.Interface().(type) { + // To auto fill an Idempotency token the field must be a string, + // tagged for auto fill, and have a zero value. + case *string: + return u == nil && len(f.Tag.Get(idempotencyTokenFillTag)) != 0 + case string: + return len(u) == 0 && len(f.Tag.Get(idempotencyTokenFillTag)) != 0 + } + + return false +} + +// GetIdempotencyToken returns a randomly generated idempotency token. +func GetIdempotencyToken() string { + b := make([]byte, 16) + RandReader.Read(b) + + return UUIDVersion4(b) +} + +// SetIdempotencyToken will set the value provided with a Idempotency Token. +// Given that the value can be set. Will panic if value is not setable. +func SetIdempotencyToken(v reflect.Value) { + if v.Kind() == reflect.Ptr { + if v.IsNil() && v.CanSet() { + v.Set(reflect.New(v.Type().Elem())) + } + v = v.Elem() + } + v = reflect.Indirect(v) + + if !v.CanSet() { + panic(fmt.Sprintf("unable to set idempotnecy token %v", v)) + } + + b := make([]byte, 16) + _, err := rand.Read(b) + if err != nil { + // TODO handle error + return + } + + v.Set(reflect.ValueOf(UUIDVersion4(b))) +} + +// UUIDVersion4 returns a Version 4 random UUID from the byte slice provided +func UUIDVersion4(u []byte) string { + // https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_.28random.29 + // 13th character is "4" + u[6] = (u[6] | 0x40) & 0x4F + // 17th character is "8", "9", "a", or "b" + u[8] = (u[8] | 0x80) & 0xBF + + return fmt.Sprintf(`%X-%X-%X-%X-%X`, u[0:4], u[4:6], u[6:8], u[8:10], u[10:]) +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go new file mode 100644 index 00000000..ec765ba2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go @@ -0,0 +1,286 @@ +// Package jsonutil provides JSON serialization of AWS requests and responses. +package jsonutil + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "math" + "reflect" + "sort" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/private/protocol" +) + +var timeType = reflect.ValueOf(time.Time{}).Type() +var byteSliceType = reflect.ValueOf([]byte{}).Type() + +// BuildJSON builds a JSON string for a given object v. 
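As a hypothetical illustration of the tag-driven marshaler below, this sketch shows what BuildJSON produces for a shape using the same locationName/type tag conventions; the struct name and values are invented, and jsonutil is an SDK-internal package.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/private/protocol/json/jsonutil"
)

// CreateThingInput is a made-up shape; real SDK inputs are code-generated with
// the same tags.
type CreateThingInput struct {
	Name  *string            `locationName:"name" type:"string"`
	Count *int64             `locationName:"count" type:"long"`
	Tags  map[string]*string `locationName:"tags" type:"map"`
}

func main() {
	in := &CreateThingInput{
		Name:  aws.String("demo"),
		Count: aws.Int64(2),
		Tags:  map[string]*string{"env": aws.String("dev")},
	}

	b, err := jsonutil.BuildJSON(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"name":"demo","count":2,"tags":{"env":"dev"}}
}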
+func BuildJSON(v interface{}) ([]byte, error) { + var buf bytes.Buffer + + err := buildAny(reflect.ValueOf(v), &buf, "") + return buf.Bytes(), err +} + +func buildAny(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + origVal := value + value = reflect.Indirect(value) + if !value.IsValid() { + return nil + } + + vtype := value.Type() + + t := tag.Get("type") + if t == "" { + switch vtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if value.Type() != timeType { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := value.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + // cannot be a JSONValue map + if _, ok := value.Interface().(aws.JSONValue); !ok { + t = "map" + } + } + } + + switch t { + case "structure": + if field, ok := vtype.FieldByName("_"); ok { + tag = field.Tag + } + return buildStruct(value, buf, tag) + case "list": + return buildList(value, buf, tag) + case "map": + return buildMap(value, buf, tag) + default: + return buildScalar(origVal, buf, tag) + } +} + +func buildStruct(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + if !value.IsValid() { + return nil + } + + // unwrap payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := value.Type().FieldByName(payload) + tag = field.Tag + value = elemOf(value.FieldByName(payload)) + + if !value.IsValid() { + return nil + } + } + + buf.WriteByte('{') + + t := value.Type() + first := true + for i := 0; i < t.NumField(); i++ { + member := value.Field(i) + + // This allocates the most memory. + // Additionally, we cannot skip nil fields due to + // idempotency auto filling. + field := t.Field(i) + + if field.PkgPath != "" { + continue // ignore unexported fields + } + if field.Tag.Get("json") == "-" { + continue + } + if field.Tag.Get("location") != "" { + continue // ignore non-body elements + } + if field.Tag.Get("ignore") != "" { + continue + } + + if protocol.CanSetIdempotencyToken(member, field) { + token := protocol.GetIdempotencyToken() + member = reflect.ValueOf(&token) + } + + if (member.Kind() == reflect.Ptr || member.Kind() == reflect.Slice || member.Kind() == reflect.Map) && member.IsNil() { + continue // ignore unset fields + } + + if first { + first = false + } else { + buf.WriteByte(',') + } + + // figure out what this field is called + name := field.Name + if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + writeString(name, buf) + buf.WriteString(`:`) + + err := buildAny(member, buf, field.Tag) + if err != nil { + return err + } + + } + + buf.WriteString("}") + + return nil +} + +func buildList(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + buf.WriteString("[") + + for i := 0; i < value.Len(); i++ { + buildAny(value.Index(i), buf, "") + + if i < value.Len()-1 { + buf.WriteString(",") + } + } + + buf.WriteString("]") + + return nil +} + +type sortedValues []reflect.Value + +func (sv sortedValues) Len() int { return len(sv) } +func (sv sortedValues) Swap(i, j int) { sv[i], sv[j] = sv[j], sv[i] } +func (sv sortedValues) Less(i, j int) bool { return sv[i].String() < sv[j].String() } + +func buildMap(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + buf.WriteString("{") + + sv := sortedValues(value.MapKeys()) + sort.Sort(sv) + + for i, k := range sv { + if i > 0 { + buf.WriteByte(',') + } + + writeString(k.String(), buf) + buf.WriteString(`:`) + + buildAny(value.MapIndex(k), buf, "") + } + + 
buf.WriteString("}") + + return nil +} + +func buildScalar(v reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + // prevents allocation on the heap. + scratch := [64]byte{} + switch value := reflect.Indirect(v); value.Kind() { + case reflect.String: + writeString(value.String(), buf) + case reflect.Bool: + if value.Bool() { + buf.WriteString("true") + } else { + buf.WriteString("false") + } + case reflect.Int64: + buf.Write(strconv.AppendInt(scratch[:0], value.Int(), 10)) + case reflect.Float64: + f := value.Float() + if math.IsInf(f, 0) || math.IsNaN(f) { + return &json.UnsupportedValueError{Value: v, Str: strconv.FormatFloat(f, 'f', -1, 64)} + } + buf.Write(strconv.AppendFloat(scratch[:0], f, 'f', -1, 64)) + default: + switch converted := value.Interface().(type) { + case time.Time: + buf.Write(strconv.AppendInt(scratch[:0], converted.UTC().Unix(), 10)) + case []byte: + if !value.IsNil() { + buf.WriteByte('"') + if len(converted) < 1024 { + // for small buffers, using Encode directly is much faster. + dst := make([]byte, base64.StdEncoding.EncodedLen(len(converted))) + base64.StdEncoding.Encode(dst, converted) + buf.Write(dst) + } else { + // for large buffers, avoid unnecessary extra temporary + // buffer space. + enc := base64.NewEncoder(base64.StdEncoding, buf) + enc.Write(converted) + enc.Close() + } + buf.WriteByte('"') + } + case aws.JSONValue: + str, err := protocol.EncodeJSONValue(converted, protocol.QuotedEscape) + if err != nil { + return fmt.Errorf("unable to encode JSONValue, %v", err) + } + buf.WriteString(str) + default: + return fmt.Errorf("unsupported JSON value %v (%s)", value.Interface(), value.Type()) + } + } + return nil +} + +var hex = "0123456789abcdef" + +func writeString(s string, buf *bytes.Buffer) { + buf.WriteByte('"') + for i := 0; i < len(s); i++ { + if s[i] == '"' { + buf.WriteString(`\"`) + } else if s[i] == '\\' { + buf.WriteString(`\\`) + } else if s[i] == '\b' { + buf.WriteString(`\b`) + } else if s[i] == '\f' { + buf.WriteString(`\f`) + } else if s[i] == '\r' { + buf.WriteString(`\r`) + } else if s[i] == '\t' { + buf.WriteString(`\t`) + } else if s[i] == '\n' { + buf.WriteString(`\n`) + } else if s[i] < 32 { + buf.WriteString("\\u00") + buf.WriteByte(hex[s[i]>>4]) + buf.WriteByte(hex[s[i]&0xF]) + } else { + buf.WriteByte(s[i]) + } + } + buf.WriteByte('"') +} + +// Returns the reflection element of a value, if it is a pointer. +func elemOf(value reflect.Value) reflect.Value { + for value.Kind() == reflect.Ptr { + value = value.Elem() + } + return value +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go new file mode 100644 index 00000000..037e1e7b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go @@ -0,0 +1,226 @@ +package jsonutil + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "io" + "io/ioutil" + "reflect" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/private/protocol" +) + +// UnmarshalJSON reads a stream and unmarshals the results in object v. 
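A matching sketch for the unmarshaling direction, again with an invented shape and values; jsonutil is SDK-internal.

package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/private/protocol/json/jsonutil"
)

// GetThingOutput is a made-up shape using the same tag conventions.
type GetThingOutput struct {
	Name  *string `locationName:"name" type:"string"`
	Count *int64  `locationName:"count" type:"long"`
}

func main() {
	out := &GetThingOutput{}
	body := strings.NewReader(`{"name":"demo","count":2}`)

	if err := jsonutil.UnmarshalJSON(out, body); err != nil {
		panic(err)
	}
	fmt.Println(*out.Name, *out.Count) // demo 2
}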
+func UnmarshalJSON(v interface{}, stream io.Reader) error { + var out interface{} + + b, err := ioutil.ReadAll(stream) + if err != nil { + return err + } + + if len(b) == 0 { + return nil + } + + if err := json.Unmarshal(b, &out); err != nil { + return err + } + + return unmarshalAny(reflect.ValueOf(v), out, "") +} + +func unmarshalAny(value reflect.Value, data interface{}, tag reflect.StructTag) error { + vtype := value.Type() + if vtype.Kind() == reflect.Ptr { + vtype = vtype.Elem() // check kind of actual element type + } + + t := tag.Get("type") + if t == "" { + switch vtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if _, ok := value.Interface().(*time.Time); !ok { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := value.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + // cannot be a JSONValue map + if _, ok := value.Interface().(aws.JSONValue); !ok { + t = "map" + } + } + } + + switch t { + case "structure": + if field, ok := vtype.FieldByName("_"); ok { + tag = field.Tag + } + return unmarshalStruct(value, data, tag) + case "list": + return unmarshalList(value, data, tag) + case "map": + return unmarshalMap(value, data, tag) + default: + return unmarshalScalar(value, data, tag) + } +} + +func unmarshalStruct(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + mapData, ok := data.(map[string]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a structure (%#v)", data) + } + + t := value.Type() + if value.Kind() == reflect.Ptr { + if value.IsNil() { // create the structure if it's nil + s := reflect.New(value.Type().Elem()) + value.Set(s) + value = s + } + + value = value.Elem() + t = t.Elem() + } + + // unwrap any payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := t.FieldByName(payload) + return unmarshalAny(value.FieldByName(payload), data, field.Tag) + } + + for i := 0; i < t.NumField(); i++ { + field := t.Field(i) + if field.PkgPath != "" { + continue // ignore unexported fields + } + + // figure out what this field is called + name := field.Name + if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + member := value.FieldByIndex(field.Index) + err := unmarshalAny(member, mapData[name], field.Tag) + if err != nil { + return err + } + } + return nil +} + +func unmarshalList(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + listData, ok := data.([]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a list (%#v)", data) + } + + if value.IsNil() { + l := len(listData) + value.Set(reflect.MakeSlice(value.Type(), l, l)) + } + + for i, c := range listData { + err := unmarshalAny(value.Index(i), c, "") + if err != nil { + return err + } + } + + return nil +} + +func unmarshalMap(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + mapData, ok := data.(map[string]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a map (%#v)", data) + } + + if value.IsNil() { + value.Set(reflect.MakeMap(value.Type())) + } + + for k, v := range mapData { + kvalue := reflect.ValueOf(k) + vvalue := reflect.New(value.Type().Elem()).Elem() + + unmarshalAny(vvalue, v, "") + value.SetMapIndex(kvalue, vvalue) + } + + return nil +} + +func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTag) error { + errf := func() error { + return 
fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) + } + + switch d := data.(type) { + case nil: + return nil // nothing to do here + case string: + switch value.Interface().(type) { + case *string: + value.Set(reflect.ValueOf(&d)) + case []byte: + b, err := base64.StdEncoding.DecodeString(d) + if err != nil { + return err + } + value.Set(reflect.ValueOf(b)) + case aws.JSONValue: + // No need to use escaping as the value is a non-quoted string. + v, err := protocol.DecodeJSONValue(d, protocol.NoEscape) + if err != nil { + return err + } + value.Set(reflect.ValueOf(v)) + default: + return errf() + } + case float64: + switch value.Interface().(type) { + case *int64: + di := int64(d) + value.Set(reflect.ValueOf(&di)) + case *float64: + value.Set(reflect.ValueOf(&d)) + case *time.Time: + t := time.Unix(int64(d), 0).UTC() + value.Set(reflect.ValueOf(&t)) + default: + return errf() + } + case bool: + switch value.Interface().(type) { + case *bool: + value.Set(reflect.ValueOf(&d)) + default: + return errf() + } + default: + return fmt.Errorf("unsupported JSON value (%v)", data) + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go new file mode 100644 index 00000000..56af4dc4 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go @@ -0,0 +1,111 @@ +// Package jsonrpc provides JSON RPC utilities for serialization of AWS +// requests and responses. +package jsonrpc + +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/input/json.json build_test.go +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/output/json.json unmarshal_test.go + +import ( + "encoding/json" + "io/ioutil" + "strings" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil" + "github.com/aws/aws-sdk-go/private/protocol/rest" +) + +var emptyJSON = []byte("{}") + +// BuildHandler is a named request handler for building jsonrpc protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.jsonrpc.Build", Fn: Build} + +// UnmarshalHandler is a named request handler for unmarshaling jsonrpc protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "awssdk.jsonrpc.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling jsonrpc protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "awssdk.jsonrpc.UnmarshalMeta", Fn: UnmarshalMeta} + +// UnmarshalErrorHandler is a named request handler for unmarshaling jsonrpc protocol request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "awssdk.jsonrpc.UnmarshalError", Fn: UnmarshalError} + +// Build builds a JSON payload for a JSON RPC request. +func Build(req *request.Request) { + var buf []byte + var err error + if req.ParamsFilled() { + buf, err = jsonutil.BuildJSON(req.Params) + if err != nil { + req.Error = awserr.New("SerializationError", "failed encoding JSON RPC request", err) + return + } + } else { + buf = emptyJSON + } + + if req.ClientInfo.TargetPrefix != "" || string(buf) != "{}" { + req.SetBufferBody(buf) + } + + if req.ClientInfo.TargetPrefix != "" { + target := req.ClientInfo.TargetPrefix + "." 
+ req.Operation.Name + req.HTTPRequest.Header.Add("X-Amz-Target", target) + } + if req.ClientInfo.JSONVersion != "" { + jsonVersion := req.ClientInfo.JSONVersion + req.HTTPRequest.Header.Add("Content-Type", "application/x-amz-json-"+jsonVersion) + } +} + +// Unmarshal unmarshals a response for a JSON RPC service. +func Unmarshal(req *request.Request) { + defer req.HTTPResponse.Body.Close() + if req.DataFilled() { + err := jsonutil.UnmarshalJSON(req.Data, req.HTTPResponse.Body) + if err != nil { + req.Error = awserr.New("SerializationError", "failed decoding JSON RPC response", err) + } + } + return +} + +// UnmarshalMeta unmarshals headers from a response for a JSON RPC service. +func UnmarshalMeta(req *request.Request) { + rest.UnmarshalMeta(req) +} + +// UnmarshalError unmarshals an error response for a JSON RPC service. +func UnmarshalError(req *request.Request) { + defer req.HTTPResponse.Body.Close() + bodyBytes, err := ioutil.ReadAll(req.HTTPResponse.Body) + if err != nil { + req.Error = awserr.New("SerializationError", "failed reading JSON RPC error response", err) + return + } + if len(bodyBytes) == 0 { + req.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", req.HTTPResponse.Status, nil), + req.HTTPResponse.StatusCode, + "", + ) + return + } + var jsonErr jsonErrorResponse + if err := json.Unmarshal(bodyBytes, &jsonErr); err != nil { + req.Error = awserr.New("SerializationError", "failed decoding JSON RPC error response", err) + return + } + + codes := strings.SplitN(jsonErr.Code, "#", 2) + req.Error = awserr.NewRequestFailure( + awserr.New(codes[len(codes)-1], jsonErr.Message, nil), + req.HTTPResponse.StatusCode, + req.RequestID, + ) +} + +type jsonErrorResponse struct { + Code string `json:"__type"` + Message string `json:"message"` +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonvalue.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonvalue.go new file mode 100644 index 00000000..776d1101 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonvalue.go @@ -0,0 +1,76 @@ +package protocol + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "strconv" + + "github.com/aws/aws-sdk-go/aws" +) + +// EscapeMode is the mode that should be use for escaping a value +type EscapeMode uint + +// The modes for escaping a value before it is marshaled, and unmarshaled. +const ( + NoEscape EscapeMode = iota + Base64Escape + QuotedEscape +) + +// EncodeJSONValue marshals the value into a JSON string, and optionally base64 +// encodes the string before returning it. +// +// Will panic if the escape mode is unknown. +func EncodeJSONValue(v aws.JSONValue, escape EscapeMode) (string, error) { + b, err := json.Marshal(v) + if err != nil { + return "", err + } + + switch escape { + case NoEscape: + return string(b), nil + case Base64Escape: + return base64.StdEncoding.EncodeToString(b), nil + case QuotedEscape: + return strconv.Quote(string(b)), nil + } + + panic(fmt.Sprintf("EncodeJSONValue called with unknown EscapeMode, %v", escape)) +} + +// DecodeJSONValue will attempt to decode the string input as a JSONValue. +// Optionally decoding base64 the value first before JSON unmarshaling. +// +// Will panic if the escape mode is unknown. 
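A small round-trip sketch for EncodeJSONValue/DecodeJSONValue with an invented value; protocol is an SDK-internal package, and Base64Escape matches what header-bound JSONValues use in the REST marshaler later in this change.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/private/protocol"
)

func main() {
	val := aws.JSONValue{"metric": "latency", "threshold": 250}

	encoded, err := protocol.EncodeJSONValue(val, protocol.Base64Escape)
	if err != nil {
		panic(err)
	}

	decoded, err := protocol.DecodeJSONValue(encoded, protocol.Base64Escape)
	if err != nil {
		panic(err)
	}

	// JSON numbers decode back as float64.
	fmt.Println(decoded["metric"], decoded["threshold"]) // latency 250
}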
+func DecodeJSONValue(v string, escape EscapeMode) (aws.JSONValue, error) { + var b []byte + var err error + + switch escape { + case NoEscape: + b = []byte(v) + case Base64Escape: + b, err = base64.StdEncoding.DecodeString(v) + case QuotedEscape: + var u string + u, err = strconv.Unquote(v) + b = []byte(u) + default: + panic(fmt.Sprintf("DecodeJSONValue called with unknown EscapeMode, %v", escape)) + } + + if err != nil { + return nil, err + } + + m := aws.JSONValue{} + err = json.Unmarshal(b, &m) + if err != nil { + return nil, err + } + + return m, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/build.go new file mode 100644 index 00000000..60e5b09d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/build.go @@ -0,0 +1,36 @@ +// Package query provides serialization of AWS query requests, and responses. +package query + +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/input/query.json build_test.go + +import ( + "net/url" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol/query/queryutil" +) + +// BuildHandler is a named request handler for building query protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.query.Build", Fn: Build} + +// Build builds a request for an AWS Query service. +func Build(r *request.Request) { + body := url.Values{ + "Action": {r.Operation.Name}, + "Version": {r.ClientInfo.APIVersion}, + } + if err := queryutil.Parse(body, r.Params, false); err != nil { + r.Error = awserr.New("SerializationError", "failed encoding Query request", err) + return + } + + if !r.IsPresigned() { + r.HTTPRequest.Method = "POST" + r.HTTPRequest.Header.Set("Content-Type", "application/x-www-form-urlencoded; charset=utf-8") + r.SetBufferBody([]byte(body.Encode())) + } else { // This is a pre-signed request + r.HTTPRequest.Method = "GET" + r.HTTPRequest.URL.RawQuery = body.Encode() + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go new file mode 100644 index 00000000..5ce9cba3 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go @@ -0,0 +1,241 @@ +package queryutil + +import ( + "encoding/base64" + "fmt" + "net/url" + "reflect" + "sort" + "strconv" + "strings" + "time" + + "github.com/aws/aws-sdk-go/private/protocol" +) + +// Parse parses an object i and fills a url.Values object. The isEC2 flag +// indicates if this is the EC2 Query sub-protocol. 
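A hypothetical sketch of how Parse flattens a tagged shape into url.Values; the input struct, action, and version are invented, and passing false selects the standard (non-EC2) Query serialization.

package main

import (
	"fmt"
	"net/url"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/private/protocol/query/queryutil"
)

// DescribeThingsInput is a made-up Query-protocol input shape.
type DescribeThingsInput struct {
	Names      []*string `locationName:"Names" type:"list"`
	MaxResults *int64    `type:"integer"`
}

func main() {
	body := url.Values{
		"Action":  {"DescribeThings"},
		"Version": {"2015-01-01"},
	}

	in := &DescribeThingsInput{
		Names:      []*string{aws.String("a"), aws.String("b")},
		MaxResults: aws.Int64(2),
	}

	if err := queryutil.Parse(body, in, false); err != nil {
		panic(err)
	}

	// Unflattened lists gain a ".member.N" suffix:
	// Action=DescribeThings&MaxResults=2&Names.member.1=a&Names.member.2=b&Version=2015-01-01
	fmt.Println(body.Encode())
}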
+func Parse(body url.Values, i interface{}, isEC2 bool) error { + q := queryParser{isEC2: isEC2} + return q.parseValue(body, reflect.ValueOf(i), "", "") +} + +func elemOf(value reflect.Value) reflect.Value { + for value.Kind() == reflect.Ptr { + value = value.Elem() + } + return value +} + +type queryParser struct { + isEC2 bool +} + +func (q *queryParser) parseValue(v url.Values, value reflect.Value, prefix string, tag reflect.StructTag) error { + value = elemOf(value) + + // no need to handle zero values + if !value.IsValid() { + return nil + } + + t := tag.Get("type") + if t == "" { + switch value.Kind() { + case reflect.Struct: + t = "structure" + case reflect.Slice: + t = "list" + case reflect.Map: + t = "map" + } + } + + switch t { + case "structure": + return q.parseStruct(v, value, prefix) + case "list": + return q.parseList(v, value, prefix, tag) + case "map": + return q.parseMap(v, value, prefix, tag) + default: + return q.parseScalar(v, value, prefix, tag) + } +} + +func (q *queryParser) parseStruct(v url.Values, value reflect.Value, prefix string) error { + if !value.IsValid() { + return nil + } + + t := value.Type() + for i := 0; i < value.NumField(); i++ { + elemValue := elemOf(value.Field(i)) + field := t.Field(i) + + if field.PkgPath != "" { + continue // ignore unexported fields + } + if field.Tag.Get("ignore") != "" { + continue + } + + if protocol.CanSetIdempotencyToken(value.Field(i), field) { + token := protocol.GetIdempotencyToken() + elemValue = reflect.ValueOf(token) + } + + var name string + if q.isEC2 { + name = field.Tag.Get("queryName") + } + if name == "" { + if field.Tag.Get("flattened") != "" && field.Tag.Get("locationNameList") != "" { + name = field.Tag.Get("locationNameList") + } else if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + if name != "" && q.isEC2 { + name = strings.ToUpper(name[0:1]) + name[1:] + } + } + if name == "" { + name = field.Name + } + + if prefix != "" { + name = prefix + "." + name + } + + if err := q.parseValue(v, elemValue, name, field.Tag); err != nil { + return err + } + } + return nil +} + +func (q *queryParser) parseList(v url.Values, value reflect.Value, prefix string, tag reflect.StructTag) error { + // If it's empty, generate an empty value + if !value.IsNil() && value.Len() == 0 { + v.Set(prefix, "") + return nil + } + + if _, ok := value.Interface().([]byte); ok { + return q.parseScalar(v, value, prefix, tag) + } + + // check for unflattened list member + if !q.isEC2 && tag.Get("flattened") == "" { + if listName := tag.Get("locationNameList"); listName == "" { + prefix += ".member" + } else { + prefix += "." + listName + } + } + + for i := 0; i < value.Len(); i++ { + slicePrefix := prefix + if slicePrefix == "" { + slicePrefix = strconv.Itoa(i + 1) + } else { + slicePrefix = slicePrefix + "." + strconv.Itoa(i+1) + } + if err := q.parseValue(v, value.Index(i), slicePrefix, ""); err != nil { + return err + } + } + return nil +} + +func (q *queryParser) parseMap(v url.Values, value reflect.Value, prefix string, tag reflect.StructTag) error { + // If it's empty, generate an empty value + if !value.IsNil() && value.Len() == 0 { + v.Set(prefix, "") + return nil + } + + // check for unflattened list member + if !q.isEC2 && tag.Get("flattened") == "" { + prefix += ".entry" + } + + // sort keys for improved serialization consistency. + // this is not strictly necessary for protocol support. 
+ mapKeyValues := value.MapKeys() + mapKeys := map[string]reflect.Value{} + mapKeyNames := make([]string, len(mapKeyValues)) + for i, mapKey := range mapKeyValues { + name := mapKey.String() + mapKeys[name] = mapKey + mapKeyNames[i] = name + } + sort.Strings(mapKeyNames) + + for i, mapKeyName := range mapKeyNames { + mapKey := mapKeys[mapKeyName] + mapValue := value.MapIndex(mapKey) + + kname := tag.Get("locationNameKey") + if kname == "" { + kname = "key" + } + vname := tag.Get("locationNameValue") + if vname == "" { + vname = "value" + } + + // serialize key + var keyName string + if prefix == "" { + keyName = strconv.Itoa(i+1) + "." + kname + } else { + keyName = prefix + "." + strconv.Itoa(i+1) + "." + kname + } + + if err := q.parseValue(v, mapKey, keyName, ""); err != nil { + return err + } + + // serialize value + var valueName string + if prefix == "" { + valueName = strconv.Itoa(i+1) + "." + vname + } else { + valueName = prefix + "." + strconv.Itoa(i+1) + "." + vname + } + + if err := q.parseValue(v, mapValue, valueName, ""); err != nil { + return err + } + } + + return nil +} + +func (q *queryParser) parseScalar(v url.Values, r reflect.Value, name string, tag reflect.StructTag) error { + switch value := r.Interface().(type) { + case string: + v.Set(name, value) + case []byte: + if !r.IsNil() { + v.Set(name, base64.StdEncoding.EncodeToString(value)) + } + case bool: + v.Set(name, strconv.FormatBool(value)) + case int64: + v.Set(name, strconv.FormatInt(value, 10)) + case int: + v.Set(name, strconv.Itoa(value)) + case float64: + v.Set(name, strconv.FormatFloat(value, 'f', -1, 64)) + case float32: + v.Set(name, strconv.FormatFloat(float64(value), 'f', -1, 32)) + case time.Time: + const ISO8601UTC = "2006-01-02T15:04:05Z" + v.Set(name, value.UTC().Format(ISO8601UTC)) + default: + return fmt.Errorf("unsupported value for param %s: %v (%s)", name, r.Interface(), r.Type().Name()) + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go new file mode 100644 index 00000000..e0f4d5a5 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go @@ -0,0 +1,35 @@ +package query + +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/output/query.json unmarshal_test.go + +import ( + "encoding/xml" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil" +) + +// UnmarshalHandler is a named request handler for unmarshaling query protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "awssdk.query.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling query protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "awssdk.query.UnmarshalMeta", Fn: UnmarshalMeta} + +// Unmarshal unmarshals a response for an AWS Query service. +func Unmarshal(r *request.Request) { + defer r.HTTPResponse.Body.Close() + if r.DataFilled() { + decoder := xml.NewDecoder(r.HTTPResponse.Body) + err := xmlutil.UnmarshalXML(r.Data, decoder, r.Operation.Name+"Result") + if err != nil { + r.Error = awserr.New("SerializationError", "failed decoding Query response", err) + return + } + } +} + +// UnmarshalMeta unmarshals header response values for an AWS Query service. 
+func UnmarshalMeta(r *request.Request) { + r.RequestID = r.HTTPResponse.Header.Get("X-Amzn-Requestid") +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go new file mode 100644 index 00000000..f2142961 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go @@ -0,0 +1,66 @@ +package query + +import ( + "encoding/xml" + "io/ioutil" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +type xmlErrorResponse struct { + XMLName xml.Name `xml:"ErrorResponse"` + Code string `xml:"Error>Code"` + Message string `xml:"Error>Message"` + RequestID string `xml:"RequestId"` +} + +type xmlServiceUnavailableResponse struct { + XMLName xml.Name `xml:"ServiceUnavailableException"` +} + +// UnmarshalErrorHandler is a name request handler to unmarshal request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "awssdk.query.UnmarshalError", Fn: UnmarshalError} + +// UnmarshalError unmarshals an error response for an AWS Query service. +func UnmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + bodyBytes, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = awserr.New("SerializationError", "failed to read from query HTTP response body", err) + return + } + + // First check for specific error + resp := xmlErrorResponse{} + decodeErr := xml.Unmarshal(bodyBytes, &resp) + if decodeErr == nil { + reqID := resp.RequestID + if reqID == "" { + reqID = r.RequestID + } + r.Error = awserr.NewRequestFailure( + awserr.New(resp.Code, resp.Message, nil), + r.HTTPResponse.StatusCode, + reqID, + ) + return + } + + // Check for unhandled error + servUnavailResp := xmlServiceUnavailableResponse{} + unavailErr := xml.Unmarshal(bodyBytes, &servUnavailResp) + if unavailErr == nil { + r.Error = awserr.NewRequestFailure( + awserr.New("ServiceUnavailableException", "service is unavailable", nil), + r.HTTPResponse.StatusCode, + r.RequestID, + ) + return + } + + // Failed to retrieve any error message from the response body + r.Error = awserr.New("SerializationError", + "failed to decode query XML error response", decodeErr) +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go new file mode 100644 index 00000000..f761e0b3 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go @@ -0,0 +1,293 @@ +// Package rest provides RESTful serialization of AWS requests and responses. +package rest + +import ( + "bytes" + "encoding/base64" + "fmt" + "io" + "net/http" + "net/url" + "path" + "reflect" + "strconv" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" +) + +// RFC1123GMT is a RFC1123 (RFC822) formated timestame. This format is not +// using the standard library's time.RFC1123 due to the desire to always use +// GMT as the timezone. 
+const RFC1123GMT = "Mon, 2 Jan 2006 15:04:05 GMT" + +// Whether the byte value can be sent without escaping in AWS URLs +var noEscape [256]bool + +var errValueNotSet = fmt.Errorf("value not set") + +func init() { + for i := 0; i < len(noEscape); i++ { + // AWS expects every character except these to be escaped + noEscape[i] = (i >= 'A' && i <= 'Z') || + (i >= 'a' && i <= 'z') || + (i >= '0' && i <= '9') || + i == '-' || + i == '.' || + i == '_' || + i == '~' + } +} + +// BuildHandler is a named request handler for building rest protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.rest.Build", Fn: Build} + +// Build builds the REST component of a service request. +func Build(r *request.Request) { + if r.ParamsFilled() { + v := reflect.ValueOf(r.Params).Elem() + buildLocationElements(r, v, false) + buildBody(r, v) + } +} + +// BuildAsGET builds the REST component of a service request with the ability to hoist +// data from the body. +func BuildAsGET(r *request.Request) { + if r.ParamsFilled() { + v := reflect.ValueOf(r.Params).Elem() + buildLocationElements(r, v, true) + buildBody(r, v) + } +} + +func buildLocationElements(r *request.Request, v reflect.Value, buildGETQuery bool) { + query := r.HTTPRequest.URL.Query() + + // Setup the raw path to match the base path pattern. This is needed + // so that when the path is mutated a custom escaped version can be + // stored in RawPath that will be used by the Go client. + r.HTTPRequest.URL.RawPath = r.HTTPRequest.URL.Path + + for i := 0; i < v.NumField(); i++ { + m := v.Field(i) + if n := v.Type().Field(i).Name; n[0:1] == strings.ToLower(n[0:1]) { + continue + } + + if m.IsValid() { + field := v.Type().Field(i) + name := field.Tag.Get("locationName") + if name == "" { + name = field.Name + } + if kind := m.Kind(); kind == reflect.Ptr { + m = m.Elem() + } else if kind == reflect.Interface { + if !m.Elem().IsValid() { + continue + } + } + if !m.IsValid() { + continue + } + if field.Tag.Get("ignore") != "" { + continue + } + + var err error + switch field.Tag.Get("location") { + case "headers": // header maps + err = buildHeaderMap(&r.HTTPRequest.Header, m, field.Tag) + case "header": + err = buildHeader(&r.HTTPRequest.Header, m, name, field.Tag) + case "uri": + err = buildURI(r.HTTPRequest.URL, m, name, field.Tag) + case "querystring": + err = buildQueryString(query, m, name, field.Tag) + default: + if buildGETQuery { + err = buildQueryString(query, m, name, field.Tag) + } + } + r.Error = err + } + if r.Error != nil { + return + } + } + + r.HTTPRequest.URL.RawQuery = query.Encode() + if !aws.BoolValue(r.Config.DisableRestProtocolURICleaning) { + cleanPath(r.HTTPRequest.URL) + } +} + +func buildBody(r *request.Request, v reflect.Value) { + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + pfield, _ := v.Type().FieldByName(payloadName) + if ptag := pfield.Tag.Get("type"); ptag != "" && ptag != "structure" { + payload := reflect.Indirect(v.FieldByName(payloadName)) + if payload.IsValid() && payload.Interface() != nil { + switch reader := payload.Interface().(type) { + case io.ReadSeeker: + r.SetReaderBody(reader) + case []byte: + r.SetBufferBody(reader) + case string: + r.SetStringBody(reader) + default: + r.Error = awserr.New("SerializationError", + "failed to encode REST request", + fmt.Errorf("unknown payload type %s", payload.Type())) + } + } + } + } + } +} + +func buildHeader(header *http.Header, v reflect.Value, name string, tag reflect.StructTag) error { + 
str, err := convertType(v, tag) + if err == errValueNotSet { + return nil + } else if err != nil { + return awserr.New("SerializationError", "failed to encode REST request", err) + } + + header.Add(name, str) + + return nil +} + +func buildHeaderMap(header *http.Header, v reflect.Value, tag reflect.StructTag) error { + prefix := tag.Get("locationName") + for _, key := range v.MapKeys() { + str, err := convertType(v.MapIndex(key), tag) + if err == errValueNotSet { + continue + } else if err != nil { + return awserr.New("SerializationError", "failed to encode REST request", err) + + } + + header.Add(prefix+key.String(), str) + } + return nil +} + +func buildURI(u *url.URL, v reflect.Value, name string, tag reflect.StructTag) error { + value, err := convertType(v, tag) + if err == errValueNotSet { + return nil + } else if err != nil { + return awserr.New("SerializationError", "failed to encode REST request", err) + } + + u.Path = strings.Replace(u.Path, "{"+name+"}", value, -1) + u.Path = strings.Replace(u.Path, "{"+name+"+}", value, -1) + + u.RawPath = strings.Replace(u.RawPath, "{"+name+"}", EscapePath(value, true), -1) + u.RawPath = strings.Replace(u.RawPath, "{"+name+"+}", EscapePath(value, false), -1) + + return nil +} + +func buildQueryString(query url.Values, v reflect.Value, name string, tag reflect.StructTag) error { + switch value := v.Interface().(type) { + case []*string: + for _, item := range value { + query.Add(name, *item) + } + case map[string]*string: + for key, item := range value { + query.Add(key, *item) + } + case map[string][]*string: + for key, items := range value { + for _, item := range items { + query.Add(key, *item) + } + } + default: + str, err := convertType(v, tag) + if err == errValueNotSet { + return nil + } else if err != nil { + return awserr.New("SerializationError", "failed to encode REST request", err) + } + query.Set(name, str) + } + + return nil +} + +func cleanPath(u *url.URL) { + hasSlash := strings.HasSuffix(u.Path, "/") + + // clean up path, removing duplicate `/` + u.Path = path.Clean(u.Path) + u.RawPath = path.Clean(u.RawPath) + + if hasSlash && !strings.HasSuffix(u.Path, "/") { + u.Path += "/" + u.RawPath += "/" + } +} + +// EscapePath escapes part of a URL path in Amazon style +func EscapePath(path string, encodeSep bool) string { + var buf bytes.Buffer + for i := 0; i < len(path); i++ { + c := path[i] + if noEscape[c] || (c == '/' && !encodeSep) { + buf.WriteByte(c) + } else { + fmt.Fprintf(&buf, "%%%02X", c) + } + } + return buf.String() +} + +func convertType(v reflect.Value, tag reflect.StructTag) (str string, err error) { + v = reflect.Indirect(v) + if !v.IsValid() { + return "", errValueNotSet + } + + switch value := v.Interface().(type) { + case string: + str = value + case []byte: + str = base64.StdEncoding.EncodeToString(value) + case bool: + str = strconv.FormatBool(value) + case int64: + str = strconv.FormatInt(value, 10) + case float64: + str = strconv.FormatFloat(value, 'f', -1, 64) + case time.Time: + str = value.UTC().Format(RFC1123GMT) + case aws.JSONValue: + if len(value) == 0 { + return "", errValueNotSet + } + escaping := protocol.NoEscape + if tag.Get("location") == "header" { + escaping = protocol.Base64Escape + } + str, err = protocol.EncodeJSONValue(value, escaping) + if err != nil { + return "", fmt.Errorf("unable to encode JSONValue, %v", err) + } + default: + err := fmt.Errorf("unsupported value for param %v (%s)", v.Interface(), v.Type()) + return "", err + } + return str, nil +} diff --git 
a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/payload.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/payload.go new file mode 100644 index 00000000..4366de2e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/payload.go @@ -0,0 +1,45 @@ +package rest + +import "reflect" + +// PayloadMember returns the payload field member of i if there is one, or nil. +func PayloadMember(i interface{}) interface{} { + if i == nil { + return nil + } + + v := reflect.ValueOf(i).Elem() + if !v.IsValid() { + return nil + } + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + field, _ := v.Type().FieldByName(payloadName) + if field.Tag.Get("type") != "structure" { + return nil + } + + payload := v.FieldByName(payloadName) + if payload.IsValid() || (payload.Kind() == reflect.Ptr && !payload.IsNil()) { + return payload.Interface() + } + } + } + return nil +} + +// PayloadType returns the type of a payload field member of i if there is one, or "". +func PayloadType(i interface{}) string { + v := reflect.Indirect(reflect.ValueOf(i)) + if !v.IsValid() { + return "" + } + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + if member, ok := v.Type().FieldByName(payloadName); ok { + return member.Tag.Get("type") + } + } + } + return "" +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go new file mode 100644 index 00000000..9d4e7626 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go @@ -0,0 +1,221 @@ +package rest + +import ( + "bytes" + "encoding/base64" + "fmt" + "io" + "io/ioutil" + "net/http" + "reflect" + "strconv" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" +) + +// UnmarshalHandler is a named request handler for unmarshaling rest protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "awssdk.rest.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling rest protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "awssdk.rest.UnmarshalMeta", Fn: UnmarshalMeta} + +// Unmarshal unmarshals the REST component of a response in a REST service. 
+func Unmarshal(r *request.Request) { + if r.DataFilled() { + v := reflect.Indirect(reflect.ValueOf(r.Data)) + unmarshalBody(r, v) + } +} + +// UnmarshalMeta unmarshals the REST metadata of a response in a REST service +func UnmarshalMeta(r *request.Request) { + r.RequestID = r.HTTPResponse.Header.Get("X-Amzn-Requestid") + if r.RequestID == "" { + // Alternative version of request id in the header + r.RequestID = r.HTTPResponse.Header.Get("X-Amz-Request-Id") + } + if r.DataFilled() { + v := reflect.Indirect(reflect.ValueOf(r.Data)) + unmarshalLocationElements(r, v) + } +} + +func unmarshalBody(r *request.Request, v reflect.Value) { + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + pfield, _ := v.Type().FieldByName(payloadName) + if ptag := pfield.Tag.Get("type"); ptag != "" && ptag != "structure" { + payload := v.FieldByName(payloadName) + if payload.IsValid() { + switch payload.Interface().(type) { + case []byte: + defer r.HTTPResponse.Body.Close() + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = awserr.New("SerializationError", "failed to decode REST response", err) + } else { + payload.Set(reflect.ValueOf(b)) + } + case *string: + defer r.HTTPResponse.Body.Close() + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = awserr.New("SerializationError", "failed to decode REST response", err) + } else { + str := string(b) + payload.Set(reflect.ValueOf(&str)) + } + default: + switch payload.Type().String() { + case "io.ReadCloser": + payload.Set(reflect.ValueOf(r.HTTPResponse.Body)) + case "io.ReadSeeker": + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = awserr.New("SerializationError", + "failed to read response body", err) + return + } + payload.Set(reflect.ValueOf(ioutil.NopCloser(bytes.NewReader(b)))) + default: + io.Copy(ioutil.Discard, r.HTTPResponse.Body) + defer r.HTTPResponse.Body.Close() + r.Error = awserr.New("SerializationError", + "failed to decode REST response", + fmt.Errorf("unknown payload type %s", payload.Type())) + } + } + } + } + } + } +} + +func unmarshalLocationElements(r *request.Request, v reflect.Value) { + for i := 0; i < v.NumField(); i++ { + m, field := v.Field(i), v.Type().Field(i) + if n := field.Name; n[0:1] == strings.ToLower(n[0:1]) { + continue + } + + if m.IsValid() { + name := field.Tag.Get("locationName") + if name == "" { + name = field.Name + } + + switch field.Tag.Get("location") { + case "statusCode": + unmarshalStatusCode(m, r.HTTPResponse.StatusCode) + case "header": + err := unmarshalHeader(m, r.HTTPResponse.Header.Get(name), field.Tag) + if err != nil { + r.Error = awserr.New("SerializationError", "failed to decode REST response", err) + break + } + case "headers": + prefix := field.Tag.Get("locationName") + err := unmarshalHeaderMap(m, r.HTTPResponse.Header, prefix) + if err != nil { + r.Error = awserr.New("SerializationError", "failed to decode REST response", err) + break + } + } + } + if r.Error != nil { + return + } + } +} + +func unmarshalStatusCode(v reflect.Value, statusCode int) { + if !v.IsValid() { + return + } + + switch v.Interface().(type) { + case *int64: + s := int64(statusCode) + v.Set(reflect.ValueOf(&s)) + } +} + +func unmarshalHeaderMap(r reflect.Value, headers http.Header, prefix string) error { + switch r.Interface().(type) { + case map[string]*string: // we only support string map value types + out := map[string]*string{} + for k, v := range headers { + k = 
http.CanonicalHeaderKey(k) + if strings.HasPrefix(strings.ToLower(k), strings.ToLower(prefix)) { + out[k[len(prefix):]] = &v[0] + } + } + r.Set(reflect.ValueOf(out)) + } + return nil +} + +func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) error { + isJSONValue := tag.Get("type") == "jsonvalue" + if isJSONValue { + if len(header) == 0 { + return nil + } + } else if !v.IsValid() || (header == "" && v.Elem().Kind() != reflect.String) { + return nil + } + + switch v.Interface().(type) { + case *string: + v.Set(reflect.ValueOf(&header)) + case []byte: + b, err := base64.StdEncoding.DecodeString(header) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&b)) + case *bool: + b, err := strconv.ParseBool(header) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&b)) + case *int64: + i, err := strconv.ParseInt(header, 10, 64) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&i)) + case *float64: + f, err := strconv.ParseFloat(header, 64) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&f)) + case *time.Time: + t, err := time.Parse(time.RFC1123, header) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&t)) + case aws.JSONValue: + escaping := protocol.NoEscape + if tag.Get("location") == "header" { + escaping = protocol.Base64Escape + } + m, err := protocol.DecodeJSONValue(header, escaping) + if err != nil { + return err + } + v.Set(reflect.ValueOf(m)) + default: + err := fmt.Errorf("Unsupported value for param %v (%s)", v.Interface(), v.Type()) + return err + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go new file mode 100644 index 00000000..7bdf4c85 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go @@ -0,0 +1,69 @@ +// Package restxml provides RESTful XML serialization of AWS +// requests and responses. +package restxml + +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/input/rest-xml.json build_test.go +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/output/rest-xml.json unmarshal_test.go + +import ( + "bytes" + "encoding/xml" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol/query" + "github.com/aws/aws-sdk-go/private/protocol/rest" + "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil" +) + +// BuildHandler is a named request handler for building restxml protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.restxml.Build", Fn: Build} + +// UnmarshalHandler is a named request handler for unmarshaling restxml protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "awssdk.restxml.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling restxml protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "awssdk.restxml.UnmarshalMeta", Fn: UnmarshalMeta} + +// UnmarshalErrorHandler is a named request handler for unmarshaling restxml protocol request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "awssdk.restxml.UnmarshalError", Fn: UnmarshalError} + +// Build builds a request payload for the REST XML protocol. 
+func Build(r *request.Request) { + rest.Build(r) + + if t := rest.PayloadType(r.Params); t == "structure" || t == "" { + var buf bytes.Buffer + err := xmlutil.BuildXML(r.Params, xml.NewEncoder(&buf)) + if err != nil { + r.Error = awserr.New("SerializationError", "failed to encode rest XML request", err) + return + } + r.SetBufferBody(buf.Bytes()) + } +} + +// Unmarshal unmarshals a payload response for the REST XML protocol. +func Unmarshal(r *request.Request) { + if t := rest.PayloadType(r.Data); t == "structure" || t == "" { + defer r.HTTPResponse.Body.Close() + decoder := xml.NewDecoder(r.HTTPResponse.Body) + err := xmlutil.UnmarshalXML(r.Data, decoder, "") + if err != nil { + r.Error = awserr.New("SerializationError", "failed to decode REST XML response", err) + return + } + } else { + rest.Unmarshal(r) + } +} + +// UnmarshalMeta unmarshals response headers for the REST XML protocol. +func UnmarshalMeta(r *request.Request) { + rest.UnmarshalMeta(r) +} + +// UnmarshalError unmarshals a response error for the REST XML protocol. +func UnmarshalError(r *request.Request) { + query.UnmarshalError(r) +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/unmarshal.go new file mode 100644 index 00000000..da1a6811 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/unmarshal.go @@ -0,0 +1,21 @@ +package protocol + +import ( + "io" + "io/ioutil" + + "github.com/aws/aws-sdk-go/aws/request" +) + +// UnmarshalDiscardBodyHandler is a named request handler to empty and close a response's body +var UnmarshalDiscardBodyHandler = request.NamedHandler{Name: "awssdk.shared.UnmarshalDiscardBody", Fn: UnmarshalDiscardBody} + +// UnmarshalDiscardBody is a request handler to empty a response's body and closing it. +func UnmarshalDiscardBody(r *request.Request) { + if r.HTTPResponse == nil || r.HTTPResponse.Body == nil { + return + } + + io.Copy(ioutil.Discard, r.HTTPResponse.Body) + r.HTTPResponse.Body.Close() +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go new file mode 100644 index 00000000..7091b456 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go @@ -0,0 +1,296 @@ +// Package xmlutil provides XML serialization of AWS requests and responses. +package xmlutil + +import ( + "encoding/base64" + "encoding/xml" + "fmt" + "reflect" + "sort" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/private/protocol" +) + +// BuildXML will serialize params into an xml.Encoder. +// Error will be returned if the serialization of any of the params or nested values fails. +func BuildXML(params interface{}, e *xml.Encoder) error { + b := xmlBuilder{encoder: e, namespaces: map[string]string{}} + root := NewXMLElement(xml.Name{}) + if err := b.buildValue(reflect.ValueOf(params), root, ""); err != nil { + return err + } + for _, c := range root.Children { + for _, v := range c { + return StructToXML(e, v, false) + } + } + return nil +} + +// Returns the reflection element of a value, if it is a pointer. +func elemOf(value reflect.Value) reflect.Value { + for value.Kind() == reflect.Ptr { + value = value.Elem() + } + return value +} + +// A xmlBuilder serializes values from Go code to XML +type xmlBuilder struct { + encoder *xml.Encoder + namespaces map[string]string +} + +// buildValue generic XMLNode builder for any type. 
Will build value for their specific type +// struct, list, map, scalar. +// +// Also takes a "type" tag value to set what type a value should be converted to XMLNode as. If +// type is not provided reflect will be used to determine the value's type. +func (b *xmlBuilder) buildValue(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + value = elemOf(value) + if !value.IsValid() { // no need to handle zero values + return nil + } else if tag.Get("location") != "" { // don't handle non-body location values + return nil + } + + t := tag.Get("type") + if t == "" { + switch value.Kind() { + case reflect.Struct: + t = "structure" + case reflect.Slice: + t = "list" + case reflect.Map: + t = "map" + } + } + + switch t { + case "structure": + if field, ok := value.Type().FieldByName("_"); ok { + tag = tag + reflect.StructTag(" ") + field.Tag + } + return b.buildStruct(value, current, tag) + case "list": + return b.buildList(value, current, tag) + case "map": + return b.buildMap(value, current, tag) + default: + return b.buildScalar(value, current, tag) + } +} + +// buildStruct adds a struct and its fields to the current XMLNode. All fields any any nested +// types are converted to XMLNodes also. +func (b *xmlBuilder) buildStruct(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + if !value.IsValid() { + return nil + } + + fieldAdded := false + + // unwrap payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := value.Type().FieldByName(payload) + tag = field.Tag + value = elemOf(value.FieldByName(payload)) + + if !value.IsValid() { + return nil + } + } + + child := NewXMLElement(xml.Name{Local: tag.Get("locationName")}) + + // there is an xmlNamespace associated with this struct + if prefix, uri := tag.Get("xmlPrefix"), tag.Get("xmlURI"); uri != "" { + ns := xml.Attr{ + Name: xml.Name{Local: "xmlns"}, + Value: uri, + } + if prefix != "" { + b.namespaces[prefix] = uri // register the namespace + ns.Name.Local = "xmlns:" + prefix + } + + child.Attr = append(child.Attr, ns) + } + + t := value.Type() + for i := 0; i < value.NumField(); i++ { + member := elemOf(value.Field(i)) + field := t.Field(i) + + if field.PkgPath != "" { + continue // ignore unexported fields + } + if field.Tag.Get("ignore") != "" { + continue + } + + mTag := field.Tag + if mTag.Get("location") != "" { // skip non-body members + continue + } + + if protocol.CanSetIdempotencyToken(value.Field(i), field) { + token := protocol.GetIdempotencyToken() + member = reflect.ValueOf(token) + } + + memberName := mTag.Get("locationName") + if memberName == "" { + memberName = field.Name + mTag = reflect.StructTag(string(mTag) + ` locationName:"` + memberName + `"`) + } + if err := b.buildValue(member, child, mTag); err != nil { + return err + } + + fieldAdded = true + } + + if fieldAdded { // only append this child if we have one ore more valid members + current.AddChild(child) + } + + return nil +} + +// buildList adds the value's list items to the current XMLNode as children nodes. All +// nested values in the list are converted to XMLNodes also. 
+func (b *xmlBuilder) buildList(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + if value.IsNil() { // don't build omitted lists + return nil + } + + // check for unflattened list member + flattened := tag.Get("flattened") != "" + + xname := xml.Name{Local: tag.Get("locationName")} + if flattened { + for i := 0; i < value.Len(); i++ { + child := NewXMLElement(xname) + current.AddChild(child) + if err := b.buildValue(value.Index(i), child, ""); err != nil { + return err + } + } + } else { + list := NewXMLElement(xname) + current.AddChild(list) + + for i := 0; i < value.Len(); i++ { + iname := tag.Get("locationNameList") + if iname == "" { + iname = "member" + } + + child := NewXMLElement(xml.Name{Local: iname}) + list.AddChild(child) + if err := b.buildValue(value.Index(i), child, ""); err != nil { + return err + } + } + } + + return nil +} + +// buildMap adds the value's key/value pairs to the current XMLNode as children nodes. All +// nested values in the map are converted to XMLNodes also. +// +// Error will be returned if it is unable to build the map's values into XMLNodes +func (b *xmlBuilder) buildMap(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + if value.IsNil() { // don't build omitted maps + return nil + } + + maproot := NewXMLElement(xml.Name{Local: tag.Get("locationName")}) + current.AddChild(maproot) + current = maproot + + kname, vname := "key", "value" + if n := tag.Get("locationNameKey"); n != "" { + kname = n + } + if n := tag.Get("locationNameValue"); n != "" { + vname = n + } + + // sorting is not required for compliance, but it makes testing easier + keys := make([]string, value.Len()) + for i, k := range value.MapKeys() { + keys[i] = k.String() + } + sort.Strings(keys) + + for _, k := range keys { + v := value.MapIndex(reflect.ValueOf(k)) + + mapcur := current + if tag.Get("flattened") == "" { // add "entry" tag to non-flat maps + child := NewXMLElement(xml.Name{Local: "entry"}) + mapcur.AddChild(child) + mapcur = child + } + + kchild := NewXMLElement(xml.Name{Local: kname}) + kchild.Text = k + vchild := NewXMLElement(xml.Name{Local: vname}) + mapcur.AddChild(kchild) + mapcur.AddChild(vchild) + + if err := b.buildValue(v, vchild, ""); err != nil { + return err + } + } + + return nil +} + +// buildScalar will convert the value into a string and append it as a attribute or child +// of the current XMLNode. +// +// The value will be added as an attribute if tag contains a "xmlAttribute" attribute value. +// +// Error will be returned if the value type is unsupported. 
+func (b *xmlBuilder) buildScalar(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + var str string + switch converted := value.Interface().(type) { + case string: + str = converted + case []byte: + if !value.IsNil() { + str = base64.StdEncoding.EncodeToString(converted) + } + case bool: + str = strconv.FormatBool(converted) + case int64: + str = strconv.FormatInt(converted, 10) + case int: + str = strconv.Itoa(converted) + case float64: + str = strconv.FormatFloat(converted, 'f', -1, 64) + case float32: + str = strconv.FormatFloat(float64(converted), 'f', -1, 32) + case time.Time: + const ISO8601UTC = "2006-01-02T15:04:05Z" + str = converted.UTC().Format(ISO8601UTC) + default: + return fmt.Errorf("unsupported value for param %s: %v (%s)", + tag.Get("locationName"), value.Interface(), value.Type().Name()) + } + + xname := xml.Name{Local: tag.Get("locationName")} + if tag.Get("xmlAttribute") != "" { // put into current node's attribute list + attr := xml.Attr{Name: xname, Value: str} + current.Attr = append(current.Attr, attr) + } else { // regular text node + current.AddChild(&XMLNode{Name: xname, Text: str}) + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go new file mode 100644 index 00000000..a6c25ba3 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go @@ -0,0 +1,266 @@ +package xmlutil + +import ( + "encoding/base64" + "encoding/xml" + "fmt" + "io" + "reflect" + "strconv" + "strings" + "time" +) + +// UnmarshalXML deserializes an xml.Decoder into the container v. V +// needs to match the shape of the XML expected to be decoded. +// If the shape doesn't match unmarshaling will fail. +func UnmarshalXML(v interface{}, d *xml.Decoder, wrapper string) error { + n, err := XMLToStruct(d, nil) + if err != nil { + return err + } + if n.Children != nil { + for _, root := range n.Children { + for _, c := range root { + if wrappedChild, ok := c.Children[wrapper]; ok { + c = wrappedChild[0] // pull out wrapped element + } + + err = parse(reflect.ValueOf(v), c, "") + if err != nil { + if err == io.EOF { + return nil + } + return err + } + } + } + return nil + } + return nil +} + +// parse deserializes any value from the XMLNode. The type tag is used to infer the type, or reflect +// will be used to determine the type from r. +func parse(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + rtype := r.Type() + if rtype.Kind() == reflect.Ptr { + rtype = rtype.Elem() // check kind of actual element type + } + + t := tag.Get("type") + if t == "" { + switch rtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if _, ok := r.Interface().(*time.Time); !ok { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := r.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + t = "map" + } + } + + switch t { + case "structure": + if field, ok := rtype.FieldByName("_"); ok { + tag = field.Tag + } + return parseStruct(r, node, tag) + case "list": + return parseList(r, node, tag) + case "map": + return parseMap(r, node, tag) + default: + return parseScalar(r, node, tag) + } +} + +// parseStruct deserializes a structure and its fields from an XMLNode. Any nested +// types in the structure will also be deserialized. 
+func parseStruct(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + t := r.Type() + if r.Kind() == reflect.Ptr { + if r.IsNil() { // create the structure if it's nil + s := reflect.New(r.Type().Elem()) + r.Set(s) + r = s + } + + r = r.Elem() + t = t.Elem() + } + + // unwrap any payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := t.FieldByName(payload) + return parseStruct(r.FieldByName(payload), node, field.Tag) + } + + for i := 0; i < t.NumField(); i++ { + field := t.Field(i) + if c := field.Name[0:1]; strings.ToLower(c) == c { + continue // ignore unexported fields + } + + // figure out what this field is called + name := field.Name + if field.Tag.Get("flattened") != "" && field.Tag.Get("locationNameList") != "" { + name = field.Tag.Get("locationNameList") + } else if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + // try to find the field by name in elements + elems := node.Children[name] + + if elems == nil { // try to find the field in attributes + if val, ok := node.findElem(name); ok { + elems = []*XMLNode{{Text: val}} + } + } + + member := r.FieldByName(field.Name) + for _, elem := range elems { + err := parse(member, elem, field.Tag) + if err != nil { + return err + } + } + } + return nil +} + +// parseList deserializes a list of values from an XML node. Each list entry +// will also be deserialized. +func parseList(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + t := r.Type() + + if tag.Get("flattened") == "" { // look at all item entries + mname := "member" + if name := tag.Get("locationNameList"); name != "" { + mname = name + } + + if Children, ok := node.Children[mname]; ok { + if r.IsNil() { + r.Set(reflect.MakeSlice(t, len(Children), len(Children))) + } + + for i, c := range Children { + err := parse(r.Index(i), c, "") + if err != nil { + return err + } + } + } + } else { // flattened list means this is a single element + if r.IsNil() { + r.Set(reflect.MakeSlice(t, 0, 0)) + } + + childR := reflect.Zero(t.Elem()) + r.Set(reflect.Append(r, childR)) + err := parse(r.Index(r.Len()-1), node, "") + if err != nil { + return err + } + } + + return nil +} + +// parseMap deserializes a map from an XMLNode. The direct children of the XMLNode +// will also be deserialized as map entries. +func parseMap(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + if r.IsNil() { + r.Set(reflect.MakeMap(r.Type())) + } + + if tag.Get("flattened") == "" { // look at all child entries + for _, entry := range node.Children["entry"] { + parseMapEntry(r, entry, tag) + } + } else { // this element is itself an entry + parseMapEntry(r, node, tag) + } + + return nil +} + +// parseMapEntry deserializes a map entry from a XML node. +func parseMapEntry(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + kname, vname := "key", "value" + if n := tag.Get("locationNameKey"); n != "" { + kname = n + } + if n := tag.Get("locationNameValue"); n != "" { + vname = n + } + + keys, ok := node.Children[kname] + values := node.Children[vname] + if ok { + for i, key := range keys { + keyR := reflect.ValueOf(key.Text) + value := values[i] + valueR := reflect.New(r.Type().Elem()).Elem() + + parse(valueR, value, "") + r.SetMapIndex(keyR, valueR) + } + } + return nil +} + +// parseScaller deserializes an XMLNode value into a concrete type based on the +// interface type of r. +// +// Error is returned if the deserialization fails due to invalid type conversion, +// or unsupported interface type. 
+func parseScalar(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + switch r.Interface().(type) { + case *string: + r.Set(reflect.ValueOf(&node.Text)) + return nil + case []byte: + b, err := base64.StdEncoding.DecodeString(node.Text) + if err != nil { + return err + } + r.Set(reflect.ValueOf(b)) + case *bool: + v, err := strconv.ParseBool(node.Text) + if err != nil { + return err + } + r.Set(reflect.ValueOf(&v)) + case *int64: + v, err := strconv.ParseInt(node.Text, 10, 64) + if err != nil { + return err + } + r.Set(reflect.ValueOf(&v)) + case *float64: + v, err := strconv.ParseFloat(node.Text, 64) + if err != nil { + return err + } + r.Set(reflect.ValueOf(&v)) + case *time.Time: + const ISO8601UTC = "2006-01-02T15:04:05Z" + t, err := time.Parse(ISO8601UTC, node.Text) + if err != nil { + return err + } + r.Set(reflect.ValueOf(&t)) + default: + return fmt.Errorf("unsupported value: %v (%s)", r.Interface(), r.Type()) + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go new file mode 100644 index 00000000..3e970b62 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go @@ -0,0 +1,147 @@ +package xmlutil + +import ( + "encoding/xml" + "fmt" + "io" + "sort" +) + +// A XMLNode contains the values to be encoded or decoded. +type XMLNode struct { + Name xml.Name `json:",omitempty"` + Children map[string][]*XMLNode `json:",omitempty"` + Text string `json:",omitempty"` + Attr []xml.Attr `json:",omitempty"` + + namespaces map[string]string + parent *XMLNode +} + +// NewXMLElement returns a pointer to a new XMLNode initialized to default values. +func NewXMLElement(name xml.Name) *XMLNode { + return &XMLNode{ + Name: name, + Children: map[string][]*XMLNode{}, + Attr: []xml.Attr{}, + } +} + +// AddChild adds child to the XMLNode. +func (n *XMLNode) AddChild(child *XMLNode) { + if _, ok := n.Children[child.Name.Local]; !ok { + n.Children[child.Name.Local] = []*XMLNode{} + } + n.Children[child.Name.Local] = append(n.Children[child.Name.Local], child) +} + +// XMLToStruct converts a xml.Decoder stream to XMLNode with nested values. 
+func XMLToStruct(d *xml.Decoder, s *xml.StartElement) (*XMLNode, error) { + out := &XMLNode{} + for { + tok, err := d.Token() + if err != nil { + if err == io.EOF { + break + } else { + return out, err + } + } + + if tok == nil { + break + } + + switch typed := tok.(type) { + case xml.CharData: + out.Text = string(typed.Copy()) + case xml.StartElement: + el := typed.Copy() + out.Attr = el.Attr + if out.Children == nil { + out.Children = map[string][]*XMLNode{} + } + + name := typed.Name.Local + slice := out.Children[name] + if slice == nil { + slice = []*XMLNode{} + } + node, e := XMLToStruct(d, &el) + out.findNamespaces() + if e != nil { + return out, e + } + node.Name = typed.Name + node.findNamespaces() + tempOut := *out + // Save into a temp variable, simply because out gets squashed during + // loop iterations + node.parent = &tempOut + slice = append(slice, node) + out.Children[name] = slice + case xml.EndElement: + if s != nil && s.Name.Local == typed.Name.Local { // matching end token + return out, nil + } + out = &XMLNode{} + } + } + return out, nil +} + +func (n *XMLNode) findNamespaces() { + ns := map[string]string{} + for _, a := range n.Attr { + if a.Name.Space == "xmlns" { + ns[a.Value] = a.Name.Local + } + } + + n.namespaces = ns +} + +func (n *XMLNode) findElem(name string) (string, bool) { + for node := n; node != nil; node = node.parent { + for _, a := range node.Attr { + namespace := a.Name.Space + if v, ok := node.namespaces[namespace]; ok { + namespace = v + } + if name == fmt.Sprintf("%s:%s", namespace, a.Name.Local) { + return a.Value, true + } + } + } + return "", false +} + +// StructToXML writes an XMLNode to a xml.Encoder as tokens. +func StructToXML(e *xml.Encoder, node *XMLNode, sorted bool) error { + e.EncodeToken(xml.StartElement{Name: node.Name, Attr: node.Attr}) + + if node.Text != "" { + e.EncodeToken(xml.CharData([]byte(node.Text))) + } else if sorted { + sortedNames := []string{} + for k := range node.Children { + sortedNames = append(sortedNames, k) + } + sort.Strings(sortedNames) + + for _, k := range sortedNames { + for _, v := range node.Children[k] { + StructToXML(e, v, sorted) + } + } + } else { + for _, c := range node.Children { + for _, v := range c { + StructToXML(e, v, sorted) + } + } + } + + e.EncodeToken(xml.EndElement{Name: node.Name}) + return e.Flush() +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go new file mode 100644 index 00000000..3cd91839 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go @@ -0,0 +1,12089 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudformation + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/query" +) + +const opCancelUpdateStack = "CancelUpdateStack" + +// CancelUpdateStackRequest generates a "aws/request.Request" representing the +// client's request for the CancelUpdateStack operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See CancelUpdateStack for more information on using the CancelUpdateStack +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelUpdateStackRequest method. +// req, resp := client.CancelUpdateStackRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CancelUpdateStack +func (c *CloudFormation) CancelUpdateStackRequest(input *CancelUpdateStackInput) (req *request.Request, output *CancelUpdateStackOutput) { + op := &request.Operation{ + Name: opCancelUpdateStack, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelUpdateStackInput{} + } + + output = &CancelUpdateStackOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// CancelUpdateStack API operation for AWS CloudFormation. +// +// Cancels an update on the specified stack. If the call completes successfully, +// the stack rolls back the update and reverts to the previous stack configuration. +// +// You can cancel only stacks that are in the UPDATE_IN_PROGRESS state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation CancelUpdateStack for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTokenAlreadyExistsException "TokenAlreadyExistsException" +// A client request token already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CancelUpdateStack +func (c *CloudFormation) CancelUpdateStack(input *CancelUpdateStackInput) (*CancelUpdateStackOutput, error) { + req, out := c.CancelUpdateStackRequest(input) + return out, req.Send() +} + +// CancelUpdateStackWithContext is the same as CancelUpdateStack with the addition of +// the ability to pass a context and additional request options. +// +// See CancelUpdateStack for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) CancelUpdateStackWithContext(ctx aws.Context, input *CancelUpdateStackInput, opts ...request.Option) (*CancelUpdateStackOutput, error) { + req, out := c.CancelUpdateStackRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opContinueUpdateRollback = "ContinueUpdateRollback" + +// ContinueUpdateRollbackRequest generates a "aws/request.Request" representing the +// client's request for the ContinueUpdateRollback operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See ContinueUpdateRollback for more information on using the ContinueUpdateRollback +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ContinueUpdateRollbackRequest method. +// req, resp := client.ContinueUpdateRollbackRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ContinueUpdateRollback +func (c *CloudFormation) ContinueUpdateRollbackRequest(input *ContinueUpdateRollbackInput) (req *request.Request, output *ContinueUpdateRollbackOutput) { + op := &request.Operation{ + Name: opContinueUpdateRollback, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ContinueUpdateRollbackInput{} + } + + output = &ContinueUpdateRollbackOutput{} + req = c.newRequest(op, input, output) + return +} + +// ContinueUpdateRollback API operation for AWS CloudFormation. +// +// For a specified stack that is in the UPDATE_ROLLBACK_FAILED state, continues +// rolling it back to the UPDATE_ROLLBACK_COMPLETE state. Depending on the cause +// of the failure, you can manually fix the error (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-update-rollback-failed) +// and continue the rollback. By continuing the rollback, you can return your +// stack to a working state (the UPDATE_ROLLBACK_COMPLETE state), and then try +// to update the stack again. +// +// A stack goes into the UPDATE_ROLLBACK_FAILED state when AWS CloudFormation +// cannot roll back all changes after a failed stack update. For example, you +// might have a stack that is rolling back to an old database instance that +// was deleted outside of AWS CloudFormation. Because AWS CloudFormation doesn't +// know the database was deleted, it assumes that the database instance still +// exists and attempts to roll back to it, causing the update rollback to fail. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ContinueUpdateRollback for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTokenAlreadyExistsException "TokenAlreadyExistsException" +// A client request token already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ContinueUpdateRollback +func (c *CloudFormation) ContinueUpdateRollback(input *ContinueUpdateRollbackInput) (*ContinueUpdateRollbackOutput, error) { + req, out := c.ContinueUpdateRollbackRequest(input) + return out, req.Send() +} + +// ContinueUpdateRollbackWithContext is the same as ContinueUpdateRollback with the addition of +// the ability to pass a context and additional request options. +// +// See ContinueUpdateRollback for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CloudFormation) ContinueUpdateRollbackWithContext(ctx aws.Context, input *ContinueUpdateRollbackInput, opts ...request.Option) (*ContinueUpdateRollbackOutput, error) { + req, out := c.ContinueUpdateRollbackRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateChangeSet = "CreateChangeSet" + +// CreateChangeSetRequest generates a "aws/request.Request" representing the +// client's request for the CreateChangeSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateChangeSet for more information on using the CreateChangeSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateChangeSetRequest method. +// req, resp := client.CreateChangeSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet +func (c *CloudFormation) CreateChangeSetRequest(input *CreateChangeSetInput) (req *request.Request, output *CreateChangeSetOutput) { + op := &request.Operation{ + Name: opCreateChangeSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateChangeSetInput{} + } + + output = &CreateChangeSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateChangeSet API operation for AWS CloudFormation. +// +// Creates a list of changes that will be applied to a stack so that you can +// review the changes before executing them. You can create a change set for +// a stack that doesn't exist or an existing stack. If you create a change set +// for a stack that doesn't exist, the change set shows all of the resources +// that AWS CloudFormation will create. If you create a change set for an existing +// stack, AWS CloudFormation compares the stack's information with the information +// that you submit in the change set and lists the differences. Use change sets +// to understand which resources AWS CloudFormation will create or change, and +// how it will change resources in an existing stack, before you create or update +// a stack. +// +// To create a change set for a stack that doesn't exist, for the ChangeSetType +// parameter, specify CREATE. To create a change set for an existing stack, +// specify UPDATE for the ChangeSetType parameter. After the CreateChangeSet +// call successfully completes, AWS CloudFormation starts creating the change +// set. To check the status of the change set or to review it, use the DescribeChangeSet +// action. +// +// When you are satisfied with the changes the change set will make, execute +// the change set by using the ExecuteChangeSet action. AWS CloudFormation doesn't +// make changes until you execute the change set. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation CreateChangeSet for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeAlreadyExistsException "AlreadyExistsException" +// The resource with the name requested already exists. +// +// * ErrCodeInsufficientCapabilitiesException "InsufficientCapabilitiesException" +// The template contains resources with capabilities that weren't specified +// in the Capabilities parameter. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The quota for the resource has already been reached. +// +// For information on stack set limitations, see Limitations of StackSets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-limitations.html). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet +func (c *CloudFormation) CreateChangeSet(input *CreateChangeSetInput) (*CreateChangeSetOutput, error) { + req, out := c.CreateChangeSetRequest(input) + return out, req.Send() +} + +// CreateChangeSetWithContext is the same as CreateChangeSet with the addition of +// the ability to pass a context and additional request options. +// +// See CreateChangeSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) CreateChangeSetWithContext(ctx aws.Context, input *CreateChangeSetInput, opts ...request.Option) (*CreateChangeSetOutput, error) { + req, out := c.CreateChangeSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateStack = "CreateStack" + +// CreateStackRequest generates a "aws/request.Request" representing the +// client's request for the CreateStack operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateStack for more information on using the CreateStack +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateStackRequest method. +// req, resp := client.CreateStackRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateStack +func (c *CloudFormation) CreateStackRequest(input *CreateStackInput) (req *request.Request, output *CreateStackOutput) { + op := &request.Operation{ + Name: opCreateStack, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateStackInput{} + } + + output = &CreateStackOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateStack API operation for AWS CloudFormation. +// +// Creates a stack as specified in the template. After the call completes successfully, +// the stack creation starts. You can check the status of the stack via the +// DescribeStacks API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS CloudFormation's +// API operation CreateStack for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// The quota for the resource has already been reached. +// +// For information on stack set limitations, see Limitations of StackSets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-limitations.html). +// +// * ErrCodeAlreadyExistsException "AlreadyExistsException" +// The resource with the name requested already exists. +// +// * ErrCodeTokenAlreadyExistsException "TokenAlreadyExistsException" +// A client request token already exists. +// +// * ErrCodeInsufficientCapabilitiesException "InsufficientCapabilitiesException" +// The template contains resources with capabilities that weren't specified +// in the Capabilities parameter. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateStack +func (c *CloudFormation) CreateStack(input *CreateStackInput) (*CreateStackOutput, error) { + req, out := c.CreateStackRequest(input) + return out, req.Send() +} + +// CreateStackWithContext is the same as CreateStack with the addition of +// the ability to pass a context and additional request options. +// +// See CreateStack for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) CreateStackWithContext(ctx aws.Context, input *CreateStackInput, opts ...request.Option) (*CreateStackOutput, error) { + req, out := c.CreateStackRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateStackInstances = "CreateStackInstances" + +// CreateStackInstancesRequest generates a "aws/request.Request" representing the +// client's request for the CreateStackInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateStackInstances for more information on using the CreateStackInstances +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateStackInstancesRequest method. +// req, resp := client.CreateStackInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateStackInstances +func (c *CloudFormation) CreateStackInstancesRequest(input *CreateStackInstancesInput) (req *request.Request, output *CreateStackInstancesOutput) { + op := &request.Operation{ + Name: opCreateStackInstances, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateStackInstancesInput{} + } + + output = &CreateStackInstancesOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateStackInstances API operation for AWS CloudFormation. 
+// +// Creates stack instances for the specified accounts, within the specified +// regions. A stack instance refers to a stack in a specific account and region. +// Accounts and Regions are required parameters—you must specify at least one +// account and one region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation CreateStackInstances for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeOperationInProgressException "OperationInProgressException" +// Another operation is currently in progress for this stack set. Only one operation +// can be performed for a stack set at a given time. +// +// * ErrCodeOperationIdAlreadyExistsException "OperationIdAlreadyExistsException" +// The specified operation ID already exists. +// +// * ErrCodeStaleRequestException "StaleRequestException" +// Another operation has been performed on this stack set since the specified +// operation was performed. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The specified operation isn't valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The quota for the resource has already been reached. +// +// For information on stack set limitations, see Limitations of StackSets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-limitations.html). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateStackInstances +func (c *CloudFormation) CreateStackInstances(input *CreateStackInstancesInput) (*CreateStackInstancesOutput, error) { + req, out := c.CreateStackInstancesRequest(input) + return out, req.Send() +} + +// CreateStackInstancesWithContext is the same as CreateStackInstances with the addition of +// the ability to pass a context and additional request options. +// +// See CreateStackInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) CreateStackInstancesWithContext(ctx aws.Context, input *CreateStackInstancesInput, opts ...request.Option) (*CreateStackInstancesOutput, error) { + req, out := c.CreateStackInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateStackSet = "CreateStackSet" + +// CreateStackSetRequest generates a "aws/request.Request" representing the +// client's request for the CreateStackSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateStackSet for more information on using the CreateStackSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the CreateStackSetRequest method. +// req, resp := client.CreateStackSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateStackSet +func (c *CloudFormation) CreateStackSetRequest(input *CreateStackSetInput) (req *request.Request, output *CreateStackSetOutput) { + op := &request.Operation{ + Name: opCreateStackSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateStackSetInput{} + } + + output = &CreateStackSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateStackSet API operation for AWS CloudFormation. +// +// Creates a stack set. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation CreateStackSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNameAlreadyExistsException "NameAlreadyExistsException" +// The specified name is already in use. +// +// * ErrCodeCreatedButModifiedException "CreatedButModifiedException" +// The specified resource exists, but has been changed. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The quota for the resource has already been reached. +// +// For information on stack set limitations, see Limitations of StackSets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-limitations.html). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateStackSet +func (c *CloudFormation) CreateStackSet(input *CreateStackSetInput) (*CreateStackSetOutput, error) { + req, out := c.CreateStackSetRequest(input) + return out, req.Send() +} + +// CreateStackSetWithContext is the same as CreateStackSet with the addition of +// the ability to pass a context and additional request options. +// +// See CreateStackSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) CreateStackSetWithContext(ctx aws.Context, input *CreateStackSetInput, opts ...request.Option) (*CreateStackSetOutput, error) { + req, out := c.CreateStackSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteChangeSet = "DeleteChangeSet" + +// DeleteChangeSetRequest generates a "aws/request.Request" representing the +// client's request for the DeleteChangeSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteChangeSet for more information on using the DeleteChangeSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteChangeSetRequest method. 
+// req, resp := client.DeleteChangeSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteChangeSet +func (c *CloudFormation) DeleteChangeSetRequest(input *DeleteChangeSetInput) (req *request.Request, output *DeleteChangeSetOutput) { + op := &request.Operation{ + Name: opDeleteChangeSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteChangeSetInput{} + } + + output = &DeleteChangeSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteChangeSet API operation for AWS CloudFormation. +// +// Deletes the specified change set. Deleting change sets ensures that no one +// executes the wrong change set. +// +// If the call successfully completes, AWS CloudFormation successfully deleted +// the change set. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DeleteChangeSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidChangeSetStatusException "InvalidChangeSetStatus" +// The specified change set can't be used to update the stack. For example, +// the change set status might be CREATE_IN_PROGRESS, or the stack status might +// be UPDATE_IN_PROGRESS. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteChangeSet +func (c *CloudFormation) DeleteChangeSet(input *DeleteChangeSetInput) (*DeleteChangeSetOutput, error) { + req, out := c.DeleteChangeSetRequest(input) + return out, req.Send() +} + +// DeleteChangeSetWithContext is the same as DeleteChangeSet with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteChangeSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DeleteChangeSetWithContext(ctx aws.Context, input *DeleteChangeSetInput, opts ...request.Option) (*DeleteChangeSetOutput, error) { + req, out := c.DeleteChangeSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteStack = "DeleteStack" + +// DeleteStackRequest generates a "aws/request.Request" representing the +// client's request for the DeleteStack operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteStack for more information on using the DeleteStack +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteStackRequest method. 
+// req, resp := client.DeleteStackRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteStack +func (c *CloudFormation) DeleteStackRequest(input *DeleteStackInput) (req *request.Request, output *DeleteStackOutput) { + op := &request.Operation{ + Name: opDeleteStack, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteStackInput{} + } + + output = &DeleteStackOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteStack API operation for AWS CloudFormation. +// +// Deletes a specified stack. Once the call completes successfully, stack deletion +// starts. Deleted stacks do not show up in the DescribeStacks API if the deletion +// has been completed successfully. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DeleteStack for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTokenAlreadyExistsException "TokenAlreadyExistsException" +// A client request token already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteStack +func (c *CloudFormation) DeleteStack(input *DeleteStackInput) (*DeleteStackOutput, error) { + req, out := c.DeleteStackRequest(input) + return out, req.Send() +} + +// DeleteStackWithContext is the same as DeleteStack with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteStack for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DeleteStackWithContext(ctx aws.Context, input *DeleteStackInput, opts ...request.Option) (*DeleteStackOutput, error) { + req, out := c.DeleteStackRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteStackInstances = "DeleteStackInstances" + +// DeleteStackInstancesRequest generates a "aws/request.Request" representing the +// client's request for the DeleteStackInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteStackInstances for more information on using the DeleteStackInstances +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteStackInstancesRequest method. 
+// req, resp := client.DeleteStackInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteStackInstances +func (c *CloudFormation) DeleteStackInstancesRequest(input *DeleteStackInstancesInput) (req *request.Request, output *DeleteStackInstancesOutput) { + op := &request.Operation{ + Name: opDeleteStackInstances, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteStackInstancesInput{} + } + + output = &DeleteStackInstancesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteStackInstances API operation for AWS CloudFormation. +// +// Deletes stack instances for the specified accounts, in the specified regions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DeleteStackInstances for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeOperationInProgressException "OperationInProgressException" +// Another operation is currently in progress for this stack set. Only one operation +// can be performed for a stack set at a given time. +// +// * ErrCodeOperationIdAlreadyExistsException "OperationIdAlreadyExistsException" +// The specified operation ID already exists. +// +// * ErrCodeStaleRequestException "StaleRequestException" +// Another operation has been performed on this stack set since the specified +// operation was performed. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The specified operation isn't valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteStackInstances +func (c *CloudFormation) DeleteStackInstances(input *DeleteStackInstancesInput) (*DeleteStackInstancesOutput, error) { + req, out := c.DeleteStackInstancesRequest(input) + return out, req.Send() +} + +// DeleteStackInstancesWithContext is the same as DeleteStackInstances with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteStackInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DeleteStackInstancesWithContext(ctx aws.Context, input *DeleteStackInstancesInput, opts ...request.Option) (*DeleteStackInstancesOutput, error) { + req, out := c.DeleteStackInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteStackSet = "DeleteStackSet" + +// DeleteStackSetRequest generates a "aws/request.Request" representing the +// client's request for the DeleteStackSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
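+//
+// A minimal DeleteStackInstances sketch; an initialized client (svc) and the
+// StackSetName, Accounts, Regions, and RetainStacks fields of
+// DeleteStackInstancesInput are assumed, as are the example values.
+//
+//    _, err := svc.DeleteStackInstances(&cloudformation.DeleteStackInstancesInput{
+//        StackSetName: aws.String("example-stack-set"),
+//        Accounts:     []*string{aws.String("111111111111")},
+//        Regions:      []*string{aws.String("us-east-1")},
+//        RetainStacks: aws.Bool(false), // false: delete the stacks rather than retaining them
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//    }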
+// +// See DeleteStackSet for more information on using the DeleteStackSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteStackSetRequest method. +// req, resp := client.DeleteStackSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteStackSet +func (c *CloudFormation) DeleteStackSetRequest(input *DeleteStackSetInput) (req *request.Request, output *DeleteStackSetOutput) { + op := &request.Operation{ + Name: opDeleteStackSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteStackSetInput{} + } + + output = &DeleteStackSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteStackSet API operation for AWS CloudFormation. +// +// Deletes a stack set. Before you can delete a stack set, all of its member +// stack instances must be deleted. For more information about how to do this, +// see DeleteStackInstances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DeleteStackSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotEmptyException "StackSetNotEmptyException" +// You can't yet delete this stack set, because it still contains one or more +// stack instances. Delete all stack instances from the stack set before deleting +// the stack set. +// +// * ErrCodeOperationInProgressException "OperationInProgressException" +// Another operation is currently in progress for this stack set. Only one operation +// can be performed for a stack set at a given time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DeleteStackSet +func (c *CloudFormation) DeleteStackSet(input *DeleteStackSetInput) (*DeleteStackSetOutput, error) { + req, out := c.DeleteStackSetRequest(input) + return out, req.Send() +} + +// DeleteStackSetWithContext is the same as DeleteStackSet with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteStackSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DeleteStackSetWithContext(ctx aws.Context, input *DeleteStackSetInput, opts ...request.Option) (*DeleteStackSetOutput, error) { + req, out := c.DeleteStackSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeAccountLimits = "DescribeAccountLimits" + +// DescribeAccountLimitsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAccountLimits operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAccountLimits for more information on using the DescribeAccountLimits +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAccountLimitsRequest method. +// req, resp := client.DescribeAccountLimitsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeAccountLimits +func (c *CloudFormation) DescribeAccountLimitsRequest(input *DescribeAccountLimitsInput) (req *request.Request, output *DescribeAccountLimitsOutput) { + op := &request.Operation{ + Name: opDescribeAccountLimits, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAccountLimitsInput{} + } + + output = &DescribeAccountLimitsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAccountLimits API operation for AWS CloudFormation. +// +// Retrieves your account's AWS CloudFormation limits, such as the maximum number +// of stacks that you can create in your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeAccountLimits for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeAccountLimits +func (c *CloudFormation) DescribeAccountLimits(input *DescribeAccountLimitsInput) (*DescribeAccountLimitsOutput, error) { + req, out := c.DescribeAccountLimitsRequest(input) + return out, req.Send() +} + +// DescribeAccountLimitsWithContext is the same as DescribeAccountLimits with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAccountLimits for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeAccountLimitsWithContext(ctx aws.Context, input *DescribeAccountLimitsInput, opts ...request.Option) (*DescribeAccountLimitsOutput, error) { + req, out := c.DescribeAccountLimitsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeChangeSet = "DescribeChangeSet" + +// DescribeChangeSetRequest generates a "aws/request.Request" representing the +// client's request for the DescribeChangeSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeChangeSet for more information on using the DescribeChangeSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
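+//
+// A minimal DescribeAccountLimits sketch; an initialized client (svc) is assumed,
+// and the AccountLimits slice and its Name and Value fields on the output are
+// assumed from the types defined later in this file.
+//
+//    out, err := svc.DescribeAccountLimits(&cloudformation.DescribeAccountLimitsInput{})
+//    if err == nil {
+//        for _, l := range out.AccountLimits {
+//            fmt.Printf("%s = %d\n", aws.StringValue(l.Name), aws.Int64Value(l.Value))
+//        }
+//    }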
+// +// +// // Example sending a request using the DescribeChangeSetRequest method. +// req, resp := client.DescribeChangeSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeChangeSet +func (c *CloudFormation) DescribeChangeSetRequest(input *DescribeChangeSetInput) (req *request.Request, output *DescribeChangeSetOutput) { + op := &request.Operation{ + Name: opDescribeChangeSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeChangeSetInput{} + } + + output = &DescribeChangeSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeChangeSet API operation for AWS CloudFormation. +// +// Returns the inputs for the change set and a list of changes that AWS CloudFormation +// will make if you execute the change set. For more information, see Updating +// Stacks Using Change Sets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html) +// in the AWS CloudFormation User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeChangeSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeChangeSetNotFoundException "ChangeSetNotFound" +// The specified change set name or ID doesn't exit. To view valid change sets +// for a stack, use the ListChangeSets action. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeChangeSet +func (c *CloudFormation) DescribeChangeSet(input *DescribeChangeSetInput) (*DescribeChangeSetOutput, error) { + req, out := c.DescribeChangeSetRequest(input) + return out, req.Send() +} + +// DescribeChangeSetWithContext is the same as DescribeChangeSet with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeChangeSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeChangeSetWithContext(ctx aws.Context, input *DescribeChangeSetInput, opts ...request.Option) (*DescribeChangeSetOutput, error) { + req, out := c.DescribeChangeSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStackEvents = "DescribeStackEvents" + +// DescribeStackEventsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackEvents for more information on using the DescribeStackEvents +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
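+//
+// A minimal DescribeChangeSet sketch; an initialized client (svc), the
+// ChangeSetName and StackName fields of DescribeChangeSetInput, and the Status
+// field of the output are assumed, as are the example names.
+//
+//    out, err := svc.DescribeChangeSet(&cloudformation.DescribeChangeSetInput{
+//        ChangeSetName: aws.String("example-change-set"),
+//        StackName:     aws.String("example-stack"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == cloudformation.ErrCodeChangeSetNotFoundException {
+//        fmt.Println("no such change set; use ListChangeSets as noted above")
+//    } else if err == nil {
+//        fmt.Println(aws.StringValue(out.Status)) // assumed Status field
+//    }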
+// +// +// // Example sending a request using the DescribeStackEventsRequest method. +// req, resp := client.DescribeStackEventsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackEvents +func (c *CloudFormation) DescribeStackEventsRequest(input *DescribeStackEventsInput) (req *request.Request, output *DescribeStackEventsOutput) { + op := &request.Operation{ + Name: opDescribeStackEvents, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeStackEventsInput{} + } + + output = &DescribeStackEventsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackEvents API operation for AWS CloudFormation. +// +// Returns all stack related events for a specified stack in reverse chronological +// order. For more information about a stack's event history, go to Stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/concept-stack.html) +// in the AWS CloudFormation User Guide. +// +// You can list events for stacks that have failed to create or have been deleted +// by specifying the unique stack identifier (stack ID). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackEvents for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackEvents +func (c *CloudFormation) DescribeStackEvents(input *DescribeStackEventsInput) (*DescribeStackEventsOutput, error) { + req, out := c.DescribeStackEventsRequest(input) + return out, req.Send() +} + +// DescribeStackEventsWithContext is the same as DescribeStackEvents with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackEvents for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackEventsWithContext(ctx aws.Context, input *DescribeStackEventsInput, opts ...request.Option) (*DescribeStackEventsOutput, error) { + req, out := c.DescribeStackEventsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeStackEventsPages iterates over the pages of a DescribeStackEvents operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeStackEvents method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeStackEvents operation. 
+// pageNum := 0 +// err := client.DescribeStackEventsPages(params, +// func(page *DescribeStackEventsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) DescribeStackEventsPages(input *DescribeStackEventsInput, fn func(*DescribeStackEventsOutput, bool) bool) error { + return c.DescribeStackEventsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeStackEventsPagesWithContext same as DescribeStackEventsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackEventsPagesWithContext(ctx aws.Context, input *DescribeStackEventsInput, fn func(*DescribeStackEventsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeStackEventsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStackEventsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeStackEventsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeStackInstance = "DescribeStackInstance" + +// DescribeStackInstanceRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackInstance for more information on using the DescribeStackInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStackInstanceRequest method. +// req, resp := client.DescribeStackInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackInstance +func (c *CloudFormation) DescribeStackInstanceRequest(input *DescribeStackInstanceInput) (req *request.Request, output *DescribeStackInstanceOutput) { + op := &request.Operation{ + Name: opDescribeStackInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStackInstanceInput{} + } + + output = &DescribeStackInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackInstance API operation for AWS CloudFormation. +// +// Returns the stack instance that's associated with the specified stack set, +// AWS account, and region. +// +// For a list of stack instances that are associated with a specific stack set, +// use ListStackInstances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
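+//
+// A minimal paging sketch for DescribeStackEvents using the WithContext variant
+// shown above; an initialized client (svc), the StackName field of
+// DescribeStackEventsInput, and the StackEvents and ResourceStatus fields of the
+// output are assumed, as is the example stack name.
+//
+//    err := svc.DescribeStackEventsPagesWithContext(aws.BackgroundContext(),
+//        &cloudformation.DescribeStackEventsInput{StackName: aws.String("example-stack")},
+//        func(page *cloudformation.DescribeStackEventsOutput, lastPage bool) bool {
+//            for _, ev := range page.StackEvents {
+//                fmt.Println(aws.StringValue(ev.ResourceStatus))
+//            }
+//            return true // keep paging until the last page
+//        })
+//    if err != nil {
+//        fmt.Println(err)
+//    }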
+// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeStackInstanceNotFoundException "StackInstanceNotFoundException" +// The specified stack instance doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackInstance +func (c *CloudFormation) DescribeStackInstance(input *DescribeStackInstanceInput) (*DescribeStackInstanceOutput, error) { + req, out := c.DescribeStackInstanceRequest(input) + return out, req.Send() +} + +// DescribeStackInstanceWithContext is the same as DescribeStackInstance with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackInstanceWithContext(ctx aws.Context, input *DescribeStackInstanceInput, opts ...request.Option) (*DescribeStackInstanceOutput, error) { + req, out := c.DescribeStackInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStackResource = "DescribeStackResource" + +// DescribeStackResourceRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackResource for more information on using the DescribeStackResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStackResourceRequest method. +// req, resp := client.DescribeStackResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackResource +func (c *CloudFormation) DescribeStackResourceRequest(input *DescribeStackResourceInput) (req *request.Request, output *DescribeStackResourceOutput) { + op := &request.Operation{ + Name: opDescribeStackResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStackResourceInput{} + } + + output = &DescribeStackResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackResource API operation for AWS CloudFormation. +// +// Returns a description of the specified resource in the specified stack. +// +// For deleted stacks, DescribeStackResource returns resource information for +// up to 90 days after the stack has been deleted. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackResource for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackResource +func (c *CloudFormation) DescribeStackResource(input *DescribeStackResourceInput) (*DescribeStackResourceOutput, error) { + req, out := c.DescribeStackResourceRequest(input) + return out, req.Send() +} + +// DescribeStackResourceWithContext is the same as DescribeStackResource with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackResourceWithContext(ctx aws.Context, input *DescribeStackResourceInput, opts ...request.Option) (*DescribeStackResourceOutput, error) { + req, out := c.DescribeStackResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStackResources = "DescribeStackResources" + +// DescribeStackResourcesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackResources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackResources for more information on using the DescribeStackResources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStackResourcesRequest method. +// req, resp := client.DescribeStackResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackResources +func (c *CloudFormation) DescribeStackResourcesRequest(input *DescribeStackResourcesInput) (req *request.Request, output *DescribeStackResourcesOutput) { + op := &request.Operation{ + Name: opDescribeStackResources, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStackResourcesInput{} + } + + output = &DescribeStackResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackResources API operation for AWS CloudFormation. +// +// Returns AWS resource descriptions for running and deleted stacks. If StackName +// is specified, all the associated resources that are part of the stack are +// returned. If PhysicalResourceId is specified, the associated resources of +// the stack that the resource belongs to are returned. +// +// Only the first 100 resources will be returned. If your stack has more resources +// than this, you should use ListStackResources instead. 
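+//
+// A minimal DescribeStackResources sketch; an initialized client (svc), the
+// StackName field of DescribeStackResourcesInput, and the StackResources,
+// LogicalResourceId, and ResourceStatus fields of the output are assumed. For
+// stacks with more than 100 resources, use ListStackResources as noted above.
+//
+//    out, err := svc.DescribeStackResources(&cloudformation.DescribeStackResourcesInput{
+//        StackName: aws.String("example-stack"),
+//    })
+//    if err == nil {
+//        for _, r := range out.StackResources {
+//            fmt.Println(aws.StringValue(r.LogicalResourceId), aws.StringValue(r.ResourceStatus))
+//        }
+//    }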
+// +// For deleted stacks, DescribeStackResources returns resource information for +// up to 90 days after the stack has been deleted. +// +// You must specify either StackName or PhysicalResourceId, but not both. In +// addition, you can specify LogicalResourceId to filter the returned result. +// For more information about resources, the LogicalResourceId and PhysicalResourceId, +// go to the AWS CloudFormation User Guide (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/). +// +// A ValidationError is returned if you specify both StackName and PhysicalResourceId +// in the same request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackResources for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackResources +func (c *CloudFormation) DescribeStackResources(input *DescribeStackResourcesInput) (*DescribeStackResourcesOutput, error) { + req, out := c.DescribeStackResourcesRequest(input) + return out, req.Send() +} + +// DescribeStackResourcesWithContext is the same as DescribeStackResources with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackResources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackResourcesWithContext(ctx aws.Context, input *DescribeStackResourcesInput, opts ...request.Option) (*DescribeStackResourcesOutput, error) { + req, out := c.DescribeStackResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStackSet = "DescribeStackSet" + +// DescribeStackSetRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackSet for more information on using the DescribeStackSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStackSetRequest method. 
+// req, resp := client.DescribeStackSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackSet +func (c *CloudFormation) DescribeStackSetRequest(input *DescribeStackSetInput) (req *request.Request, output *DescribeStackSetOutput) { + op := &request.Operation{ + Name: opDescribeStackSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStackSetInput{} + } + + output = &DescribeStackSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackSet API operation for AWS CloudFormation. +// +// Returns the description of the specified stack set. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackSet +func (c *CloudFormation) DescribeStackSet(input *DescribeStackSetInput) (*DescribeStackSetOutput, error) { + req, out := c.DescribeStackSetRequest(input) + return out, req.Send() +} + +// DescribeStackSetWithContext is the same as DescribeStackSet with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackSetWithContext(ctx aws.Context, input *DescribeStackSetInput, opts ...request.Option) (*DescribeStackSetOutput, error) { + req, out := c.DescribeStackSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStackSetOperation = "DescribeStackSetOperation" + +// DescribeStackSetOperationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackSetOperation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackSetOperation for more information on using the DescribeStackSetOperation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStackSetOperationRequest method. 
+// req, resp := client.DescribeStackSetOperationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackSetOperation +func (c *CloudFormation) DescribeStackSetOperationRequest(input *DescribeStackSetOperationInput) (req *request.Request, output *DescribeStackSetOperationOutput) { + op := &request.Operation{ + Name: opDescribeStackSetOperation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStackSetOperationInput{} + } + + output = &DescribeStackSetOperationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackSetOperation API operation for AWS CloudFormation. +// +// Returns the description of the specified stack set operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackSetOperation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeOperationNotFoundException "OperationNotFoundException" +// The specified ID refers to an operation that doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackSetOperation +func (c *CloudFormation) DescribeStackSetOperation(input *DescribeStackSetOperationInput) (*DescribeStackSetOperationOutput, error) { + req, out := c.DescribeStackSetOperationRequest(input) + return out, req.Send() +} + +// DescribeStackSetOperationWithContext is the same as DescribeStackSetOperation with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackSetOperation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackSetOperationWithContext(ctx aws.Context, input *DescribeStackSetOperationInput, opts ...request.Option) (*DescribeStackSetOperationOutput, error) { + req, out := c.DescribeStackSetOperationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStacks = "DescribeStacks" + +// DescribeStacksRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStacks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStacks for more information on using the DescribeStacks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStacksRequest method. 
+// req, resp := client.DescribeStacksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStacks +func (c *CloudFormation) DescribeStacksRequest(input *DescribeStacksInput) (req *request.Request, output *DescribeStacksOutput) { + op := &request.Operation{ + Name: opDescribeStacks, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeStacksInput{} + } + + output = &DescribeStacksOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStacks API operation for AWS CloudFormation. +// +// Returns the description for the specified stack; if no stack name was specified, +// then it returns the description for all the stacks created. +// +// If the stack does not exist, an AmazonCloudFormationException is returned. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStacks for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStacks +func (c *CloudFormation) DescribeStacks(input *DescribeStacksInput) (*DescribeStacksOutput, error) { + req, out := c.DescribeStacksRequest(input) + return out, req.Send() +} + +// DescribeStacksWithContext is the same as DescribeStacks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStacks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStacksWithContext(ctx aws.Context, input *DescribeStacksInput, opts ...request.Option) (*DescribeStacksOutput, error) { + req, out := c.DescribeStacksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeStacksPages iterates over the pages of a DescribeStacks operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeStacks method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeStacks operation. +// pageNum := 0 +// err := client.DescribeStacksPages(params, +// func(page *DescribeStacksOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) DescribeStacksPages(input *DescribeStacksInput, fn func(*DescribeStacksOutput, bool) bool) error { + return c.DescribeStacksPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeStacksPagesWithContext same as DescribeStacksPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStacksPagesWithContext(ctx aws.Context, input *DescribeStacksInput, fn func(*DescribeStacksOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeStacksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStacksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeStacksOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opEstimateTemplateCost = "EstimateTemplateCost" + +// EstimateTemplateCostRequest generates a "aws/request.Request" representing the +// client's request for the EstimateTemplateCost operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See EstimateTemplateCost for more information on using the EstimateTemplateCost +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the EstimateTemplateCostRequest method. +// req, resp := client.EstimateTemplateCostRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/EstimateTemplateCost +func (c *CloudFormation) EstimateTemplateCostRequest(input *EstimateTemplateCostInput) (req *request.Request, output *EstimateTemplateCostOutput) { + op := &request.Operation{ + Name: opEstimateTemplateCost, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &EstimateTemplateCostInput{} + } + + output = &EstimateTemplateCostOutput{} + req = c.newRequest(op, input, output) + return +} + +// EstimateTemplateCost API operation for AWS CloudFormation. +// +// Returns the estimated monthly cost of a template. The return value is an +// AWS Simple Monthly Calculator URL with a query string that describes the +// resources required to run the template. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation EstimateTemplateCost for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/EstimateTemplateCost +func (c *CloudFormation) EstimateTemplateCost(input *EstimateTemplateCostInput) (*EstimateTemplateCostOutput, error) { + req, out := c.EstimateTemplateCostRequest(input) + return out, req.Send() +} + +// EstimateTemplateCostWithContext is the same as EstimateTemplateCost with the addition of +// the ability to pass a context and additional request options. +// +// See EstimateTemplateCost for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) EstimateTemplateCostWithContext(ctx aws.Context, input *EstimateTemplateCostInput, opts ...request.Option) (*EstimateTemplateCostOutput, error) { + req, out := c.EstimateTemplateCostRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opExecuteChangeSet = "ExecuteChangeSet" + +// ExecuteChangeSetRequest generates a "aws/request.Request" representing the +// client's request for the ExecuteChangeSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ExecuteChangeSet for more information on using the ExecuteChangeSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ExecuteChangeSetRequest method. +// req, resp := client.ExecuteChangeSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ExecuteChangeSet +func (c *CloudFormation) ExecuteChangeSetRequest(input *ExecuteChangeSetInput) (req *request.Request, output *ExecuteChangeSetOutput) { + op := &request.Operation{ + Name: opExecuteChangeSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ExecuteChangeSetInput{} + } + + output = &ExecuteChangeSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// ExecuteChangeSet API operation for AWS CloudFormation. +// +// Updates a stack using the input information that was provided when the specified +// change set was created. After the call successfully completes, AWS CloudFormation +// starts updating the stack. Use the DescribeStacks action to view the status +// of the update. +// +// When you execute a change set, AWS CloudFormation deletes all other change +// sets associated with the stack because they aren't valid for the updated +// stack. +// +// If a stack policy is associated with the stack, AWS CloudFormation enforces +// the policy during the update. You can't specify a temporary stack policy +// that overrides the current policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ExecuteChangeSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidChangeSetStatusException "InvalidChangeSetStatus" +// The specified change set can't be used to update the stack. For example, +// the change set status might be CREATE_IN_PROGRESS, or the stack status might +// be UPDATE_IN_PROGRESS. +// +// * ErrCodeChangeSetNotFoundException "ChangeSetNotFound" +// The specified change set name or ID doesn't exit. To view valid change sets +// for a stack, use the ListChangeSets action. 
+// +// * ErrCodeInsufficientCapabilitiesException "InsufficientCapabilitiesException" +// The template contains resources with capabilities that weren't specified +// in the Capabilities parameter. +// +// * ErrCodeTokenAlreadyExistsException "TokenAlreadyExistsException" +// A client request token already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ExecuteChangeSet +func (c *CloudFormation) ExecuteChangeSet(input *ExecuteChangeSetInput) (*ExecuteChangeSetOutput, error) { + req, out := c.ExecuteChangeSetRequest(input) + return out, req.Send() +} + +// ExecuteChangeSetWithContext is the same as ExecuteChangeSet with the addition of +// the ability to pass a context and additional request options. +// +// See ExecuteChangeSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ExecuteChangeSetWithContext(ctx aws.Context, input *ExecuteChangeSetInput, opts ...request.Option) (*ExecuteChangeSetOutput, error) { + req, out := c.ExecuteChangeSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetStackPolicy = "GetStackPolicy" + +// GetStackPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetStackPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetStackPolicy for more information on using the GetStackPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetStackPolicyRequest method. +// req, resp := client.GetStackPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/GetStackPolicy +func (c *CloudFormation) GetStackPolicyRequest(input *GetStackPolicyInput) (req *request.Request, output *GetStackPolicyOutput) { + op := &request.Operation{ + Name: opGetStackPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetStackPolicyInput{} + } + + output = &GetStackPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetStackPolicy API operation for AWS CloudFormation. +// +// Returns the stack policy for a specified stack. If a stack doesn't have a +// policy, a null value is returned. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation GetStackPolicy for usage and error information. 
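// Illustrative usage sketch for ExecuteChangeSet, including handling of the error codes
// listed above via awserr. The change set and stack names are placeholder assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	_, err := svc.ExecuteChangeSet(&cloudformation.ExecuteChangeSetInput{
		ChangeSetName: aws.String("my-change-set"), // placeholder
		StackName:     aws.String("my-stack"),      // placeholder
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case cloudformation.ErrCodeChangeSetNotFoundException:
				log.Fatal("change set does not exist; use ListChangeSets to see valid change sets")
			case cloudformation.ErrCodeInvalidChangeSetStatusException:
				log.Fatal("change set is not in an executable state")
			}
		}
		log.Fatal(err)
	}
	fmt.Println("execution started; poll DescribeStacks for the update status")
}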
+// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/GetStackPolicy +func (c *CloudFormation) GetStackPolicy(input *GetStackPolicyInput) (*GetStackPolicyOutput, error) { + req, out := c.GetStackPolicyRequest(input) + return out, req.Send() +} + +// GetStackPolicyWithContext is the same as GetStackPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetStackPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) GetStackPolicyWithContext(ctx aws.Context, input *GetStackPolicyInput, opts ...request.Option) (*GetStackPolicyOutput, error) { + req, out := c.GetStackPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetTemplate = "GetTemplate" + +// GetTemplateRequest generates a "aws/request.Request" representing the +// client's request for the GetTemplate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetTemplate for more information on using the GetTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetTemplateRequest method. +// req, resp := client.GetTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/GetTemplate +func (c *CloudFormation) GetTemplateRequest(input *GetTemplateInput) (req *request.Request, output *GetTemplateOutput) { + op := &request.Operation{ + Name: opGetTemplate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetTemplateInput{} + } + + output = &GetTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetTemplate API operation for AWS CloudFormation. +// +// Returns the template body for a specified stack. You can get the template +// for running or deleted stacks. +// +// For deleted stacks, GetTemplate returns the template for up to 90 days after +// the stack has been deleted. +// +// If the template does not exist, a ValidationError is returned. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation GetTemplate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeChangeSetNotFoundException "ChangeSetNotFound" +// The specified change set name or ID doesn't exit. To view valid change sets +// for a stack, use the ListChangeSets action. 
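// Illustrative usage sketch for GetStackPolicy: a stack without a policy yields no body.
// "my-stack" is a placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.GetStackPolicy(&cloudformation.GetStackPolicyInput{
		StackName: aws.String("my-stack"),
	})
	if err != nil {
		log.Fatal(err)
	}
	if out.StackPolicyBody == nil {
		fmt.Println("no stack policy set")
		return
	}
	fmt.Println(aws.StringValue(out.StackPolicyBody)) // JSON policy document
}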
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/GetTemplate +func (c *CloudFormation) GetTemplate(input *GetTemplateInput) (*GetTemplateOutput, error) { + req, out := c.GetTemplateRequest(input) + return out, req.Send() +} + +// GetTemplateWithContext is the same as GetTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See GetTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) GetTemplateWithContext(ctx aws.Context, input *GetTemplateInput, opts ...request.Option) (*GetTemplateOutput, error) { + req, out := c.GetTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetTemplateSummary = "GetTemplateSummary" + +// GetTemplateSummaryRequest generates a "aws/request.Request" representing the +// client's request for the GetTemplateSummary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetTemplateSummary for more information on using the GetTemplateSummary +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetTemplateSummaryRequest method. +// req, resp := client.GetTemplateSummaryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/GetTemplateSummary +func (c *CloudFormation) GetTemplateSummaryRequest(input *GetTemplateSummaryInput) (req *request.Request, output *GetTemplateSummaryOutput) { + op := &request.Operation{ + Name: opGetTemplateSummary, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetTemplateSummaryInput{} + } + + output = &GetTemplateSummaryOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetTemplateSummary API operation for AWS CloudFormation. +// +// Returns information about a new or existing template. The GetTemplateSummary +// action is useful for viewing parameter information, such as default parameter +// values and parameter types, before you create or update a stack or stack +// set. +// +// You can use the GetTemplateSummary action when you submit a template, or +// you can get template information for a stack set, or a running or deleted +// stack. +// +// For deleted stacks, GetTemplateSummary returns the template information for +// up to 90 days after the stack has been deleted. If the template does not +// exist, a ValidationError is returned. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation GetTemplateSummary for usage and error information. 
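// Illustrative usage sketch for GetTemplate: fetch the template body of a running (or
// recently deleted) stack. "my-stack" is a placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.GetTemplate(&cloudformation.GetTemplateInput{
		StackName: aws.String("my-stack"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.TemplateBody))
}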
+// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/GetTemplateSummary +func (c *CloudFormation) GetTemplateSummary(input *GetTemplateSummaryInput) (*GetTemplateSummaryOutput, error) { + req, out := c.GetTemplateSummaryRequest(input) + return out, req.Send() +} + +// GetTemplateSummaryWithContext is the same as GetTemplateSummary with the addition of +// the ability to pass a context and additional request options. +// +// See GetTemplateSummary for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) GetTemplateSummaryWithContext(ctx aws.Context, input *GetTemplateSummaryInput, opts ...request.Option) (*GetTemplateSummaryOutput, error) { + req, out := c.GetTemplateSummaryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListChangeSets = "ListChangeSets" + +// ListChangeSetsRequest generates a "aws/request.Request" representing the +// client's request for the ListChangeSets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListChangeSets for more information on using the ListChangeSets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListChangeSetsRequest method. +// req, resp := client.ListChangeSetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListChangeSets +func (c *CloudFormation) ListChangeSetsRequest(input *ListChangeSetsInput) (req *request.Request, output *ListChangeSetsOutput) { + op := &request.Operation{ + Name: opListChangeSets, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListChangeSetsInput{} + } + + output = &ListChangeSetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListChangeSets API operation for AWS CloudFormation. +// +// Returns the ID and status of each active change set for a stack. For example, +// AWS CloudFormation lists change sets that are in the CREATE_IN_PROGRESS or +// CREATE_PENDING state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListChangeSets for usage and error information. 
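// Illustrative usage sketch for GetTemplateSummary: inspect a template's declared
// parameters before creating or updating a stack. The inline template body is a trivial
// placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

const templateBody = `{"Parameters":{"TopicName":{"Type":"String","Default":"demo"}},` +
	`"Resources":{"Topic":{"Type":"AWS::SNS::Topic"}}}`

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.GetTemplateSummary(&cloudformation.GetTemplateSummaryInput{
		TemplateBody: aws.String(templateBody),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.Parameters {
		fmt.Printf("%s (%s) default=%s\n",
			aws.StringValue(p.ParameterKey), aws.StringValue(p.ParameterType), aws.StringValue(p.DefaultValue))
	}
}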
+// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListChangeSets +func (c *CloudFormation) ListChangeSets(input *ListChangeSetsInput) (*ListChangeSetsOutput, error) { + req, out := c.ListChangeSetsRequest(input) + return out, req.Send() +} + +// ListChangeSetsWithContext is the same as ListChangeSets with the addition of +// the ability to pass a context and additional request options. +// +// See ListChangeSets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListChangeSetsWithContext(ctx aws.Context, input *ListChangeSetsInput, opts ...request.Option) (*ListChangeSetsOutput, error) { + req, out := c.ListChangeSetsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListExports = "ListExports" + +// ListExportsRequest generates a "aws/request.Request" representing the +// client's request for the ListExports operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListExports for more information on using the ListExports +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListExportsRequest method. +// req, resp := client.ListExportsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListExports +func (c *CloudFormation) ListExportsRequest(input *ListExportsInput) (req *request.Request, output *ListExportsOutput) { + op := &request.Operation{ + Name: opListExports, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListExportsInput{} + } + + output = &ListExportsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListExports API operation for AWS CloudFormation. +// +// Lists all exported output values in the account and region in which you call +// this action. Use this action to see the exported output values that you can +// import into other stacks. To import values, use the Fn::ImportValue (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html) +// function. +// +// For more information, see AWS CloudFormation Export Stack Output Values +// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListExports for usage and error information. 
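// Illustrative usage sketch for ListChangeSets: show the active change sets for one stack
// together with their status. "my-stack" is a placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.ListChangeSets(&cloudformation.ListChangeSetsInput{
		StackName: aws.String("my-stack"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, cs := range out.Summaries {
		fmt.Printf("%s\t%s\t%s\n",
			aws.StringValue(cs.ChangeSetName), aws.StringValue(cs.Status), aws.StringValue(cs.ExecutionStatus))
	}
}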
+// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListExports +func (c *CloudFormation) ListExports(input *ListExportsInput) (*ListExportsOutput, error) { + req, out := c.ListExportsRequest(input) + return out, req.Send() +} + +// ListExportsWithContext is the same as ListExports with the addition of +// the ability to pass a context and additional request options. +// +// See ListExports for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListExportsWithContext(ctx aws.Context, input *ListExportsInput, opts ...request.Option) (*ListExportsOutput, error) { + req, out := c.ListExportsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListExportsPages iterates over the pages of a ListExports operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListExports method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListExports operation. +// pageNum := 0 +// err := client.ListExportsPages(params, +// func(page *ListExportsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) ListExportsPages(input *ListExportsInput, fn func(*ListExportsOutput, bool) bool) error { + return c.ListExportsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListExportsPagesWithContext same as ListExportsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListExportsPagesWithContext(ctx aws.Context, input *ListExportsInput, fn func(*ListExportsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListExportsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListExportsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListExportsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListImports = "ListImports" + +// ListImportsRequest generates a "aws/request.Request" representing the +// client's request for the ListImports operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListImports for more information on using the ListImports +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
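// Illustrative usage sketch for the ListExportsPages paginator above: walk every exported
// output value in the account/region, e.g. to see what Fn::ImportValue can reference.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	err := svc.ListExportsPages(&cloudformation.ListExportsInput{},
		func(page *cloudformation.ListExportsOutput, lastPage bool) bool {
			for _, e := range page.Exports {
				fmt.Printf("%s = %s\n", aws.StringValue(e.Name), aws.StringValue(e.Value))
			}
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
}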
+// +// +// // Example sending a request using the ListImportsRequest method. +// req, resp := client.ListImportsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListImports +func (c *CloudFormation) ListImportsRequest(input *ListImportsInput) (req *request.Request, output *ListImportsOutput) { + op := &request.Operation{ + Name: opListImports, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListImportsInput{} + } + + output = &ListImportsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListImports API operation for AWS CloudFormation. +// +// Lists all stacks that are importing an exported output value. To modify or +// remove an exported output value, first use this action to see which stacks +// are using it. To see the exported output values in your account, see ListExports. +// +// For more information about importing an exported output value, see the Fn::ImportValue +// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html) +// function. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListImports for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListImports +func (c *CloudFormation) ListImports(input *ListImportsInput) (*ListImportsOutput, error) { + req, out := c.ListImportsRequest(input) + return out, req.Send() +} + +// ListImportsWithContext is the same as ListImports with the addition of +// the ability to pass a context and additional request options. +// +// See ListImports for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListImportsWithContext(ctx aws.Context, input *ListImportsInput, opts ...request.Option) (*ListImportsOutput, error) { + req, out := c.ListImportsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListImportsPages iterates over the pages of a ListImports operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListImports method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListImports operation. 
+// pageNum := 0 +// err := client.ListImportsPages(params, +// func(page *ListImportsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) ListImportsPages(input *ListImportsInput, fn func(*ListImportsOutput, bool) bool) error { + return c.ListImportsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListImportsPagesWithContext same as ListImportsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListImportsPagesWithContext(ctx aws.Context, input *ListImportsInput, fn func(*ListImportsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListImportsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListImportsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListImportsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListStackInstances = "ListStackInstances" + +// ListStackInstancesRequest generates a "aws/request.Request" representing the +// client's request for the ListStackInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListStackInstances for more information on using the ListStackInstances +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStackInstancesRequest method. +// req, resp := client.ListStackInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackInstances +func (c *CloudFormation) ListStackInstancesRequest(input *ListStackInstancesInput) (req *request.Request, output *ListStackInstancesOutput) { + op := &request.Operation{ + Name: opListStackInstances, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListStackInstancesInput{} + } + + output = &ListStackInstancesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStackInstances API operation for AWS CloudFormation. +// +// Returns summary information about stack instances that are associated with +// the specified stack set. You can filter for stack instances that are associated +// with a specific AWS account name or region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListStackInstances for usage and error information. 
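// Illustrative usage sketch for the ListImportsPages paginator above: find which stacks
// import a given exported value before you change or delete it. The export name is a
// placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	err := svc.ListImportsPages(&cloudformation.ListImportsInput{
		ExportName: aws.String("my-export-name"),
	}, func(page *cloudformation.ListImportsOutput, lastPage bool) bool {
		for _, stackName := range page.Imports {
			fmt.Println(aws.StringValue(stackName))
		}
		return true
	})
	if err != nil {
		log.Fatal(err)
	}
}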
+// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackInstances +func (c *CloudFormation) ListStackInstances(input *ListStackInstancesInput) (*ListStackInstancesOutput, error) { + req, out := c.ListStackInstancesRequest(input) + return out, req.Send() +} + +// ListStackInstancesWithContext is the same as ListStackInstances with the addition of +// the ability to pass a context and additional request options. +// +// See ListStackInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStackInstancesWithContext(ctx aws.Context, input *ListStackInstancesInput, opts ...request.Option) (*ListStackInstancesOutput, error) { + req, out := c.ListStackInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListStackResources = "ListStackResources" + +// ListStackResourcesRequest generates a "aws/request.Request" representing the +// client's request for the ListStackResources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListStackResources for more information on using the ListStackResources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStackResourcesRequest method. +// req, resp := client.ListStackResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackResources +func (c *CloudFormation) ListStackResourcesRequest(input *ListStackResourcesInput) (req *request.Request, output *ListStackResourcesOutput) { + op := &request.Operation{ + Name: opListStackResources, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListStackResourcesInput{} + } + + output = &ListStackResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStackResources API operation for AWS CloudFormation. +// +// Returns descriptions of all resources of the specified stack. +// +// For deleted stacks, ListStackResources returns resource information for up +// to 90 days after the stack has been deleted. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListStackResources for usage and error information. 
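// Illustrative usage sketch for ListStackInstances, filtered to one region as the doc text
// above describes. The stack set name, region, and the StackInstanceRegion filter field
// are placeholder assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.ListStackInstances(&cloudformation.ListStackInstancesInput{
		StackSetName:        aws.String("my-stack-set"),
		StackInstanceRegion: aws.String("us-west-2"), // optional filter
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, si := range out.Summaries {
		fmt.Printf("%s/%s\t%s\n",
			aws.StringValue(si.Account), aws.StringValue(si.Region), aws.StringValue(si.Status))
	}
}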
+// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackResources +func (c *CloudFormation) ListStackResources(input *ListStackResourcesInput) (*ListStackResourcesOutput, error) { + req, out := c.ListStackResourcesRequest(input) + return out, req.Send() +} + +// ListStackResourcesWithContext is the same as ListStackResources with the addition of +// the ability to pass a context and additional request options. +// +// See ListStackResources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStackResourcesWithContext(ctx aws.Context, input *ListStackResourcesInput, opts ...request.Option) (*ListStackResourcesOutput, error) { + req, out := c.ListStackResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListStackResourcesPages iterates over the pages of a ListStackResources operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListStackResources method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListStackResources operation. +// pageNum := 0 +// err := client.ListStackResourcesPages(params, +// func(page *ListStackResourcesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) ListStackResourcesPages(input *ListStackResourcesInput, fn func(*ListStackResourcesOutput, bool) bool) error { + return c.ListStackResourcesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListStackResourcesPagesWithContext same as ListStackResourcesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStackResourcesPagesWithContext(ctx aws.Context, input *ListStackResourcesInput, fn func(*ListStackResourcesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListStackResourcesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListStackResourcesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListStackResourcesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListStackSetOperationResults = "ListStackSetOperationResults" + +// ListStackSetOperationResultsRequest generates a "aws/request.Request" representing the +// client's request for the ListStackSetOperationResults operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
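// Illustrative usage sketch for the ListStackResourcesPages paginator above: print every
// resource in a stack with its type and status. "my-stack" is a placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	err := svc.ListStackResourcesPages(&cloudformation.ListStackResourcesInput{
		StackName: aws.String("my-stack"),
	}, func(page *cloudformation.ListStackResourcesOutput, lastPage bool) bool {
		for _, r := range page.StackResourceSummaries {
			fmt.Printf("%-40s %-30s %s\n",
				aws.StringValue(r.LogicalResourceId), aws.StringValue(r.ResourceType), aws.StringValue(r.ResourceStatus))
		}
		return true
	})
	if err != nil {
		log.Fatal(err)
	}
}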
+// +// See ListStackSetOperationResults for more information on using the ListStackSetOperationResults +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStackSetOperationResultsRequest method. +// req, resp := client.ListStackSetOperationResultsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackSetOperationResults +func (c *CloudFormation) ListStackSetOperationResultsRequest(input *ListStackSetOperationResultsInput) (req *request.Request, output *ListStackSetOperationResultsOutput) { + op := &request.Operation{ + Name: opListStackSetOperationResults, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListStackSetOperationResultsInput{} + } + + output = &ListStackSetOperationResultsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStackSetOperationResults API operation for AWS CloudFormation. +// +// Returns summary information about the results of a stack set operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListStackSetOperationResults for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeOperationNotFoundException "OperationNotFoundException" +// The specified ID refers to an operation that doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackSetOperationResults +func (c *CloudFormation) ListStackSetOperationResults(input *ListStackSetOperationResultsInput) (*ListStackSetOperationResultsOutput, error) { + req, out := c.ListStackSetOperationResultsRequest(input) + return out, req.Send() +} + +// ListStackSetOperationResultsWithContext is the same as ListStackSetOperationResults with the addition of +// the ability to pass a context and additional request options. +// +// See ListStackSetOperationResults for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStackSetOperationResultsWithContext(ctx aws.Context, input *ListStackSetOperationResultsInput, opts ...request.Option) (*ListStackSetOperationResultsOutput, error) { + req, out := c.ListStackSetOperationResultsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListStackSetOperations = "ListStackSetOperations" + +// ListStackSetOperationsRequest generates a "aws/request.Request" representing the +// client's request for the ListStackSetOperations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See ListStackSetOperations for more information on using the ListStackSetOperations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStackSetOperationsRequest method. +// req, resp := client.ListStackSetOperationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackSetOperations +func (c *CloudFormation) ListStackSetOperationsRequest(input *ListStackSetOperationsInput) (req *request.Request, output *ListStackSetOperationsOutput) { + op := &request.Operation{ + Name: opListStackSetOperations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListStackSetOperationsInput{} + } + + output = &ListStackSetOperationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStackSetOperations API operation for AWS CloudFormation. +// +// Returns summary information about operations performed on a stack set. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListStackSetOperations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackSetOperations +func (c *CloudFormation) ListStackSetOperations(input *ListStackSetOperationsInput) (*ListStackSetOperationsOutput, error) { + req, out := c.ListStackSetOperationsRequest(input) + return out, req.Send() +} + +// ListStackSetOperationsWithContext is the same as ListStackSetOperations with the addition of +// the ability to pass a context and additional request options. +// +// See ListStackSetOperations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStackSetOperationsWithContext(ctx aws.Context, input *ListStackSetOperationsInput, opts ...request.Option) (*ListStackSetOperationsOutput, error) { + req, out := c.ListStackSetOperationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListStackSets = "ListStackSets" + +// ListStackSetsRequest generates a "aws/request.Request" representing the +// client's request for the ListStackSets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListStackSets for more information on using the ListStackSets +// API call, and error handling. 
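// Illustrative usage sketch combining ListStackSetOperations and
// ListStackSetOperationResults: list a stack set's operations, then show the
// per-account/per-region results of the first operation returned. The stack set name is a
// placeholder assumption.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))
	stackSet := aws.String("my-stack-set")

	ops, err := svc.ListStackSetOperations(&cloudformation.ListStackSetOperationsInput{StackSetName: stackSet})
	if err != nil {
		log.Fatal(err)
	}
	if len(ops.Summaries) == 0 {
		log.Fatal("no operations recorded for this stack set")
	}
	op := ops.Summaries[0]
	fmt.Printf("operation %s (%s): %s\n",
		aws.StringValue(op.OperationId), aws.StringValue(op.Action), aws.StringValue(op.Status))

	res, err := svc.ListStackSetOperationResults(&cloudformation.ListStackSetOperationResultsInput{
		StackSetName: stackSet,
		OperationId:  op.OperationId,
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range res.Summaries {
		fmt.Printf("  %s/%s -> %s\n", aws.StringValue(r.Account), aws.StringValue(r.Region), aws.StringValue(r.Status))
	}
}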
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStackSetsRequest method. +// req, resp := client.ListStackSetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackSets +func (c *CloudFormation) ListStackSetsRequest(input *ListStackSetsInput) (req *request.Request, output *ListStackSetsOutput) { + op := &request.Operation{ + Name: opListStackSets, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListStackSetsInput{} + } + + output = &ListStackSetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStackSets API operation for AWS CloudFormation. +// +// Returns summary information about stack sets that are associated with the +// user. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListStackSets for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStackSets +func (c *CloudFormation) ListStackSets(input *ListStackSetsInput) (*ListStackSetsOutput, error) { + req, out := c.ListStackSetsRequest(input) + return out, req.Send() +} + +// ListStackSetsWithContext is the same as ListStackSets with the addition of +// the ability to pass a context and additional request options. +// +// See ListStackSets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStackSetsWithContext(ctx aws.Context, input *ListStackSetsInput, opts ...request.Option) (*ListStackSetsOutput, error) { + req, out := c.ListStackSetsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListStacks = "ListStacks" + +// ListStacksRequest generates a "aws/request.Request" representing the +// client's request for the ListStacks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListStacks for more information on using the ListStacks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStacksRequest method. 
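// Illustrative usage sketch for ListStackSets, filtered to active stack sets; the "ACTIVE"
// status filter is optional and shown here as an assumption about what you want to list.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.ListStackSets(&cloudformation.ListStackSetsInput{
		Status: aws.String("ACTIVE"), // optional; omit to list deleted stack sets too
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, ss := range out.Summaries {
		fmt.Printf("%s\t%s\n", aws.StringValue(ss.StackSetName), aws.StringValue(ss.Status))
	}
}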
+// req, resp := client.ListStacksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStacks +func (c *CloudFormation) ListStacksRequest(input *ListStacksInput) (req *request.Request, output *ListStacksOutput) { + op := &request.Operation{ + Name: opListStacks, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListStacksInput{} + } + + output = &ListStacksOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStacks API operation for AWS CloudFormation. +// +// Returns the summary information for stacks whose status matches the specified +// StackStatusFilter. Summary information for stacks that have been deleted +// is kept for 90 days after the stack is deleted. If no StackStatusFilter is +// specified, summary information for all stacks is returned (including existing +// stacks and stacks that have been deleted). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ListStacks for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ListStacks +func (c *CloudFormation) ListStacks(input *ListStacksInput) (*ListStacksOutput, error) { + req, out := c.ListStacksRequest(input) + return out, req.Send() +} + +// ListStacksWithContext is the same as ListStacks with the addition of +// the ability to pass a context and additional request options. +// +// See ListStacks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStacksWithContext(ctx aws.Context, input *ListStacksInput, opts ...request.Option) (*ListStacksOutput, error) { + req, out := c.ListStacksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListStacksPages iterates over the pages of a ListStacks operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListStacks method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListStacks operation. +// pageNum := 0 +// err := client.ListStacksPages(params, +// func(page *ListStacksOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) ListStacksPages(input *ListStacksInput, fn func(*ListStacksOutput, bool) bool) error { + return c.ListStacksPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListStacksPagesWithContext same as ListStacksPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) ListStacksPagesWithContext(ctx aws.Context, input *ListStacksInput, fn func(*ListStacksOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListStacksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListStacksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListStacksOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opSetStackPolicy = "SetStackPolicy" + +// SetStackPolicyRequest generates a "aws/request.Request" representing the +// client's request for the SetStackPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetStackPolicy for more information on using the SetStackPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetStackPolicyRequest method. +// req, resp := client.SetStackPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/SetStackPolicy +func (c *CloudFormation) SetStackPolicyRequest(input *SetStackPolicyInput) (req *request.Request, output *SetStackPolicyOutput) { + op := &request.Operation{ + Name: opSetStackPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SetStackPolicyInput{} + } + + output = &SetStackPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetStackPolicy API operation for AWS CloudFormation. +// +// Sets a stack policy for a specified stack. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation SetStackPolicy for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/SetStackPolicy +func (c *CloudFormation) SetStackPolicy(input *SetStackPolicyInput) (*SetStackPolicyOutput, error) { + req, out := c.SetStackPolicyRequest(input) + return out, req.Send() +} + +// SetStackPolicyWithContext is the same as SetStackPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See SetStackPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
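// Illustrative usage sketch for the ListStacksPages paginator above: list stack summaries
// restricted by StackStatusFilter. The two status strings are standard stack status values
// chosen as placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	input := &cloudformation.ListStacksInput{
		StackStatusFilter: aws.StringSlice([]string{"CREATE_COMPLETE", "UPDATE_COMPLETE"}),
	}
	err := svc.ListStacksPages(input, func(page *cloudformation.ListStacksOutput, lastPage bool) bool {
		for _, s := range page.StackSummaries {
			fmt.Printf("%s\t%s\n", aws.StringValue(s.StackName), aws.StringValue(s.StackStatus))
		}
		return true
	})
	if err != nil {
		log.Fatal(err)
	}
}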
+func (c *CloudFormation) SetStackPolicyWithContext(ctx aws.Context, input *SetStackPolicyInput, opts ...request.Option) (*SetStackPolicyOutput, error) { + req, out := c.SetStackPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSignalResource = "SignalResource" + +// SignalResourceRequest generates a "aws/request.Request" representing the +// client's request for the SignalResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SignalResource for more information on using the SignalResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SignalResourceRequest method. +// req, resp := client.SignalResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/SignalResource +func (c *CloudFormation) SignalResourceRequest(input *SignalResourceInput) (req *request.Request, output *SignalResourceOutput) { + op := &request.Operation{ + Name: opSignalResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SignalResourceInput{} + } + + output = &SignalResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SignalResource API operation for AWS CloudFormation. +// +// Sends a signal to the specified resource with a success or failure status. +// You can use the SignalResource API in conjunction with a creation policy +// or update policy. AWS CloudFormation doesn't proceed with a stack creation +// or update until resources receive the required number of signals or the timeout +// period is exceeded. The SignalResource API is useful in cases where you want +// to send signals from anywhere other than an Amazon EC2 instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation SignalResource for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/SignalResource +func (c *CloudFormation) SignalResource(input *SignalResourceInput) (*SignalResourceOutput, error) { + req, out := c.SignalResourceRequest(input) + return out, req.Send() +} + +// SignalResourceWithContext is the same as SignalResource with the addition of +// the ability to pass a context and additional request options. +// +// See SignalResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
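// Illustrative usage sketch for SetStackPolicy: attach a policy that allows all updates
// except to one protected resource. The stack name, logical resource ID, and policy body
// are placeholder assumptions.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

const stackPolicy = `{
  "Statement": [
    {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
    {"Effect": "Deny",  "Action": "Update:*", "Principal": "*", "Resource": "LogicalResourceId/ProductionDatabase"}
  ]
}`

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	_, err := svc.SetStackPolicy(&cloudformation.SetStackPolicyInput{
		StackName:       aws.String("my-stack"),
		StackPolicyBody: aws.String(stackPolicy),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("stack policy applied")
}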
+func (c *CloudFormation) SignalResourceWithContext(ctx aws.Context, input *SignalResourceInput, opts ...request.Option) (*SignalResourceOutput, error) { + req, out := c.SignalResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopStackSetOperation = "StopStackSetOperation" + +// StopStackSetOperationRequest generates a "aws/request.Request" representing the +// client's request for the StopStackSetOperation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopStackSetOperation for more information on using the StopStackSetOperation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopStackSetOperationRequest method. +// req, resp := client.StopStackSetOperationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/StopStackSetOperation +func (c *CloudFormation) StopStackSetOperationRequest(input *StopStackSetOperationInput) (req *request.Request, output *StopStackSetOperationOutput) { + op := &request.Operation{ + Name: opStopStackSetOperation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopStackSetOperationInput{} + } + + output = &StopStackSetOperationOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopStackSetOperation API operation for AWS CloudFormation. +// +// Stops an in-progress operation on a stack set and its associated stack instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation StopStackSetOperation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeOperationNotFoundException "OperationNotFoundException" +// The specified ID refers to an operation that doesn't exist. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The specified operation isn't valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/StopStackSetOperation +func (c *CloudFormation) StopStackSetOperation(input *StopStackSetOperationInput) (*StopStackSetOperationOutput, error) { + req, out := c.StopStackSetOperationRequest(input) + return out, req.Send() +} + +// StopStackSetOperationWithContext is the same as StopStackSetOperation with the addition of +// the ability to pass a context and additional request options. +// +// See StopStackSetOperation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
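// Illustrative usage sketch for SignalResource: send a success signal for a resource
// governed by a creation or update policy, from somewhere other than an EC2 instance.
// The stack name, logical resource ID, and unique ID are placeholder assumptions.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	_, err := svc.SignalResource(&cloudformation.SignalResourceInput{
		StackName:         aws.String("my-stack"),
		LogicalResourceId: aws.String("AppAutoScalingGroup"),
		UniqueId:          aws.String("worker-1"), // must be unique per signal sender
		Status:            aws.String(cloudformation.ResourceSignalStatusSuccess),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("signal sent")
}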
+func (c *CloudFormation) StopStackSetOperationWithContext(ctx aws.Context, input *StopStackSetOperationInput, opts ...request.Option) (*StopStackSetOperationOutput, error) { + req, out := c.StopStackSetOperationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateStack = "UpdateStack" + +// UpdateStackRequest generates a "aws/request.Request" representing the +// client's request for the UpdateStack operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateStack for more information on using the UpdateStack +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateStackRequest method. +// req, resp := client.UpdateStackRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStack +func (c *CloudFormation) UpdateStackRequest(input *UpdateStackInput) (req *request.Request, output *UpdateStackOutput) { + op := &request.Operation{ + Name: opUpdateStack, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateStackInput{} + } + + output = &UpdateStackOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateStack API operation for AWS CloudFormation. +// +// Updates a stack as specified in the template. After the call completes successfully, +// the stack update starts. You can check the status of the stack via the DescribeStacks +// action. +// +// To get a copy of the template for an existing stack, you can use the GetTemplate +// action. +// +// For more information about creating an update template, updating a stack, +// and monitoring the progress of the update, see Updating a Stack (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation UpdateStack for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInsufficientCapabilitiesException "InsufficientCapabilitiesException" +// The template contains resources with capabilities that weren't specified +// in the Capabilities parameter. +// +// * ErrCodeTokenAlreadyExistsException "TokenAlreadyExistsException" +// A client request token already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStack +func (c *CloudFormation) UpdateStack(input *UpdateStackInput) (*UpdateStackOutput, error) { + req, out := c.UpdateStackRequest(input) + return out, req.Send() +} + +// UpdateStackWithContext is the same as UpdateStack with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateStack for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) UpdateStackWithContext(ctx aws.Context, input *UpdateStackInput, opts ...request.Option) (*UpdateStackOutput, error) { + req, out := c.UpdateStackRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateStackInstances = "UpdateStackInstances" + +// UpdateStackInstancesRequest generates a "aws/request.Request" representing the +// client's request for the UpdateStackInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateStackInstances for more information on using the UpdateStackInstances +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateStackInstancesRequest method. +// req, resp := client.UpdateStackInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStackInstances +func (c *CloudFormation) UpdateStackInstancesRequest(input *UpdateStackInstancesInput) (req *request.Request, output *UpdateStackInstancesOutput) { + op := &request.Operation{ + Name: opUpdateStackInstances, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateStackInstancesInput{} + } + + output = &UpdateStackInstancesOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateStackInstances API operation for AWS CloudFormation. +// +// Updates the parameter values for stack instances for the specified accounts, +// within the specified regions. A stack instance refers to a stack in a specific +// account and region. +// +// You can only update stack instances in regions and accounts where they already +// exist; to create additional stack instances, use CreateStackInstances (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStackInstances.html). +// +// During stack set updates, any parameters overridden for a stack instance +// are not updated, but retain their overridden value. +// +// You can only update the parameter values that are specified in the stack +// set; to add or delete a parameter itself, use UpdateStackSet (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStackSet.html) +// to update the stack set template. If you add a parameter to a template, before +// you can override the parameter value specified in the stack set you must +// first use UpdateStackSet (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStackSet.html) +// to update all stack instances with the updated template and parameter value +// specified in the stack set. Once a stack instance has been updated with the +// new parameter, you can then override the parameter value using UpdateStackInstances. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation UpdateStackInstances for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeStackInstanceNotFoundException "StackInstanceNotFoundException" +// The specified stack instance doesn't exist. +// +// * ErrCodeOperationInProgressException "OperationInProgressException" +// Another operation is currently in progress for this stack set. Only one operation +// can be performed for a stack set at a given time. +// +// * ErrCodeOperationIdAlreadyExistsException "OperationIdAlreadyExistsException" +// The specified operation ID already exists. +// +// * ErrCodeStaleRequestException "StaleRequestException" +// Another operation has been performed on this stack set since the specified +// operation was performed. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The specified operation isn't valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStackInstances +func (c *CloudFormation) UpdateStackInstances(input *UpdateStackInstancesInput) (*UpdateStackInstancesOutput, error) { + req, out := c.UpdateStackInstancesRequest(input) + return out, req.Send() +} + +// UpdateStackInstancesWithContext is the same as UpdateStackInstances with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateStackInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) UpdateStackInstancesWithContext(ctx aws.Context, input *UpdateStackInstancesInput, opts ...request.Option) (*UpdateStackInstancesOutput, error) { + req, out := c.UpdateStackInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateStackSet = "UpdateStackSet" + +// UpdateStackSetRequest generates a "aws/request.Request" representing the +// client's request for the UpdateStackSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateStackSet for more information on using the UpdateStackSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateStackSetRequest method. 
+// req, resp := client.UpdateStackSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStackSet +func (c *CloudFormation) UpdateStackSetRequest(input *UpdateStackSetInput) (req *request.Request, output *UpdateStackSetOutput) { + op := &request.Operation{ + Name: opUpdateStackSet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateStackSetInput{} + } + + output = &UpdateStackSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateStackSet API operation for AWS CloudFormation. +// +// Updates the stack set, and associated stack instances in the specified accounts +// and regions. +// +// Even if the stack set operation created by updating the stack set fails (completely +// or partially, below or above a specified failure tolerance), the stack set +// is updated with your changes. Subsequent CreateStackInstances calls on the +// specified stack set use the updated stack set. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation UpdateStackSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeStackSetNotFoundException "StackSetNotFoundException" +// The specified stack set doesn't exist. +// +// * ErrCodeOperationInProgressException "OperationInProgressException" +// Another operation is currently in progress for this stack set. Only one operation +// can be performed for a stack set at a given time. +// +// * ErrCodeOperationIdAlreadyExistsException "OperationIdAlreadyExistsException" +// The specified operation ID already exists. +// +// * ErrCodeStaleRequestException "StaleRequestException" +// Another operation has been performed on this stack set since the specified +// operation was performed. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The specified operation isn't valid. +// +// * ErrCodeStackInstanceNotFoundException "StackInstanceNotFoundException" +// The specified stack instance doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStackSet +func (c *CloudFormation) UpdateStackSet(input *UpdateStackSetInput) (*UpdateStackSetOutput, error) { + req, out := c.UpdateStackSetRequest(input) + return out, req.Send() +} + +// UpdateStackSetWithContext is the same as UpdateStackSet with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateStackSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) UpdateStackSetWithContext(ctx aws.Context, input *UpdateStackSetInput, opts ...request.Option) (*UpdateStackSetOutput, error) { + req, out := c.UpdateStackSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
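+	// Send executes the request; out is only valid if the returned error is nil.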
+ return out, req.Send() +} + +const opUpdateTerminationProtection = "UpdateTerminationProtection" + +// UpdateTerminationProtectionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateTerminationProtection operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateTerminationProtection for more information on using the UpdateTerminationProtection +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateTerminationProtectionRequest method. +// req, resp := client.UpdateTerminationProtectionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateTerminationProtection +func (c *CloudFormation) UpdateTerminationProtectionRequest(input *UpdateTerminationProtectionInput) (req *request.Request, output *UpdateTerminationProtectionOutput) { + op := &request.Operation{ + Name: opUpdateTerminationProtection, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateTerminationProtectionInput{} + } + + output = &UpdateTerminationProtectionOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateTerminationProtection API operation for AWS CloudFormation. +// +// Updates termination protection for the specified stack. If a user attempts +// to delete a stack with termination protection enabled, the operation fails +// and the stack remains unchanged. For more information, see Protecting a Stack +// From Being Deleted (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) +// in the AWS CloudFormation User Guide. +// +// For nested stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html), +// termination protection is set on the root stack and cannot be changed directly +// on the nested stack. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation UpdateTerminationProtection for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateTerminationProtection +func (c *CloudFormation) UpdateTerminationProtection(input *UpdateTerminationProtectionInput) (*UpdateTerminationProtectionOutput, error) { + req, out := c.UpdateTerminationProtectionRequest(input) + return out, req.Send() +} + +// UpdateTerminationProtectionWithContext is the same as UpdateTerminationProtection with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateTerminationProtection for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) UpdateTerminationProtectionWithContext(ctx aws.Context, input *UpdateTerminationProtectionInput, opts ...request.Option) (*UpdateTerminationProtectionOutput, error) { + req, out := c.UpdateTerminationProtectionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opValidateTemplate = "ValidateTemplate" + +// ValidateTemplateRequest generates a "aws/request.Request" representing the +// client's request for the ValidateTemplate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ValidateTemplate for more information on using the ValidateTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ValidateTemplateRequest method. +// req, resp := client.ValidateTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ValidateTemplate +func (c *CloudFormation) ValidateTemplateRequest(input *ValidateTemplateInput) (req *request.Request, output *ValidateTemplateOutput) { + op := &request.Operation{ + Name: opValidateTemplate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ValidateTemplateInput{} + } + + output = &ValidateTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// ValidateTemplate API operation for AWS CloudFormation. +// +// Validates a specified template. AWS CloudFormation first checks if the template +// is valid JSON. If it isn't, AWS CloudFormation checks if the template is +// valid YAML. If both these checks fail, AWS CloudFormation returns a template +// validation error. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation ValidateTemplate for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/ValidateTemplate +func (c *CloudFormation) ValidateTemplate(input *ValidateTemplateInput) (*ValidateTemplateOutput, error) { + req, out := c.ValidateTemplateRequest(input) + return out, req.Send() +} + +// ValidateTemplateWithContext is the same as ValidateTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See ValidateTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
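+//
+// A minimal usage sketch (illustrative only), assuming an initialized
+// *CloudFormation client (svc), a valid aws.Context (ctx), and a hypothetical
+// template URL; a TemplateBody string could be passed instead of TemplateURL:
+//
+//    out, err := svc.ValidateTemplateWithContext(ctx, &ValidateTemplateInput{
+//        TemplateURL: aws.String("https://s3.amazonaws.com/my-bucket/template.yaml"), // hypothetical
+//    })
+//    if err != nil {
+//        fmt.Println(err) // validation error or other API/SDK error
+//        return
+//    }
+//    fmt.Println(out)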
+func (c *CloudFormation) ValidateTemplateWithContext(ctx aws.Context, input *ValidateTemplateInput, opts ...request.Option) (*ValidateTemplateOutput, error) {
+	req, out := c.ValidateTemplateRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+// Structure that contains the results of the account gate function which AWS
+// CloudFormation invokes, if present, before proceeding with a stack set operation
+// in an account and region.
+//
+// For each account and region, AWS CloudFormation lets you specify a Lambda
+// function that encapsulates any requirements that must be met before CloudFormation
+// can proceed with a stack set operation in that account and region. CloudFormation
+// invokes the function each time a stack set operation is requested for that
+// account and region; if the function returns FAILED, CloudFormation cancels
+// the operation in that account and region, and sets the stack set operation
+// result status for that account and region to FAILED.
+//
+// For more information, see Configuring a target account gate (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-account-gating.html).
+type AccountGateResult struct {
+	_ struct{} `type:"structure"`
+
+	// The status of the account gate function.
+	//
+	// * SUCCEEDED: The account gate function has determined that the account
+	// and region passes any requirements for a stack set operation to occur.
+	// AWS CloudFormation proceeds with the stack operation in that account and
+	// region.
+	//
+	// * FAILED: The account gate function has determined that the account and
+	// region does not meet the requirements for a stack set operation to occur.
+	// AWS CloudFormation cancels the stack set operation in that account and
+	// region, and sets the stack set operation result status for that account
+	// and region to FAILED.
+	//
+	// * SKIPPED: AWS CloudFormation has skipped calling the account gate function
+	// for this account and region, for one of the following reasons:
+	//
+	// An account gate function has not been specified for the account and region.
+	// AWS CloudFormation proceeds with the stack set operation in this account
+	// and region.
+	//
+	// The AWSCloudFormationStackSetExecutionRole of the stack set administration
+	// account lacks permissions to invoke the function. AWS CloudFormation proceeds
+	// with the stack set operation in this account and region.
+	//
+	// Either no action is necessary, or no action is possible, on the stack. AWS
+	// CloudFormation skips the stack set operation in this account and region.
+	Status *string `type:"string" enum:"AccountGateStatus"`
+
+	// The reason for the account gate status assigned to this account and region
+	// for the stack set operation.
+	StatusReason *string `type:"string"`
+}
+
+// String returns the string representation
+func (s AccountGateResult) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AccountGateResult) GoString() string {
+	return s.String()
+}
+
+// SetStatus sets the Status field's value.
+func (s *AccountGateResult) SetStatus(v string) *AccountGateResult {
+	s.Status = &v
+	return s
+}
+
+// SetStatusReason sets the StatusReason field's value.
+func (s *AccountGateResult) SetStatusReason(v string) *AccountGateResult {
+	s.StatusReason = &v
+	return s
+}
+
+// The AccountLimit data type.
+type AccountLimit struct {
+	_ struct{} `type:"structure"`
+
+	// The name of the account limit.
Currently, the only account limit is StackLimit. + Name *string `type:"string"` + + // The value that is associated with the account limit name. + Value *int64 `type:"integer"` +} + +// String returns the string representation +func (s AccountLimit) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccountLimit) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *AccountLimit) SetName(v string) *AccountLimit { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. +func (s *AccountLimit) SetValue(v int64) *AccountLimit { + s.Value = &v + return s +} + +// The input for the CancelUpdateStack action. +type CancelUpdateStackInput struct { + _ struct{} `type:"structure"` + + // A unique identifier for this CancelUpdateStack request. Specify this token + // if you plan to retry requests so that AWS CloudFormation knows that you're + // not attempting to cancel an update on a stack with the same name. You might + // retry CancelUpdateStack requests to ensure that AWS CloudFormation successfully + // received them. + ClientRequestToken *string `min:"1" type:"string"` + + // The name or the unique stack ID that is associated with the stack. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelUpdateStackInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelUpdateStackInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelUpdateStackInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelUpdateStackInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *CancelUpdateStackInput) SetClientRequestToken(v string) *CancelUpdateStackInput { + s.ClientRequestToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *CancelUpdateStackInput) SetStackName(v string) *CancelUpdateStackInput { + s.StackName = &v + return s +} + +type CancelUpdateStackOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CancelUpdateStackOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelUpdateStackOutput) GoString() string { + return s.String() +} + +// The Change structure describes the changes AWS CloudFormation will perform +// if you execute the change set. +type Change struct { + _ struct{} `type:"structure"` + + // A ResourceChange structure that describes the resource and action that AWS + // CloudFormation will perform. + ResourceChange *ResourceChange `type:"structure"` + + // The type of entity that AWS CloudFormation changes. Currently, the only entity + // type is Resource. 
+ Type *string `type:"string" enum:"ChangeType"` +} + +// String returns the string representation +func (s Change) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Change) GoString() string { + return s.String() +} + +// SetResourceChange sets the ResourceChange field's value. +func (s *Change) SetResourceChange(v *ResourceChange) *Change { + s.ResourceChange = v + return s +} + +// SetType sets the Type field's value. +func (s *Change) SetType(v string) *Change { + s.Type = &v + return s +} + +// The ChangeSetSummary structure describes a change set, its status, and the +// stack with which it's associated. +type ChangeSetSummary struct { + _ struct{} `type:"structure"` + + // The ID of the change set. + ChangeSetId *string `min:"1" type:"string"` + + // The name of the change set. + ChangeSetName *string `min:"1" type:"string"` + + // The start time when the change set was created, in UTC. + CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // Descriptive information about the change set. + Description *string `min:"1" type:"string"` + + // If the change set execution status is AVAILABLE, you can execute the change + // set. If you can’t execute the change set, the status indicates why. For example, + // a change set might be in an UNAVAILABLE state because AWS CloudFormation + // is still creating it or in an OBSOLETE state because the stack was already + // updated. + ExecutionStatus *string `type:"string" enum:"ExecutionStatus"` + + // The ID of the stack with which the change set is associated. + StackId *string `type:"string"` + + // The name of the stack with which the change set is associated. + StackName *string `type:"string"` + + // The state of the change set, such as CREATE_IN_PROGRESS, CREATE_COMPLETE, + // or FAILED. + Status *string `type:"string" enum:"ChangeSetStatus"` + + // A description of the change set's status. For example, if your change set + // is in the FAILED state, AWS CloudFormation shows the error message. + StatusReason *string `type:"string"` +} + +// String returns the string representation +func (s ChangeSetSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChangeSetSummary) GoString() string { + return s.String() +} + +// SetChangeSetId sets the ChangeSetId field's value. +func (s *ChangeSetSummary) SetChangeSetId(v string) *ChangeSetSummary { + s.ChangeSetId = &v + return s +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *ChangeSetSummary) SetChangeSetName(v string) *ChangeSetSummary { + s.ChangeSetName = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *ChangeSetSummary) SetCreationTime(v time.Time) *ChangeSetSummary { + s.CreationTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ChangeSetSummary) SetDescription(v string) *ChangeSetSummary { + s.Description = &v + return s +} + +// SetExecutionStatus sets the ExecutionStatus field's value. +func (s *ChangeSetSummary) SetExecutionStatus(v string) *ChangeSetSummary { + s.ExecutionStatus = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *ChangeSetSummary) SetStackId(v string) *ChangeSetSummary { + s.StackId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *ChangeSetSummary) SetStackName(v string) *ChangeSetSummary { + s.StackName = &v + return s +} + +// SetStatus sets the Status field's value. 
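+//
+// A small sketch (illustrative only) of inspecting a ChangeSetSummary value,
+// for example one obtained by listing the change sets of a stack; summary is
+// a hypothetical *ChangeSetSummary:
+//
+//    if aws.StringValue(summary.ExecutionStatus) == "AVAILABLE" {
+//        fmt.Printf("change set %s can be executed\n", aws.StringValue(summary.ChangeSetName))
+//    } else {
+//        fmt.Printf("change set %s is not executable: %s\n",
+//            aws.StringValue(summary.ChangeSetName), aws.StringValue(summary.StatusReason))
+//    }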
+func (s *ChangeSetSummary) SetStatus(v string) *ChangeSetSummary { + s.Status = &v + return s +} + +// SetStatusReason sets the StatusReason field's value. +func (s *ChangeSetSummary) SetStatusReason(v string) *ChangeSetSummary { + s.StatusReason = &v + return s +} + +// The input for the ContinueUpdateRollback action. +type ContinueUpdateRollbackInput struct { + _ struct{} `type:"structure"` + + // A unique identifier for this ContinueUpdateRollback request. Specify this + // token if you plan to retry requests so that AWS CloudFormation knows that + // you're not attempting to continue the rollback to a stack with the same name. + // You might retry ContinueUpdateRollback requests to ensure that AWS CloudFormation + // successfully received them. + ClientRequestToken *string `min:"1" type:"string"` + + // A list of the logical IDs of the resources that AWS CloudFormation skips + // during the continue update rollback operation. You can specify only resources + // that are in the UPDATE_FAILED state because a rollback failed. You can't + // specify resources that are in the UPDATE_FAILED state for other reasons, + // for example, because an update was cancelled. To check why a resource update + // failed, use the DescribeStackResources action, and view the resource status + // reason. + // + // Specify this property to skip rolling back resources that AWS CloudFormation + // can't successfully roll back. We recommend that you troubleshoot (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-update-rollback-failed) + // resources before skipping them. AWS CloudFormation sets the status of the + // specified resources to UPDATE_COMPLETE and continues to roll back the stack. + // After the rollback is complete, the state of the skipped resources will be + // inconsistent with the state of the resources in the stack template. Before + // performing another stack update, you must update the stack or resources to + // be consistent with each other. If you don't, subsequent stack updates might + // fail, and the stack will become unrecoverable. + // + // Specify the minimum number of resources required to successfully roll back + // your stack. For example, a failed resource update might cause dependent resources + // to fail. In this case, it might not be necessary to skip the dependent resources. + // + // To skip resources that are part of nested stacks, use the following format: + // NestedStackName.ResourceLogicalID. If you want to specify the logical ID + // of a stack resource (Type: AWS::CloudFormation::Stack) in the ResourcesToSkip + // list, then its corresponding embedded stack must be in one of the following + // states: DELETE_IN_PROGRESS, DELETE_COMPLETE, or DELETE_FAILED. + // + // Don't confuse a child stack's name with its corresponding logical ID defined + // in the parent stack. For an example of a continue update rollback operation + // with nested stacks, see Using ResourcesToSkip to recover a nested stacks + // hierarchy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html#nested-stacks). + ResourcesToSkip []*string `type:"list"` + + // The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) + // role that AWS CloudFormation assumes to roll back the stack. AWS CloudFormation + // uses the role's credentials to make calls on your behalf. AWS CloudFormation + // always uses this role for all future operations on the stack. 
As long as + // users have permission to operate on the stack, AWS CloudFormation uses this + // role even if the users don't have permission to pass it. Ensure that the + // role grants least privilege. + // + // If you don't specify a value, AWS CloudFormation uses the role that was previously + // associated with the stack. If no role is available, AWS CloudFormation uses + // a temporary session that is generated from your user credentials. + RoleARN *string `min:"20" type:"string"` + + // The name or the unique ID of the stack that you want to continue rolling + // back. + // + // Don't specify the name of a nested stack (a stack that was created by using + // the AWS::CloudFormation::Stack resource). Instead, use this operation on + // the parent stack (the stack that contains the AWS::CloudFormation::Stack + // resource). + // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ContinueUpdateRollbackInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContinueUpdateRollbackInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ContinueUpdateRollbackInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContinueUpdateRollbackInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.RoleARN != nil && len(*s.RoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 20)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *ContinueUpdateRollbackInput) SetClientRequestToken(v string) *ContinueUpdateRollbackInput { + s.ClientRequestToken = &v + return s +} + +// SetResourcesToSkip sets the ResourcesToSkip field's value. +func (s *ContinueUpdateRollbackInput) SetResourcesToSkip(v []*string) *ContinueUpdateRollbackInput { + s.ResourcesToSkip = v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *ContinueUpdateRollbackInput) SetRoleARN(v string) *ContinueUpdateRollbackInput { + s.RoleARN = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *ContinueUpdateRollbackInput) SetStackName(v string) *ContinueUpdateRollbackInput { + s.StackName = &v + return s +} + +// The output for a ContinueUpdateRollback action. +type ContinueUpdateRollbackOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ContinueUpdateRollbackOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContinueUpdateRollbackOutput) GoString() string { + return s.String() +} + +// The input for the CreateChangeSet action. +type CreateChangeSetInput struct { + _ struct{} `type:"structure"` + + // A list of values that you must specify before AWS CloudFormation can update + // certain stacks. Some stack templates might include resources that can affect + // permissions in your AWS account, for example, by creating new AWS Identity + // and Access Management (IAM) users. 
For those stacks, you must explicitly + // acknowledge their capabilities by specifying this parameter. + // + // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following + // resources require you to specify this parameter: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html), + // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html), + // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html), + // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html), + // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html), + // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html), + // and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html). + // If your stack template contains these resources, we recommend that you review + // all permissions associated with them and edit their permissions if necessary. + // + // If you have IAM resources, you can specify either capability. If you have + // IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If + // you don't specify this parameter, this action returns an InsufficientCapabilities + // error. + // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). + Capabilities []*string `type:"list"` + + // The name of the change set. The name must be unique among all change sets + // that are associated with the specified stack. + // + // A change set name can contain only alphanumeric, case sensitive characters + // and hyphens. It must start with an alphabetic character and cannot exceed + // 128 characters. + // + // ChangeSetName is a required field + ChangeSetName *string `min:"1" type:"string" required:"true"` + + // The type of change set operation. To create a change set for a new stack, + // specify CREATE. To create a change set for an existing stack, specify UPDATE. + // + // If you create a change set for a new stack, AWS Cloudformation creates a + // stack with a unique stack ID, but no template or resources. The stack will + // be in the REVIEW_IN_PROGRESS (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-describing-stacks.html#d0e11995) + // state until you execute the change set. + // + // By default, AWS CloudFormation specifies UPDATE. You can't use the UPDATE + // type to create a change set for a new stack or the CREATE type to create + // a change set for an existing stack. + ChangeSetType *string `type:"string" enum:"ChangeSetType"` + + // A unique identifier for this CreateChangeSet request. Specify this token + // if you plan to retry requests so that AWS CloudFormation knows that you're + // not attempting to create another change set with the same name. You might + // retry CreateChangeSet requests to ensure that AWS CloudFormation successfully + // received them. + ClientToken *string `min:"1" type:"string"` + + // A description to help you identify this change set. 
+ Description *string `min:"1" type:"string"` + + // The Amazon Resource Names (ARNs) of Amazon Simple Notification Service (Amazon + // SNS) topics that AWS CloudFormation associates with the stack. To remove + // all associated notification topics, specify an empty list. + NotificationARNs []*string `type:"list"` + + // A list of Parameter structures that specify input parameters for the change + // set. For more information, see the Parameter (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) + // data type. + Parameters []*Parameter `type:"list"` + + // The template resource types that you have permissions to work with if you + // execute this change set, such as AWS::EC2::Instance, AWS::EC2::*, or Custom::MyCustomInstance. + // + // If the list of resource types doesn't include a resource type that you're + // updating, the stack update fails. By default, AWS CloudFormation grants permissions + // to all resource types. AWS Identity and Access Management (IAM) uses this + // parameter for condition keys in IAM policies for AWS CloudFormation. For + // more information, see Controlling Access with AWS Identity and Access Management + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) + // in the AWS CloudFormation User Guide. + ResourceTypes []*string `type:"list"` + + // The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) + // role that AWS CloudFormation assumes when executing the change set. AWS CloudFormation + // uses the role's credentials to make calls on your behalf. AWS CloudFormation + // uses this role for all future operations on the stack. As long as users have + // permission to operate on the stack, AWS CloudFormation uses this role even + // if the users don't have permission to pass it. Ensure that the role grants + // least privilege. + // + // If you don't specify a value, AWS CloudFormation uses the role that was previously + // associated with the stack. If no role is available, AWS CloudFormation uses + // a temporary session that is generated from your user credentials. + RoleARN *string `min:"20" type:"string"` + + // The rollback triggers for AWS CloudFormation to monitor during stack creation + // and updating operations, and for the specified monitoring period afterwards. + RollbackConfiguration *RollbackConfiguration `type:"structure"` + + // The name or the unique ID of the stack for which you are creating a change + // set. AWS CloudFormation generates the change set by comparing this stack's + // information with the information that you submit, such as a modified template + // or different parameter input values. + // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` + + // Key-value pairs to associate with this stack. AWS CloudFormation also propagates + // these tags to resources in the stack. You can specify a maximum of 50 tags. + Tags []*Tag `type:"list"` + + // A structure that contains the body of the revised template, with a minimum + // length of 1 byte and a maximum length of 51,200 bytes. AWS CloudFormation + // generates the change set by comparing this template with the template of + // the stack that you specified. + // + // Conditional: You must specify only TemplateBody or TemplateURL. + TemplateBody *string `min:"1" type:"string"` + + // The location of the file that contains the revised template. The URL must + // point to a template (max size: 460,800 bytes) that is located in an S3 bucket. 
+ // AWS CloudFormation generates the change set by comparing this template with + // the stack that you specified. + // + // Conditional: You must specify only TemplateBody or TemplateURL. + TemplateURL *string `min:"1" type:"string"` + + // Whether to reuse the template that is associated with the stack to create + // the change set. + UsePreviousTemplate *bool `type:"boolean"` +} + +// String returns the string representation +func (s CreateChangeSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateChangeSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateChangeSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateChangeSetInput"} + if s.ChangeSetName == nil { + invalidParams.Add(request.NewErrParamRequired("ChangeSetName")) + } + if s.ChangeSetName != nil && len(*s.ChangeSetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ChangeSetName", 1)) + } + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.RoleARN != nil && len(*s.RoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 20)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + if s.RollbackConfiguration != nil { + if err := s.RollbackConfiguration.Validate(); err != nil { + invalidParams.AddNested("RollbackConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCapabilities sets the Capabilities field's value. +func (s *CreateChangeSetInput) SetCapabilities(v []*string) *CreateChangeSetInput { + s.Capabilities = v + return s +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *CreateChangeSetInput) SetChangeSetName(v string) *CreateChangeSetInput { + s.ChangeSetName = &v + return s +} + +// SetChangeSetType sets the ChangeSetType field's value. +func (s *CreateChangeSetInput) SetChangeSetType(v string) *CreateChangeSetInput { + s.ChangeSetType = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateChangeSetInput) SetClientToken(v string) *CreateChangeSetInput { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateChangeSetInput) SetDescription(v string) *CreateChangeSetInput { + s.Description = &v + return s +} + +// SetNotificationARNs sets the NotificationARNs field's value. +func (s *CreateChangeSetInput) SetNotificationARNs(v []*string) *CreateChangeSetInput { + s.NotificationARNs = v + return s +} + +// SetParameters sets the Parameters field's value. 
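+//
+// A small sketch (illustrative only) of building a CreateChangeSetInput with
+// the fluent setters defined on this type; the stack and change set names are
+// hypothetical, and the Parameter setters (SetParameterKey/SetParameterValue)
+// are assumed from the Parameter data type referenced above:
+//
+//    input := (&CreateChangeSetInput{}).
+//        SetStackName("my-stack").
+//        SetChangeSetName("my-change-set").
+//        SetChangeSetType("UPDATE").
+//        SetUsePreviousTemplate(true).
+//        SetParameters([]*Parameter{
+//            (&Parameter{}).SetParameterKey("InstanceType").SetParameterValue("t3.small"),
+//        })
+//    if err := input.Validate(); err != nil {
+//        fmt.Println(err)
+//    }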
+func (s *CreateChangeSetInput) SetParameters(v []*Parameter) *CreateChangeSetInput { + s.Parameters = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *CreateChangeSetInput) SetResourceTypes(v []*string) *CreateChangeSetInput { + s.ResourceTypes = v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *CreateChangeSetInput) SetRoleARN(v string) *CreateChangeSetInput { + s.RoleARN = &v + return s +} + +// SetRollbackConfiguration sets the RollbackConfiguration field's value. +func (s *CreateChangeSetInput) SetRollbackConfiguration(v *RollbackConfiguration) *CreateChangeSetInput { + s.RollbackConfiguration = v + return s +} + +// SetStackName sets the StackName field's value. +func (s *CreateChangeSetInput) SetStackName(v string) *CreateChangeSetInput { + s.StackName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateChangeSetInput) SetTags(v []*Tag) *CreateChangeSetInput { + s.Tags = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *CreateChangeSetInput) SetTemplateBody(v string) *CreateChangeSetInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *CreateChangeSetInput) SetTemplateURL(v string) *CreateChangeSetInput { + s.TemplateURL = &v + return s +} + +// SetUsePreviousTemplate sets the UsePreviousTemplate field's value. +func (s *CreateChangeSetInput) SetUsePreviousTemplate(v bool) *CreateChangeSetInput { + s.UsePreviousTemplate = &v + return s +} + +// The output for the CreateChangeSet action. +type CreateChangeSetOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the change set. + Id *string `min:"1" type:"string"` + + // The unique ID of the stack. + StackId *string `type:"string"` +} + +// String returns the string representation +func (s CreateChangeSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateChangeSetOutput) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *CreateChangeSetOutput) SetId(v string) *CreateChangeSetOutput { + s.Id = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *CreateChangeSetOutput) SetStackId(v string) *CreateChangeSetOutput { + s.StackId = &v + return s +} + +// The input for CreateStack action. +type CreateStackInput struct { + _ struct{} `type:"structure"` + + // A list of values that you must specify before AWS CloudFormation can create + // certain stacks. Some stack templates might include resources that can affect + // permissions in your AWS account, for example, by creating new AWS Identity + // and Access Management (IAM) users. For those stacks, you must explicitly + // acknowledge their capabilities by specifying this parameter. + // + // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. 
The following + // resources require you to specify this parameter: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html), + // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html), + // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html), + // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html), + // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html), + // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html), + // and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html). + // If your stack template contains these resources, we recommend that you review + // all permissions associated with them and edit their permissions if necessary. + // + // If you have IAM resources, you can specify either capability. If you have + // IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If + // you don't specify this parameter, this action returns an InsufficientCapabilities + // error. + // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). + Capabilities []*string `type:"list"` + + // A unique identifier for this CreateStack request. Specify this token if you + // plan to retry requests so that AWS CloudFormation knows that you're not attempting + // to create a stack with the same name. You might retry CreateStack requests + // to ensure that AWS CloudFormation successfully received them. + // + // All events triggered by a given stack operation are assigned the same client + // request token, which you can use to track operations. For example, if you + // execute a CreateStack operation with the token token1, then all the StackEvents + // generated by that operation will have ClientRequestToken set as token1. + // + // In the console, stack operations display the client request token on the + // Events tab. Stack operations that are initiated from the console use the + // token format Console-StackOperation-ID, which helps you easily identify the + // stack operation . For example, if you create a stack using the console, each + // stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002. + ClientRequestToken *string `min:"1" type:"string"` + + // Set to true to disable rollback of the stack if stack creation failed. You + // can specify either DisableRollback or OnFailure, but not both. + // + // Default: false + DisableRollback *bool `type:"boolean"` + + // Whether to enable termination protection on the specified stack. If a user + // attempts to delete a stack with termination protection enabled, the operation + // fails and the stack remains unchanged. For more information, see Protecting + // a Stack From Being Deleted (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) + // in the AWS CloudFormation User Guide. Termination protection is disabled + // on stacks by default. 
+ // + // For nested stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html), + // termination protection is set on the root stack and cannot be changed directly + // on the nested stack. + EnableTerminationProtection *bool `type:"boolean"` + + // The Simple Notification Service (SNS) topic ARNs to publish stack related + // events. You can find your SNS topic ARNs using the SNS console or your Command + // Line Interface (CLI). + NotificationARNs []*string `type:"list"` + + // Determines what action will be taken if stack creation fails. This must be + // one of: DO_NOTHING, ROLLBACK, or DELETE. You can specify either OnFailure + // or DisableRollback, but not both. + // + // Default: ROLLBACK + OnFailure *string `type:"string" enum:"OnFailure"` + + // A list of Parameter structures that specify input parameters for the stack. + // For more information, see the Parameter (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) + // data type. + Parameters []*Parameter `type:"list"` + + // The template resource types that you have permissions to work with for this + // create stack action, such as AWS::EC2::Instance, AWS::EC2::*, or Custom::MyCustomInstance. + // Use the following syntax to describe template resource types: AWS::* (for + // all AWS resource), Custom::* (for all custom resources), Custom::logical_ID + // (for a specific custom resource), AWS::service_name::* (for all resources + // of a particular AWS service), and AWS::service_name::resource_logical_ID + // (for a specific AWS resource). + // + // If the list of resource types doesn't include a resource that you're creating, + // the stack creation fails. By default, AWS CloudFormation grants permissions + // to all resource types. AWS Identity and Access Management (IAM) uses this + // parameter for AWS CloudFormation-specific condition keys in IAM policies. + // For more information, see Controlling Access with AWS Identity and Access + // Management (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html). + ResourceTypes []*string `type:"list"` + + // The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) + // role that AWS CloudFormation assumes to create the stack. AWS CloudFormation + // uses the role's credentials to make calls on your behalf. AWS CloudFormation + // always uses this role for all future operations on the stack. As long as + // users have permission to operate on the stack, AWS CloudFormation uses this + // role even if the users don't have permission to pass it. Ensure that the + // role grants least privilege. + // + // If you don't specify a value, AWS CloudFormation uses the role that was previously + // associated with the stack. If no role is available, AWS CloudFormation uses + // a temporary session that is generated from your user credentials. + RoleARN *string `min:"20" type:"string"` + + // The rollback triggers for AWS CloudFormation to monitor during stack creation + // and updating operations, and for the specified monitoring period afterwards. + RollbackConfiguration *RollbackConfiguration `type:"structure"` + + // The name that is associated with the stack. The name must be unique in the + // region in which you are creating the stack. + // + // A stack name can contain only alphanumeric characters (case sensitive) and + // hyphens. It must start with an alphabetic character and cannot be longer + // than 128 characters. 
+ // + // StackName is a required field + StackName *string `type:"string" required:"true"` + + // Structure containing the stack policy body. For more information, go to + // Prevent Updates to Stack Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html) + // in the AWS CloudFormation User Guide. You can specify either the StackPolicyBody + // or the StackPolicyURL parameter, but not both. + StackPolicyBody *string `min:"1" type:"string"` + + // Location of a file containing the stack policy. The URL must point to a policy + // (maximum size: 16 KB) located in an S3 bucket in the same region as the stack. + // You can specify either the StackPolicyBody or the StackPolicyURL parameter, + // but not both. + StackPolicyURL *string `min:"1" type:"string"` + + // Key-value pairs to associate with this stack. AWS CloudFormation also propagates + // these tags to the resources created in the stack. A maximum number of 50 + // tags can be specified. + Tags []*Tag `type:"list"` + + // Structure containing the template body with a minimum length of 1 byte and + // a maximum length of 51,200 bytes. For more information, go to Template Anatomy + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify either the TemplateBody or the TemplateURL + // parameter, but not both. + TemplateBody *string `min:"1" type:"string"` + + // Location of file containing the template body. The URL must point to a template + // (max size: 460,800 bytes) that is located in an Amazon S3 bucket. For more + // information, go to the Template Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify either the TemplateBody or the TemplateURL + // parameter, but not both. + TemplateURL *string `min:"1" type:"string"` + + // The amount of time that can pass before the stack status becomes CREATE_FAILED; + // if DisableRollback is not set or is set to false, the stack will be rolled + // back. + TimeoutInMinutes *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s CreateStackInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStackInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateStackInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStackInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.RoleARN != nil && len(*s.RoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 20)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackPolicyBody != nil && len(*s.StackPolicyBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyBody", 1)) + } + if s.StackPolicyURL != nil && len(*s.StackPolicyURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyURL", 1)) + } + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + if s.TimeoutInMinutes != nil && *s.TimeoutInMinutes < 1 { + invalidParams.Add(request.NewErrParamMinValue("TimeoutInMinutes", 1)) + } + if s.RollbackConfiguration != nil { + if err := s.RollbackConfiguration.Validate(); err != nil { + invalidParams.AddNested("RollbackConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCapabilities sets the Capabilities field's value. +func (s *CreateStackInput) SetCapabilities(v []*string) *CreateStackInput { + s.Capabilities = v + return s +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *CreateStackInput) SetClientRequestToken(v string) *CreateStackInput { + s.ClientRequestToken = &v + return s +} + +// SetDisableRollback sets the DisableRollback field's value. +func (s *CreateStackInput) SetDisableRollback(v bool) *CreateStackInput { + s.DisableRollback = &v + return s +} + +// SetEnableTerminationProtection sets the EnableTerminationProtection field's value. +func (s *CreateStackInput) SetEnableTerminationProtection(v bool) *CreateStackInput { + s.EnableTerminationProtection = &v + return s +} + +// SetNotificationARNs sets the NotificationARNs field's value. +func (s *CreateStackInput) SetNotificationARNs(v []*string) *CreateStackInput { + s.NotificationARNs = v + return s +} + +// SetOnFailure sets the OnFailure field's value. +func (s *CreateStackInput) SetOnFailure(v string) *CreateStackInput { + s.OnFailure = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *CreateStackInput) SetParameters(v []*Parameter) *CreateStackInput { + s.Parameters = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *CreateStackInput) SetResourceTypes(v []*string) *CreateStackInput { + s.ResourceTypes = v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *CreateStackInput) SetRoleARN(v string) *CreateStackInput { + s.RoleARN = &v + return s +} + +// SetRollbackConfiguration sets the RollbackConfiguration field's value. +func (s *CreateStackInput) SetRollbackConfiguration(v *RollbackConfiguration) *CreateStackInput { + s.RollbackConfiguration = v + return s +} + +// SetStackName sets the StackName field's value. 
+func (s *CreateStackInput) SetStackName(v string) *CreateStackInput { + s.StackName = &v + return s +} + +// SetStackPolicyBody sets the StackPolicyBody field's value. +func (s *CreateStackInput) SetStackPolicyBody(v string) *CreateStackInput { + s.StackPolicyBody = &v + return s +} + +// SetStackPolicyURL sets the StackPolicyURL field's value. +func (s *CreateStackInput) SetStackPolicyURL(v string) *CreateStackInput { + s.StackPolicyURL = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateStackInput) SetTags(v []*Tag) *CreateStackInput { + s.Tags = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *CreateStackInput) SetTemplateBody(v string) *CreateStackInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *CreateStackInput) SetTemplateURL(v string) *CreateStackInput { + s.TemplateURL = &v + return s +} + +// SetTimeoutInMinutes sets the TimeoutInMinutes field's value. +func (s *CreateStackInput) SetTimeoutInMinutes(v int64) *CreateStackInput { + s.TimeoutInMinutes = &v + return s +} + +type CreateStackInstancesInput struct { + _ struct{} `type:"structure"` + + // The names of one or more AWS accounts that you want to create stack instances + // in the specified region(s) for. + // + // Accounts is a required field + Accounts []*string `type:"list" required:"true"` + + // The unique identifier for this stack set operation. + // + // The operation ID also functions as an idempotency token, to ensure that AWS + // CloudFormation performs the stack set operation only once, even if you retry + // the request multiple times. You might retry stack set operation requests + // to ensure that AWS CloudFormation successfully received them. + // + // If you don't specify an operation ID, the SDK generates one automatically. + // + // Repeating this stack set operation with a new operation ID retries all stack + // instances whose status is OUTDATED. + OperationId *string `min:"1" type:"string" idempotencyToken:"true"` + + // Preferences for how AWS CloudFormation performs this stack set operation. + OperationPreferences *StackSetOperationPreferences `type:"structure"` + + // A list of stack set parameters whose values you want to override in the selected + // stack instances. + // + // Any overridden parameter values will be applied to all stack instances in + // the specified accounts and regions. When specifying parameters and their + // values, be aware of how AWS CloudFormation sets parameter values during stack + // instance operations: + // + // * To override the current value for a parameter, include the parameter + // and specify its value. + // + // * To leave a parameter set to its present value, you can do one of the + // following: + // + // Do not include the parameter in the list. + // + // Include the parameter and specify UsePreviousValue as true. (You cannot specify + // both a value and set UsePreviousValue to true.) + // + // * To set all overridden parameter back to the values specified in the + // stack set, specify a parameter list but do not include any parameters. + // + // * To leave all parameters set to their present values, do not specify + // this property at all. + // + // During stack set updates, any parameter values overridden for a stack instance + // are not updated, but retain their overridden value. 
+ // + // You can only override the parameter values that are specified in the stack + // set; to add or delete a parameter itself, use UpdateStackSet (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStackSet.html) + // to update the stack set template. + ParameterOverrides []*Parameter `type:"list"` + + // The names of one or more regions where you want to create stack instances + // using the specified AWS account(s). + // + // Regions is a required field + Regions []*string `type:"list" required:"true"` + + // The name or unique ID of the stack set that you want to create stack instances + // from. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateStackInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStackInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateStackInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStackInstancesInput"} + if s.Accounts == nil { + invalidParams.Add(request.NewErrParamRequired("Accounts")) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.Regions == nil { + invalidParams.Add(request.NewErrParamRequired("Regions")) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + if s.OperationPreferences != nil { + if err := s.OperationPreferences.Validate(); err != nil { + invalidParams.AddNested("OperationPreferences", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccounts sets the Accounts field's value. +func (s *CreateStackInstancesInput) SetAccounts(v []*string) *CreateStackInstancesInput { + s.Accounts = v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *CreateStackInstancesInput) SetOperationId(v string) *CreateStackInstancesInput { + s.OperationId = &v + return s +} + +// SetOperationPreferences sets the OperationPreferences field's value. +func (s *CreateStackInstancesInput) SetOperationPreferences(v *StackSetOperationPreferences) *CreateStackInstancesInput { + s.OperationPreferences = v + return s +} + +// SetParameterOverrides sets the ParameterOverrides field's value. +func (s *CreateStackInstancesInput) SetParameterOverrides(v []*Parameter) *CreateStackInstancesInput { + s.ParameterOverrides = v + return s +} + +// SetRegions sets the Regions field's value. +func (s *CreateStackInstancesInput) SetRegions(v []*string) *CreateStackInstancesInput { + s.Regions = v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *CreateStackInstancesInput) SetStackSetName(v string) *CreateStackInstancesInput { + s.StackSetName = &v + return s +} + +type CreateStackInstancesOutput struct { + _ struct{} `type:"structure"` + + // The unique identifier for this stack set operation. + OperationId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateStackInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStackInstancesOutput) GoString() string { + return s.String() +} + +// SetOperationId sets the OperationId field's value. 
+func (s *CreateStackInstancesOutput) SetOperationId(v string) *CreateStackInstancesOutput { + s.OperationId = &v + return s +} + +// The output for a CreateStack action. +type CreateStackOutput struct { + _ struct{} `type:"structure"` + + // Unique identifier of the stack. + StackId *string `type:"string"` +} + +// String returns the string representation +func (s CreateStackOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStackOutput) GoString() string { + return s.String() +} + +// SetStackId sets the StackId field's value. +func (s *CreateStackOutput) SetStackId(v string) *CreateStackOutput { + s.StackId = &v + return s +} + +type CreateStackSetInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Number (ARN) of the IAM role to use to create this stack + // set. + // + // Specify an IAM role only if you are using customized administrator roles + // to control which users or groups can manage specific stack sets within the + // same administrator account. For more information, see Prerequisites: Granting + // Permissions for Stack Set Operations (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + AdministrationRoleARN *string `min:"20" type:"string"` + + // A list of values that you must specify before AWS CloudFormation can create + // certain stack sets. Some stack set templates might include resources that + // can affect permissions in your AWS account—for example, by creating new AWS + // Identity and Access Management (IAM) users. For those stack sets, you must + // explicitly acknowledge their capabilities by specifying this parameter. + // + // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following + // resources require you to specify this parameter: + // + // * AWS::IAM::AccessKey + // + // * AWS::IAM::Group + // + // * AWS::IAM::InstanceProfile + // + // * AWS::IAM::Policy + // + // * AWS::IAM::Role + // + // * AWS::IAM::User + // + // * AWS::IAM::UserToGroupAddition + // + // If your stack template contains these resources, we recommend that you review + // all permissions that are associated with them and edit their permissions + // if necessary. + // + // If you have IAM resources, you can specify either capability. If you have + // IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If + // you don't specify this parameter, this action returns an InsufficientCapabilities + // error. + // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates. (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities) + Capabilities []*string `type:"list"` + + // A unique identifier for this CreateStackSet request. Specify this token if + // you plan to retry requests so that AWS CloudFormation knows that you're not + // attempting to create another stack set with the same name. You might retry + // CreateStackSet requests to ensure that AWS CloudFormation successfully received + // them. + // + // If you don't specify an operation ID, the SDK generates one automatically. + ClientRequestToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // A description of the stack set. You can use the description to identify the + // stack set's purpose or other important information. 
+ Description *string `min:"1" type:"string"` + + // The name of the IAM execution role to use to create the stack set. If you + // do not specify an execution role, AWS CloudFormation uses the AWSCloudFormationStackSetExecutionRole + // role for the stack set operation. + // + // Specify an IAM role only if you are using customized execution roles to control + // which stack resources users and groups can include in their stack sets. + ExecutionRoleName *string `min:"1" type:"string"` + + // The input parameters for the stack set template. + Parameters []*Parameter `type:"list"` + + // The name to associate with the stack set. The name must be unique in the + // region where you create your stack set. + // + // A stack name can contain only alphanumeric characters (case-sensitive) and + // hyphens. It must start with an alphabetic character and can't be longer than + // 128 characters. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` + + // The key-value pairs to associate with this stack set and the stacks created + // from it. AWS CloudFormation also propagates these tags to supported resources + // that are created in the stacks. A maximum number of 50 tags can be specified. + // + // If you specify tags as part of a CreateStackSet action, AWS CloudFormation + // checks to see if you have the required IAM permission to tag resources. If + // you don't, the entire CreateStackSet action fails with an access denied error, + // and the stack set is not created. + Tags []*Tag `type:"list"` + + // The structure that contains the template body, with a minimum length of 1 + // byte and a maximum length of 51,200 bytes. For more information, see Template + // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify either the TemplateBody or the TemplateURL + // parameter, but not both. + TemplateBody *string `min:"1" type:"string"` + + // The location of the file that contains the template body. The URL must point + // to a template (maximum size: 460,800 bytes) that's located in an Amazon S3 + // bucket. For more information, see Template Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify either the TemplateBody or the TemplateURL + // parameter, but not both. + TemplateURL *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateStackSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStackSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateStackSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStackSetInput"} + if s.AdministrationRoleARN != nil && len(*s.AdministrationRoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("AdministrationRoleARN", 20)) + } + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.ExecutionRoleName != nil && len(*s.ExecutionRoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleName", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *CreateStackSetInput) SetAdministrationRoleARN(v string) *CreateStackSetInput { + s.AdministrationRoleARN = &v + return s +} + +// SetCapabilities sets the Capabilities field's value. +func (s *CreateStackSetInput) SetCapabilities(v []*string) *CreateStackSetInput { + s.Capabilities = v + return s +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *CreateStackSetInput) SetClientRequestToken(v string) *CreateStackSetInput { + s.ClientRequestToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateStackSetInput) SetDescription(v string) *CreateStackSetInput { + s.Description = &v + return s +} + +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *CreateStackSetInput) SetExecutionRoleName(v string) *CreateStackSetInput { + s.ExecutionRoleName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *CreateStackSetInput) SetParameters(v []*Parameter) *CreateStackSetInput { + s.Parameters = v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *CreateStackSetInput) SetStackSetName(v string) *CreateStackSetInput { + s.StackSetName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateStackSetInput) SetTags(v []*Tag) *CreateStackSetInput { + s.Tags = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *CreateStackSetInput) SetTemplateBody(v string) *CreateStackSetInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *CreateStackSetInput) SetTemplateURL(v string) *CreateStackSetInput { + s.TemplateURL = &v + return s +} + +type CreateStackSetOutput struct { + _ struct{} `type:"structure"` + + // The ID of the stack set that you're creating. 
+ StackSetId *string `type:"string"` +} + +// String returns the string representation +func (s CreateStackSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStackSetOutput) GoString() string { + return s.String() +} + +// SetStackSetId sets the StackSetId field's value. +func (s *CreateStackSetOutput) SetStackSetId(v string) *CreateStackSetOutput { + s.StackSetId = &v + return s +} + +// The input for the DeleteChangeSet action. +type DeleteChangeSetInput struct { + _ struct{} `type:"structure"` + + // The name or Amazon Resource Name (ARN) of the change set that you want to + // delete. + // + // ChangeSetName is a required field + ChangeSetName *string `min:"1" type:"string" required:"true"` + + // If you specified the name of a change set to delete, specify the stack name + // or ID (ARN) that is associated with it. + StackName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteChangeSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteChangeSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteChangeSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteChangeSetInput"} + if s.ChangeSetName == nil { + invalidParams.Add(request.NewErrParamRequired("ChangeSetName")) + } + if s.ChangeSetName != nil && len(*s.ChangeSetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ChangeSetName", 1)) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *DeleteChangeSetInput) SetChangeSetName(v string) *DeleteChangeSetInput { + s.ChangeSetName = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DeleteChangeSetInput) SetStackName(v string) *DeleteChangeSetInput { + s.StackName = &v + return s +} + +// The output for the DeleteChangeSet action. +type DeleteChangeSetOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteChangeSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteChangeSetOutput) GoString() string { + return s.String() +} + +// The input for DeleteStack action. +type DeleteStackInput struct { + _ struct{} `type:"structure"` + + // A unique identifier for this DeleteStack request. Specify this token if you + // plan to retry requests so that AWS CloudFormation knows that you're not attempting + // to delete a stack with the same name. You might retry DeleteStack requests + // to ensure that AWS CloudFormation successfully received them. + // + // All events triggered by a given stack operation are assigned the same client + // request token, which you can use to track operations. For example, if you + // execute a CreateStack operation with the token token1, then all the StackEvents + // generated by that operation will have ClientRequestToken set as token1. + // + // In the console, stack operations display the client request token on the + // Events tab. 
Stack operations that are initiated from the console use the + // token format Console-StackOperation-ID, which helps you easily identify the + // stack operation . For example, if you create a stack using the console, each + // stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002. + ClientRequestToken *string `min:"1" type:"string"` + + // For stacks in the DELETE_FAILED state, a list of resource logical IDs that + // are associated with the resources you want to retain. During deletion, AWS + // CloudFormation deletes the stack but does not delete the retained resources. + // + // Retaining resources is useful when you cannot delete a resource, such as + // a non-empty S3 bucket, but you want to delete the stack. + RetainResources []*string `type:"list"` + + // The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) + // role that AWS CloudFormation assumes to delete the stack. AWS CloudFormation + // uses the role's credentials to make calls on your behalf. + // + // If you don't specify a value, AWS CloudFormation uses the role that was previously + // associated with the stack. If no role is available, AWS CloudFormation uses + // a temporary session that is generated from your user credentials. + RoleARN *string `min:"20" type:"string"` + + // The name or the unique stack ID that is associated with the stack. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteStackInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStackInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteStackInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteStackInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.RoleARN != nil && len(*s.RoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 20)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *DeleteStackInput) SetClientRequestToken(v string) *DeleteStackInput { + s.ClientRequestToken = &v + return s +} + +// SetRetainResources sets the RetainResources field's value. +func (s *DeleteStackInput) SetRetainResources(v []*string) *DeleteStackInput { + s.RetainResources = v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *DeleteStackInput) SetRoleARN(v string) *DeleteStackInput { + s.RoleARN = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DeleteStackInput) SetStackName(v string) *DeleteStackInput { + s.StackName = &v + return s +} + +type DeleteStackInstancesInput struct { + _ struct{} `type:"structure"` + + // The names of the AWS accounts that you want to delete stack instances for. + // + // Accounts is a required field + Accounts []*string `type:"list" required:"true"` + + // The unique identifier for this stack set operation. + // + // If you don't specify an operation ID, the SDK generates one automatically. 
+ // + // The operation ID also functions as an idempotency token, to ensure that AWS + // CloudFormation performs the stack set operation only once, even if you retry + // the request multiple times. You can retry stack set operation requests to + // ensure that AWS CloudFormation successfully received them. + // + // Repeating this stack set operation with a new operation ID retries all stack + // instances whose status is OUTDATED. + OperationId *string `min:"1" type:"string" idempotencyToken:"true"` + + // Preferences for how AWS CloudFormation performs this stack set operation. + OperationPreferences *StackSetOperationPreferences `type:"structure"` + + // The regions where you want to delete stack set instances. + // + // Regions is a required field + Regions []*string `type:"list" required:"true"` + + // Removes the stack instances from the specified stack set, but doesn't delete + // the stacks. You can't reassociate a retained stack or add an existing, saved + // stack to a new stack set. + // + // For more information, see Stack set operation options (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html#stackset-ops-options). + // + // RetainStacks is a required field + RetainStacks *bool `type:"boolean" required:"true"` + + // The name or unique ID of the stack set that you want to delete stack instances + // for. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteStackInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStackInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteStackInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteStackInstancesInput"} + if s.Accounts == nil { + invalidParams.Add(request.NewErrParamRequired("Accounts")) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.Regions == nil { + invalidParams.Add(request.NewErrParamRequired("Regions")) + } + if s.RetainStacks == nil { + invalidParams.Add(request.NewErrParamRequired("RetainStacks")) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + if s.OperationPreferences != nil { + if err := s.OperationPreferences.Validate(); err != nil { + invalidParams.AddNested("OperationPreferences", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccounts sets the Accounts field's value. +func (s *DeleteStackInstancesInput) SetAccounts(v []*string) *DeleteStackInstancesInput { + s.Accounts = v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *DeleteStackInstancesInput) SetOperationId(v string) *DeleteStackInstancesInput { + s.OperationId = &v + return s +} + +// SetOperationPreferences sets the OperationPreferences field's value. +func (s *DeleteStackInstancesInput) SetOperationPreferences(v *StackSetOperationPreferences) *DeleteStackInstancesInput { + s.OperationPreferences = v + return s +} + +// SetRegions sets the Regions field's value. +func (s *DeleteStackInstancesInput) SetRegions(v []*string) *DeleteStackInstancesInput { + s.Regions = v + return s +} + +// SetRetainStacks sets the RetainStacks field's value. 
+func (s *DeleteStackInstancesInput) SetRetainStacks(v bool) *DeleteStackInstancesInput { + s.RetainStacks = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *DeleteStackInstancesInput) SetStackSetName(v string) *DeleteStackInstancesInput { + s.StackSetName = &v + return s +} + +type DeleteStackInstancesOutput struct { + _ struct{} `type:"structure"` + + // The unique identifier for this stack set operation. + OperationId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteStackInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStackInstancesOutput) GoString() string { + return s.String() +} + +// SetOperationId sets the OperationId field's value. +func (s *DeleteStackInstancesOutput) SetOperationId(v string) *DeleteStackInstancesOutput { + s.OperationId = &v + return s +} + +type DeleteStackOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteStackOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStackOutput) GoString() string { + return s.String() +} + +type DeleteStackSetInput struct { + _ struct{} `type:"structure"` + + // The name or unique ID of the stack set that you're deleting. You can obtain + // this value by running ListStackSets. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteStackSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStackSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteStackSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteStackSetInput"} + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackSetName sets the StackSetName field's value. +func (s *DeleteStackSetInput) SetStackSetName(v string) *DeleteStackSetInput { + s.StackSetName = &v + return s +} + +type DeleteStackSetOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteStackSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStackSetOutput) GoString() string { + return s.String() +} + +// The input for the DescribeAccountLimits action. +type DescribeAccountLimitsInput struct { + _ struct{} `type:"structure"` + + // A string that identifies the next page of limits that you want to retrieve. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeAccountLimitsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountLimitsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeAccountLimitsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAccountLimitsInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAccountLimitsInput) SetNextToken(v string) *DescribeAccountLimitsInput { + s.NextToken = &v + return s +} + +// The output for the DescribeAccountLimits action. +type DescribeAccountLimitsOutput struct { + _ struct{} `type:"structure"` + + // An account limit structure that contain a list of AWS CloudFormation account + // limits and their values. + AccountLimits []*AccountLimit `type:"list"` + + // If the output exceeds 1 MB in size, a string that identifies the next page + // of limits. If no additional page exists, this value is null. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeAccountLimitsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountLimitsOutput) GoString() string { + return s.String() +} + +// SetAccountLimits sets the AccountLimits field's value. +func (s *DescribeAccountLimitsOutput) SetAccountLimits(v []*AccountLimit) *DescribeAccountLimitsOutput { + s.AccountLimits = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAccountLimitsOutput) SetNextToken(v string) *DescribeAccountLimitsOutput { + s.NextToken = &v + return s +} + +// The input for the DescribeChangeSet action. +type DescribeChangeSetInput struct { + _ struct{} `type:"structure"` + + // The name or Amazon Resource Name (ARN) of the change set that you want to + // describe. + // + // ChangeSetName is a required field + ChangeSetName *string `min:"1" type:"string" required:"true"` + + // A string (provided by the DescribeChangeSet response output) that identifies + // the next page of information that you want to retrieve. + NextToken *string `min:"1" type:"string"` + + // If you specified the name of a change set, specify the stack name or ID (ARN) + // of the change set you want to describe. + StackName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeChangeSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeChangeSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeChangeSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeChangeSetInput"} + if s.ChangeSetName == nil { + invalidParams.Add(request.NewErrParamRequired("ChangeSetName")) + } + if s.ChangeSetName != nil && len(*s.ChangeSetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ChangeSetName", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChangeSetName sets the ChangeSetName field's value. 
+func (s *DescribeChangeSetInput) SetChangeSetName(v string) *DescribeChangeSetInput { + s.ChangeSetName = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeChangeSetInput) SetNextToken(v string) *DescribeChangeSetInput { + s.NextToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeChangeSetInput) SetStackName(v string) *DescribeChangeSetInput { + s.StackName = &v + return s +} + +// The output for the DescribeChangeSet action. +type DescribeChangeSetOutput struct { + _ struct{} `type:"structure"` + + // If you execute the change set, the list of capabilities that were explicitly + // acknowledged when the change set was created. + Capabilities []*string `type:"list"` + + // The ARN of the change set. + ChangeSetId *string `min:"1" type:"string"` + + // The name of the change set. + ChangeSetName *string `min:"1" type:"string"` + + // A list of Change structures that describes the resources AWS CloudFormation + // changes if you execute the change set. + Changes []*Change `type:"list"` + + // The start time when the change set was created, in UTC. + CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // Information about the change set. + Description *string `min:"1" type:"string"` + + // If the change set execution status is AVAILABLE, you can execute the change + // set. If you can’t execute the change set, the status indicates why. For example, + // a change set might be in an UNAVAILABLE state because AWS CloudFormation + // is still creating it or in an OBSOLETE state because the stack was already + // updated. + ExecutionStatus *string `type:"string" enum:"ExecutionStatus"` + + // If the output exceeds 1 MB, a string that identifies the next page of changes. + // If there is no additional page, this value is null. + NextToken *string `min:"1" type:"string"` + + // The ARNs of the Amazon Simple Notification Service (Amazon SNS) topics that + // will be associated with the stack if you execute the change set. + NotificationARNs []*string `type:"list"` + + // A list of Parameter structures that describes the input parameters and their + // values used to create the change set. For more information, see the Parameter + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) + // data type. + Parameters []*Parameter `type:"list"` + + // The rollback triggers for AWS CloudFormation to monitor during stack creation + // and updating operations, and for the specified monitoring period afterwards. + RollbackConfiguration *RollbackConfiguration `type:"structure"` + + // The ARN of the stack that is associated with the change set. + StackId *string `type:"string"` + + // The name of the stack that is associated with the change set. + StackName *string `type:"string"` + + // The current status of the change set, such as CREATE_IN_PROGRESS, CREATE_COMPLETE, + // or FAILED. + Status *string `type:"string" enum:"ChangeSetStatus"` + + // A description of the change set's status. For example, if your attempt to + // create a change set failed, AWS CloudFormation shows the error message. + StatusReason *string `type:"string"` + + // If you execute the change set, the tags that will be associated with the + // stack. 
+ Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s DescribeChangeSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeChangeSetOutput) GoString() string { + return s.String() +} + +// SetCapabilities sets the Capabilities field's value. +func (s *DescribeChangeSetOutput) SetCapabilities(v []*string) *DescribeChangeSetOutput { + s.Capabilities = v + return s +} + +// SetChangeSetId sets the ChangeSetId field's value. +func (s *DescribeChangeSetOutput) SetChangeSetId(v string) *DescribeChangeSetOutput { + s.ChangeSetId = &v + return s +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *DescribeChangeSetOutput) SetChangeSetName(v string) *DescribeChangeSetOutput { + s.ChangeSetName = &v + return s +} + +// SetChanges sets the Changes field's value. +func (s *DescribeChangeSetOutput) SetChanges(v []*Change) *DescribeChangeSetOutput { + s.Changes = v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeChangeSetOutput) SetCreationTime(v time.Time) *DescribeChangeSetOutput { + s.CreationTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *DescribeChangeSetOutput) SetDescription(v string) *DescribeChangeSetOutput { + s.Description = &v + return s +} + +// SetExecutionStatus sets the ExecutionStatus field's value. +func (s *DescribeChangeSetOutput) SetExecutionStatus(v string) *DescribeChangeSetOutput { + s.ExecutionStatus = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeChangeSetOutput) SetNextToken(v string) *DescribeChangeSetOutput { + s.NextToken = &v + return s +} + +// SetNotificationARNs sets the NotificationARNs field's value. +func (s *DescribeChangeSetOutput) SetNotificationARNs(v []*string) *DescribeChangeSetOutput { + s.NotificationARNs = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *DescribeChangeSetOutput) SetParameters(v []*Parameter) *DescribeChangeSetOutput { + s.Parameters = v + return s +} + +// SetRollbackConfiguration sets the RollbackConfiguration field's value. +func (s *DescribeChangeSetOutput) SetRollbackConfiguration(v *RollbackConfiguration) *DescribeChangeSetOutput { + s.RollbackConfiguration = v + return s +} + +// SetStackId sets the StackId field's value. +func (s *DescribeChangeSetOutput) SetStackId(v string) *DescribeChangeSetOutput { + s.StackId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeChangeSetOutput) SetStackName(v string) *DescribeChangeSetOutput { + s.StackName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DescribeChangeSetOutput) SetStatus(v string) *DescribeChangeSetOutput { + s.Status = &v + return s +} + +// SetStatusReason sets the StatusReason field's value. +func (s *DescribeChangeSetOutput) SetStatusReason(v string) *DescribeChangeSetOutput { + s.StatusReason = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *DescribeChangeSetOutput) SetTags(v []*Tag) *DescribeChangeSetOutput { + s.Tags = v + return s +} + +// The input for DescribeStackEvents action. +type DescribeStackEventsInput struct { + _ struct{} `type:"structure"` + + // A string that identifies the next page of events that you want to retrieve. 
+ NextToken *string `min:"1" type:"string"` + + // The name or the unique stack ID that is associated with the stack, which + // are not always interchangeable: + // + // * Running stacks: You can specify either the stack's name or its unique + // stack ID. + // + // * Deleted stacks: You must specify the unique stack ID. + // + // Default: There is no default value. + StackName *string `type:"string"` +} + +// String returns the string representation +func (s DescribeStackEventsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackEventsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStackEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackEventsInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeStackEventsInput) SetNextToken(v string) *DescribeStackEventsInput { + s.NextToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeStackEventsInput) SetStackName(v string) *DescribeStackEventsInput { + s.StackName = &v + return s +} + +// The output for a DescribeStackEvents action. +type DescribeStackEventsOutput struct { + _ struct{} `type:"structure"` + + // If the output exceeds 1 MB in size, a string that identifies the next page + // of events. If no additional page exists, this value is null. + NextToken *string `min:"1" type:"string"` + + // A list of StackEvents structures. + StackEvents []*StackEvent `type:"list"` +} + +// String returns the string representation +func (s DescribeStackEventsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackEventsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeStackEventsOutput) SetNextToken(v string) *DescribeStackEventsOutput { + s.NextToken = &v + return s +} + +// SetStackEvents sets the StackEvents field's value. +func (s *DescribeStackEventsOutput) SetStackEvents(v []*StackEvent) *DescribeStackEventsOutput { + s.StackEvents = v + return s +} + +type DescribeStackInstanceInput struct { + _ struct{} `type:"structure"` + + // The ID of an AWS account that's associated with this stack instance. + // + // StackInstanceAccount is a required field + StackInstanceAccount *string `type:"string" required:"true"` + + // The name of a region that's associated with this stack instance. + // + // StackInstanceRegion is a required field + StackInstanceRegion *string `type:"string" required:"true"` + + // The name or the unique stack ID of the stack set that you want to get stack + // instance information for. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeStackInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeStackInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackInstanceInput"} + if s.StackInstanceAccount == nil { + invalidParams.Add(request.NewErrParamRequired("StackInstanceAccount")) + } + if s.StackInstanceRegion == nil { + invalidParams.Add(request.NewErrParamRequired("StackInstanceRegion")) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackInstanceAccount sets the StackInstanceAccount field's value. +func (s *DescribeStackInstanceInput) SetStackInstanceAccount(v string) *DescribeStackInstanceInput { + s.StackInstanceAccount = &v + return s +} + +// SetStackInstanceRegion sets the StackInstanceRegion field's value. +func (s *DescribeStackInstanceInput) SetStackInstanceRegion(v string) *DescribeStackInstanceInput { + s.StackInstanceRegion = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *DescribeStackInstanceInput) SetStackSetName(v string) *DescribeStackInstanceInput { + s.StackSetName = &v + return s +} + +type DescribeStackInstanceOutput struct { + _ struct{} `type:"structure"` + + // The stack instance that matches the specified request parameters. + StackInstance *StackInstance `type:"structure"` +} + +// String returns the string representation +func (s DescribeStackInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackInstanceOutput) GoString() string { + return s.String() +} + +// SetStackInstance sets the StackInstance field's value. +func (s *DescribeStackInstanceOutput) SetStackInstance(v *StackInstance) *DescribeStackInstanceOutput { + s.StackInstance = v + return s +} + +// The input for DescribeStackResource action. +type DescribeStackResourceInput struct { + _ struct{} `type:"structure"` + + // The logical name of the resource as specified in the template. + // + // Default: There is no default value. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The name or the unique stack ID that is associated with the stack, which + // are not always interchangeable: + // + // * Running stacks: You can specify either the stack's name or its unique + // stack ID. + // + // * Deleted stacks: You must specify the unique stack ID. + // + // Default: There is no default value. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeStackResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStackResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackResourceInput"} + if s.LogicalResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("LogicalResourceId")) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. 
+func (s *DescribeStackResourceInput) SetLogicalResourceId(v string) *DescribeStackResourceInput { + s.LogicalResourceId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeStackResourceInput) SetStackName(v string) *DescribeStackResourceInput { + s.StackName = &v + return s +} + +// The output for a DescribeStackResource action. +type DescribeStackResourceOutput struct { + _ struct{} `type:"structure"` + + // A StackResourceDetail structure containing the description of the specified + // resource in the specified stack. + StackResourceDetail *StackResourceDetail `type:"structure"` +} + +// String returns the string representation +func (s DescribeStackResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackResourceOutput) GoString() string { + return s.String() +} + +// SetStackResourceDetail sets the StackResourceDetail field's value. +func (s *DescribeStackResourceOutput) SetStackResourceDetail(v *StackResourceDetail) *DescribeStackResourceOutput { + s.StackResourceDetail = v + return s +} + +// The input for DescribeStackResources action. +type DescribeStackResourcesInput struct { + _ struct{} `type:"structure"` + + // The logical name of the resource as specified in the template. + // + // Default: There is no default value. + LogicalResourceId *string `type:"string"` + + // The name or unique identifier that corresponds to a physical instance ID + // of a resource supported by AWS CloudFormation. + // + // For example, for an Amazon Elastic Compute Cloud (EC2) instance, PhysicalResourceId + // corresponds to the InstanceId. You can pass the EC2 InstanceId to DescribeStackResources + // to find which stack the instance belongs to and what other resources are + // part of the stack. + // + // Required: Conditional. If you do not specify PhysicalResourceId, you must + // specify StackName. + // + // Default: There is no default value. + PhysicalResourceId *string `type:"string"` + + // The name or the unique stack ID that is associated with the stack, which + // are not always interchangeable: + // + // * Running stacks: You can specify either the stack's name or its unique + // stack ID. + // + // * Deleted stacks: You must specify the unique stack ID. + // + // Default: There is no default value. + // + // Required: Conditional. If you do not specify StackName, you must specify + // PhysicalResourceId. + StackName *string `type:"string"` +} + +// String returns the string representation +func (s DescribeStackResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackResourcesInput) GoString() string { + return s.String() +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *DescribeStackResourcesInput) SetLogicalResourceId(v string) *DescribeStackResourcesInput { + s.LogicalResourceId = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *DescribeStackResourcesInput) SetPhysicalResourceId(v string) *DescribeStackResourcesInput { + s.PhysicalResourceId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeStackResourcesInput) SetStackName(v string) *DescribeStackResourcesInput { + s.StackName = &v + return s +} + +// The output for a DescribeStackResources action. +type DescribeStackResourcesOutput struct { + _ struct{} `type:"structure"` + + // A list of StackResource structures. 
+ StackResources []*StackResource `type:"list"` +} + +// String returns the string representation +func (s DescribeStackResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackResourcesOutput) GoString() string { + return s.String() +} + +// SetStackResources sets the StackResources field's value. +func (s *DescribeStackResourcesOutput) SetStackResources(v []*StackResource) *DescribeStackResourcesOutput { + s.StackResources = v + return s +} + +type DescribeStackSetInput struct { + _ struct{} `type:"structure"` + + // The name or unique ID of the stack set whose description you want. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeStackSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStackSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackSetInput"} + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackSetName sets the StackSetName field's value. +func (s *DescribeStackSetInput) SetStackSetName(v string) *DescribeStackSetInput { + s.StackSetName = &v + return s +} + +type DescribeStackSetOperationInput struct { + _ struct{} `type:"structure"` + + // The unique ID of the stack set operation. + // + // OperationId is a required field + OperationId *string `min:"1" type:"string" required:"true"` + + // The name or the unique stack ID of the stack set for the stack operation. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeStackSetOperationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackSetOperationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStackSetOperationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackSetOperationInput"} + if s.OperationId == nil { + invalidParams.Add(request.NewErrParamRequired("OperationId")) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOperationId sets the OperationId field's value. +func (s *DescribeStackSetOperationInput) SetOperationId(v string) *DescribeStackSetOperationInput { + s.OperationId = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *DescribeStackSetOperationInput) SetStackSetName(v string) *DescribeStackSetOperationInput { + s.StackSetName = &v + return s +} + +type DescribeStackSetOperationOutput struct { + _ struct{} `type:"structure"` + + // The specified stack set operation. 
+ StackSetOperation *StackSetOperation `type:"structure"` +} + +// String returns the string representation +func (s DescribeStackSetOperationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackSetOperationOutput) GoString() string { + return s.String() +} + +// SetStackSetOperation sets the StackSetOperation field's value. +func (s *DescribeStackSetOperationOutput) SetStackSetOperation(v *StackSetOperation) *DescribeStackSetOperationOutput { + s.StackSetOperation = v + return s +} + +type DescribeStackSetOutput struct { + _ struct{} `type:"structure"` + + // The specified stack set. + StackSet *StackSet `type:"structure"` +} + +// String returns the string representation +func (s DescribeStackSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackSetOutput) GoString() string { + return s.String() +} + +// SetStackSet sets the StackSet field's value. +func (s *DescribeStackSetOutput) SetStackSet(v *StackSet) *DescribeStackSetOutput { + s.StackSet = v + return s +} + +// The input for DescribeStacks action. +type DescribeStacksInput struct { + _ struct{} `type:"structure"` + + // A string that identifies the next page of stacks that you want to retrieve. + NextToken *string `min:"1" type:"string"` + + // The name or the unique stack ID that is associated with the stack, which + // are not always interchangeable: + // + // * Running stacks: You can specify either the stack's name or its unique + // stack ID. + // + // * Deleted stacks: You must specify the unique stack ID. + // + // Default: There is no default value. + StackName *string `type:"string"` +} + +// String returns the string representation +func (s DescribeStacksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStacksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStacksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStacksInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeStacksInput) SetNextToken(v string) *DescribeStacksInput { + s.NextToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeStacksInput) SetStackName(v string) *DescribeStacksInput { + s.StackName = &v + return s +} + +// The output for a DescribeStacks action. +type DescribeStacksOutput struct { + _ struct{} `type:"structure"` + + // If the output exceeds 1 MB in size, a string that identifies the next page + // of stacks. If no additional page exists, this value is null. + NextToken *string `min:"1" type:"string"` + + // A list of stack structures. + Stacks []*Stack `type:"list"` +} + +// String returns the string representation +func (s DescribeStacksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStacksOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeStacksOutput) SetNextToken(v string) *DescribeStacksOutput { + s.NextToken = &v + return s +} + +// SetStacks sets the Stacks field's value. 
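+//
+// A minimal pagination sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())) and the usual
+// aws/fmt/log imports; DescribeStacks returns at most one page per call, so
+// callers loop until NextToken comes back nil:
+//
+//	var next *string
+//	for {
+//		out, err := cfn.DescribeStacks(&cloudformation.DescribeStacksInput{NextToken: next})
+//		if err != nil {
+//			log.Fatal(err)
+//		}
+//		for _, stack := range out.Stacks {
+//			fmt.Println(aws.StringValue(stack.StackName), aws.StringValue(stack.StackStatus))
+//		}
+//		if next = out.NextToken; next == nil {
+//			break
+//		}
+//	}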
+func (s *DescribeStacksOutput) SetStacks(v []*Stack) *DescribeStacksOutput { + s.Stacks = v + return s +} + +// The input for an EstimateTemplateCost action. +type EstimateTemplateCostInput struct { + _ struct{} `type:"structure"` + + // A list of Parameter structures that specify input parameters. + Parameters []*Parameter `type:"list"` + + // Structure containing the template body with a minimum length of 1 byte and + // a maximum length of 51,200 bytes. (For more information, go to Template Anatomy + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide.) + // + // Conditional: You must pass TemplateBody or TemplateURL. If both are passed, + // only TemplateBody is used. + TemplateBody *string `min:"1" type:"string"` + + // Location of file containing the template body. The URL must point to a template + // that is located in an Amazon S3 bucket. For more information, go to Template + // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must pass TemplateURL or TemplateBody. If both are passed, + // only TemplateBody is used. + TemplateURL *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s EstimateTemplateCostInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EstimateTemplateCostInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *EstimateTemplateCostInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EstimateTemplateCostInput"} + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameters sets the Parameters field's value. +func (s *EstimateTemplateCostInput) SetParameters(v []*Parameter) *EstimateTemplateCostInput { + s.Parameters = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *EstimateTemplateCostInput) SetTemplateBody(v string) *EstimateTemplateCostInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *EstimateTemplateCostInput) SetTemplateURL(v string) *EstimateTemplateCostInput { + s.TemplateURL = &v + return s +} + +// The output for a EstimateTemplateCost action. +type EstimateTemplateCostOutput struct { + _ struct{} `type:"structure"` + + // An AWS Simple Monthly Calculator URL with a query string that describes the + // resources required to run the template. + Url *string `type:"string"` +} + +// String returns the string representation +func (s EstimateTemplateCostOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EstimateTemplateCostOutput) GoString() string { + return s.String() +} + +// SetUrl sets the Url field's value. +func (s *EstimateTemplateCostOutput) SetUrl(v string) *EstimateTemplateCostOutput { + s.Url = &v + return s +} + +// The input for the ExecuteChangeSet action. 
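+//
+// A minimal usage sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())) and hypothetical
+// change set / stack names; executing an existing change set applies it to the
+// stack and returns only an empty output on success:
+//
+//	_, err := cfn.ExecuteChangeSet(&cloudformation.ExecuteChangeSetInput{
+//		ChangeSetName: aws.String("my-changes"), // hypothetical
+//		StackName:     aws.String("my-stack"),   // hypothetical
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}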
+type ExecuteChangeSetInput struct { + _ struct{} `type:"structure"` + + // The name or ARN of the change set that you want use to update the specified + // stack. + // + // ChangeSetName is a required field + ChangeSetName *string `min:"1" type:"string" required:"true"` + + // A unique identifier for this ExecuteChangeSet request. Specify this token + // if you plan to retry requests so that AWS CloudFormation knows that you're + // not attempting to execute a change set to update a stack with the same name. + // You might retry ExecuteChangeSet requests to ensure that AWS CloudFormation + // successfully received them. + ClientRequestToken *string `min:"1" type:"string"` + + // If you specified the name of a change set, specify the stack name or ID (ARN) + // that is associated with the change set you want to execute. + StackName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ExecuteChangeSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExecuteChangeSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ExecuteChangeSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExecuteChangeSetInput"} + if s.ChangeSetName == nil { + invalidParams.Add(request.NewErrParamRequired("ChangeSetName")) + } + if s.ChangeSetName != nil && len(*s.ChangeSetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ChangeSetName", 1)) + } + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *ExecuteChangeSetInput) SetChangeSetName(v string) *ExecuteChangeSetInput { + s.ChangeSetName = &v + return s +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *ExecuteChangeSetInput) SetClientRequestToken(v string) *ExecuteChangeSetInput { + s.ClientRequestToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *ExecuteChangeSetInput) SetStackName(v string) *ExecuteChangeSetInput { + s.StackName = &v + return s +} + +// The output for the ExecuteChangeSet action. +type ExecuteChangeSetOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ExecuteChangeSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExecuteChangeSetOutput) GoString() string { + return s.String() +} + +// The Export structure describes the exported output values for a stack. +type Export struct { + _ struct{} `type:"structure"` + + // The stack that contains the exported output name and value. + ExportingStackId *string `type:"string"` + + // The name of exported output value. Use this name and the Fn::ImportValue + // function to import the associated value into other stacks. The name is defined + // in the Export field in the associated stack's Outputs section. + Name *string `type:"string"` + + // The value of the exported output, such as a resource physical ID. This value + // is defined in the Export field in the associated stack's Outputs section. 
+ Value *string `type:"string"` +} + +// String returns the string representation +func (s Export) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Export) GoString() string { + return s.String() +} + +// SetExportingStackId sets the ExportingStackId field's value. +func (s *Export) SetExportingStackId(v string) *Export { + s.ExportingStackId = &v + return s +} + +// SetName sets the Name field's value. +func (s *Export) SetName(v string) *Export { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Export) SetValue(v string) *Export { + s.Value = &v + return s +} + +// The input for the GetStackPolicy action. +type GetStackPolicyInput struct { + _ struct{} `type:"structure"` + + // The name or unique stack ID that is associated with the stack whose policy + // you want to get. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s GetStackPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetStackPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetStackPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetStackPolicyInput"} + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackName sets the StackName field's value. +func (s *GetStackPolicyInput) SetStackName(v string) *GetStackPolicyInput { + s.StackName = &v + return s +} + +// The output for the GetStackPolicy action. +type GetStackPolicyOutput struct { + _ struct{} `type:"structure"` + + // Structure containing the stack policy body. (For more information, go to + // Prevent Updates to Stack Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html) + // in the AWS CloudFormation User Guide.) + StackPolicyBody *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetStackPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetStackPolicyOutput) GoString() string { + return s.String() +} + +// SetStackPolicyBody sets the StackPolicyBody field's value. +func (s *GetStackPolicyOutput) SetStackPolicyBody(v string) *GetStackPolicyOutput { + s.StackPolicyBody = &v + return s +} + +// The input for a GetTemplate action. +type GetTemplateInput struct { + _ struct{} `type:"structure"` + + // The name or Amazon Resource Name (ARN) of a change set for which AWS CloudFormation + // returns the associated template. If you specify a name, you must also specify + // the StackName. + ChangeSetName *string `min:"1" type:"string"` + + // The name or the unique stack ID that is associated with the stack, which + // are not always interchangeable: + // + // * Running stacks: You can specify either the stack's name or its unique + // stack ID. + // + // * Deleted stacks: You must specify the unique stack ID. + // + // Default: There is no default value. + StackName *string `type:"string"` + + // For templates that include transforms, the stage of the template that AWS + // CloudFormation returns. To get the user-submitted template, specify Original. 
+ // To get the template after AWS CloudFormation has processed all transforms, + // specify Processed. + // + // If the template doesn't include transforms, Original and Processed return + // the same template. By default, AWS CloudFormation specifies Original. + TemplateStage *string `type:"string" enum:"TemplateStage"` +} + +// String returns the string representation +func (s GetTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetTemplateInput"} + if s.ChangeSetName != nil && len(*s.ChangeSetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ChangeSetName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *GetTemplateInput) SetChangeSetName(v string) *GetTemplateInput { + s.ChangeSetName = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *GetTemplateInput) SetStackName(v string) *GetTemplateInput { + s.StackName = &v + return s +} + +// SetTemplateStage sets the TemplateStage field's value. +func (s *GetTemplateInput) SetTemplateStage(v string) *GetTemplateInput { + s.TemplateStage = &v + return s +} + +// The output for GetTemplate action. +type GetTemplateOutput struct { + _ struct{} `type:"structure"` + + // The stage of the template that you can retrieve. For stacks, the Original + // and Processed templates are always available. For change sets, the Original + // template is always available. After AWS CloudFormation finishes creating + // the change set, the Processed template becomes available. + StagesAvailable []*string `type:"list"` + + // Structure containing the template body. (For more information, go to Template + // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide.) + // + // AWS CloudFormation returns the same template that was used when the stack + // was created. + TemplateBody *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTemplateOutput) GoString() string { + return s.String() +} + +// SetStagesAvailable sets the StagesAvailable field's value. +func (s *GetTemplateOutput) SetStagesAvailable(v []*string) *GetTemplateOutput { + s.StagesAvailable = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *GetTemplateOutput) SetTemplateBody(v string) *GetTemplateOutput { + s.TemplateBody = &v + return s +} + +// The input for the GetTemplateSummary action. +type GetTemplateSummaryInput struct { + _ struct{} `type:"structure"` + + // The name or the stack ID that is associated with the stack, which are not + // always interchangeable. For running stacks, you can specify either the stack's + // name or its unique stack ID. For deleted stack, you must specify the unique + // stack ID. + // + // Conditional: You must specify only one of the following parameters: StackName, + // StackSetName, TemplateBody, or TemplateURL. + StackName *string `min:"1" type:"string"` + + // The name or unique ID of the stack set from which the stack was created. 
+ // + // Conditional: You must specify only one of the following parameters: StackName, + // StackSetName, TemplateBody, or TemplateURL. + StackSetName *string `type:"string"` + + // Structure containing the template body with a minimum length of 1 byte and + // a maximum length of 51,200 bytes. For more information about templates, see + // Template Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify only one of the following parameters: StackName, + // StackSetName, TemplateBody, or TemplateURL. + TemplateBody *string `min:"1" type:"string"` + + // Location of file containing the template body. The URL must point to a template + // (max size: 460,800 bytes) that is located in an Amazon S3 bucket. For more + // information about templates, see Template Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify only one of the following parameters: StackName, + // StackSetName, TemplateBody, or TemplateURL. + TemplateURL *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetTemplateSummaryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTemplateSummaryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetTemplateSummaryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetTemplateSummaryInput"} + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackName sets the StackName field's value. +func (s *GetTemplateSummaryInput) SetStackName(v string) *GetTemplateSummaryInput { + s.StackName = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *GetTemplateSummaryInput) SetStackSetName(v string) *GetTemplateSummaryInput { + s.StackSetName = &v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *GetTemplateSummaryInput) SetTemplateBody(v string) *GetTemplateSummaryInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *GetTemplateSummaryInput) SetTemplateURL(v string) *GetTemplateSummaryInput { + s.TemplateURL = &v + return s +} + +// The output for the GetTemplateSummary action. +type GetTemplateSummaryOutput struct { + _ struct{} `type:"structure"` + + // The capabilities found within the template. If your template contains IAM + // resources, you must specify the CAPABILITY_IAM or CAPABILITY_NAMED_IAM value + // for this parameter when you use the CreateStack or UpdateStack actions with + // your template; otherwise, those actions return an InsufficientCapabilities + // error. + // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). 
+ Capabilities []*string `type:"list"` + + // The list of resources that generated the values in the Capabilities response + // element. + CapabilitiesReason *string `type:"string"` + + // A list of the transforms that are declared in the template. + DeclaredTransforms []*string `type:"list"` + + // The value that is defined in the Description property of the template. + Description *string `min:"1" type:"string"` + + // The value that is defined for the Metadata property of the template. + Metadata *string `type:"string"` + + // A list of parameter declarations that describe various properties for each + // parameter. + Parameters []*ParameterDeclaration `type:"list"` + + // A list of all the template resource types that are defined in the template, + // such as AWS::EC2::Instance, AWS::Dynamo::Table, and Custom::MyCustomInstance. + ResourceTypes []*string `type:"list"` + + // The AWS template format version, which identifies the capabilities of the + // template. + Version *string `type:"string"` +} + +// String returns the string representation +func (s GetTemplateSummaryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTemplateSummaryOutput) GoString() string { + return s.String() +} + +// SetCapabilities sets the Capabilities field's value. +func (s *GetTemplateSummaryOutput) SetCapabilities(v []*string) *GetTemplateSummaryOutput { + s.Capabilities = v + return s +} + +// SetCapabilitiesReason sets the CapabilitiesReason field's value. +func (s *GetTemplateSummaryOutput) SetCapabilitiesReason(v string) *GetTemplateSummaryOutput { + s.CapabilitiesReason = &v + return s +} + +// SetDeclaredTransforms sets the DeclaredTransforms field's value. +func (s *GetTemplateSummaryOutput) SetDeclaredTransforms(v []*string) *GetTemplateSummaryOutput { + s.DeclaredTransforms = v + return s +} + +// SetDescription sets the Description field's value. +func (s *GetTemplateSummaryOutput) SetDescription(v string) *GetTemplateSummaryOutput { + s.Description = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *GetTemplateSummaryOutput) SetMetadata(v string) *GetTemplateSummaryOutput { + s.Metadata = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *GetTemplateSummaryOutput) SetParameters(v []*ParameterDeclaration) *GetTemplateSummaryOutput { + s.Parameters = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *GetTemplateSummaryOutput) SetResourceTypes(v []*string) *GetTemplateSummaryOutput { + s.ResourceTypes = v + return s +} + +// SetVersion sets the Version field's value. +func (s *GetTemplateSummaryOutput) SetVersion(v string) *GetTemplateSummaryOutput { + s.Version = &v + return s +} + +// The input for the ListChangeSets action. +type ListChangeSetsInput struct { + _ struct{} `type:"structure"` + + // A string (provided by the ListChangeSets response output) that identifies + // the next page of change sets that you want to retrieve. + NextToken *string `min:"1" type:"string"` + + // The name or the Amazon Resource Name (ARN) of the stack for which you want + // to list change sets. 
+ // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListChangeSetsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListChangeSetsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListChangeSetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListChangeSetsInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *ListChangeSetsInput) SetNextToken(v string) *ListChangeSetsInput { + s.NextToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *ListChangeSetsInput) SetStackName(v string) *ListChangeSetsInput { + s.StackName = &v + return s +} + +// The output for the ListChangeSets action. +type ListChangeSetsOutput struct { + _ struct{} `type:"structure"` + + // If the output exceeds 1 MB, a string that identifies the next page of change + // sets. If there is no additional page, this value is null. + NextToken *string `min:"1" type:"string"` + + // A list of ChangeSetSummary structures that provides the ID and status of + // each change set for the specified stack. + Summaries []*ChangeSetSummary `type:"list"` +} + +// String returns the string representation +func (s ListChangeSetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListChangeSetsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListChangeSetsOutput) SetNextToken(v string) *ListChangeSetsOutput { + s.NextToken = &v + return s +} + +// SetSummaries sets the Summaries field's value. +func (s *ListChangeSetsOutput) SetSummaries(v []*ChangeSetSummary) *ListChangeSetsOutput { + s.Summaries = v + return s +} + +type ListExportsInput struct { + _ struct{} `type:"structure"` + + // A string (provided by the ListExports response output) that identifies the + // next page of exported output values that you asked to retrieve. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListExportsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListExportsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListExportsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListExportsInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *ListExportsInput) SetNextToken(v string) *ListExportsInput { + s.NextToken = &v + return s +} + +type ListExportsOutput struct { + _ struct{} `type:"structure"` + + // The output for the ListExports action. 
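+	//
+	// A minimal usage sketch, assuming a client built with
+	// cfn := cloudformation.New(session.Must(session.NewSession())), the usual
+	// aws/fmt/log imports, and a hypothetical export name; ListExports enumerates
+	// exported values and ListImports shows which stacks consume one of them:
+	//
+	//	exps, err := cfn.ListExports(&cloudformation.ListExportsInput{})
+	//	if err != nil {
+	//		log.Fatal(err)
+	//	}
+	//	for _, e := range exps.Exports {
+	//		fmt.Println(aws.StringValue(e.Name), aws.StringValue(e.Value))
+	//	}
+	//	imps, err := cfn.ListImports(&cloudformation.ListImportsInput{
+	//		ExportName: aws.String("my-vpc-id"), // hypothetical export name
+	//	})
+	//	if err != nil {
+	//		log.Fatal(err)
+	//	}
+	//	fmt.Println(aws.StringValueSlice(imps.Imports))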
+ Exports []*Export `type:"list"` + + // If the output exceeds 100 exported output values, a string that identifies + // the next page of exports. If there is no additional page, this value is null. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListExportsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListExportsOutput) GoString() string { + return s.String() +} + +// SetExports sets the Exports field's value. +func (s *ListExportsOutput) SetExports(v []*Export) *ListExportsOutput { + s.Exports = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListExportsOutput) SetNextToken(v string) *ListExportsOutput { + s.NextToken = &v + return s +} + +type ListImportsInput struct { + _ struct{} `type:"structure"` + + // The name of the exported output value. AWS CloudFormation returns the stack + // names that are importing this value. + // + // ExportName is a required field + ExportName *string `type:"string" required:"true"` + + // A string (provided by the ListImports response output) that identifies the + // next page of stacks that are importing the specified exported output value. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListImportsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListImportsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListImportsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListImportsInput"} + if s.ExportName == nil { + invalidParams.Add(request.NewErrParamRequired("ExportName")) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExportName sets the ExportName field's value. +func (s *ListImportsInput) SetExportName(v string) *ListImportsInput { + s.ExportName = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListImportsInput) SetNextToken(v string) *ListImportsInput { + s.NextToken = &v + return s +} + +type ListImportsOutput struct { + _ struct{} `type:"structure"` + + // A list of stack names that are importing the specified exported output value. + Imports []*string `type:"list"` + + // A string that identifies the next page of exports. If there is no additional + // page, this value is null. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListImportsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListImportsOutput) GoString() string { + return s.String() +} + +// SetImports sets the Imports field's value. +func (s *ListImportsOutput) SetImports(v []*string) *ListImportsOutput { + s.Imports = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListImportsOutput) SetNextToken(v string) *ListImportsOutput { + s.NextToken = &v + return s +} + +type ListStackInstancesInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to be returned with a single call. 
If the number + // of available results exceeds this maximum, the response includes a NextToken + // value that you can assign to the NextToken request parameter to get the next + // set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // If the previous request didn't return all of the remaining results, the response's + // NextToken parameter value is set to a token. To retrieve the next set of + // results, call ListStackInstances again and assign that token to the request + // object's NextToken parameter. If there are no remaining results, the previous + // response object's NextToken parameter is set to null. + NextToken *string `min:"1" type:"string"` + + // The name of the AWS account that you want to list stack instances for. + StackInstanceAccount *string `type:"string"` + + // The name of the region where you want to list stack instances. + StackInstanceRegion *string `type:"string"` + + // The name or unique ID of the stack set that you want to list stack instances + // for. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListStackInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStackInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStackInstancesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListStackInstancesInput) SetMaxResults(v int64) *ListStackInstancesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackInstancesInput) SetNextToken(v string) *ListStackInstancesInput { + s.NextToken = &v + return s +} + +// SetStackInstanceAccount sets the StackInstanceAccount field's value. +func (s *ListStackInstancesInput) SetStackInstanceAccount(v string) *ListStackInstancesInput { + s.StackInstanceAccount = &v + return s +} + +// SetStackInstanceRegion sets the StackInstanceRegion field's value. +func (s *ListStackInstancesInput) SetStackInstanceRegion(v string) *ListStackInstancesInput { + s.StackInstanceRegion = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *ListStackInstancesInput) SetStackSetName(v string) *ListStackInstancesInput { + s.StackSetName = &v + return s +} + +type ListStackInstancesOutput struct { + _ struct{} `type:"structure"` + + // If the request doesn't return all of the remaining results, NextToken is + // set to a token. To retrieve the next set of results, call ListStackInstances + // again and assign that token to the request object's NextToken parameter. + // If the request returns all results, NextToken is set to null. + NextToken *string `min:"1" type:"string"` + + // A list of StackInstanceSummary structures that contain information about + // the specified stack instances. 
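+	//
+	// A minimal usage sketch, assuming a client built with
+	// cfn := cloudformation.New(session.Must(session.NewSession())) and a
+	// hypothetical stack set name; instances can be filtered by account and region:
+	//
+	//	out, err := cfn.ListStackInstances(&cloudformation.ListStackInstancesInput{
+	//		StackSetName:        aws.String("my-stack-set"), // hypothetical
+	//		StackInstanceRegion: aws.String("us-east-1"),
+	//	})
+	//	if err != nil {
+	//		log.Fatal(err)
+	//	}
+	//	for _, si := range out.Summaries {
+	//		fmt.Println(aws.StringValue(si.Account), aws.StringValue(si.Region), aws.StringValue(si.Status))
+	//	}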
+ Summaries []*StackInstanceSummary `type:"list"` +} + +// String returns the string representation +func (s ListStackInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackInstancesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackInstancesOutput) SetNextToken(v string) *ListStackInstancesOutput { + s.NextToken = &v + return s +} + +// SetSummaries sets the Summaries field's value. +func (s *ListStackInstancesOutput) SetSummaries(v []*StackInstanceSummary) *ListStackInstancesOutput { + s.Summaries = v + return s +} + +// The input for the ListStackResource action. +type ListStackResourcesInput struct { + _ struct{} `type:"structure"` + + // A string that identifies the next page of stack resources that you want to + // retrieve. + NextToken *string `min:"1" type:"string"` + + // The name or the unique stack ID that is associated with the stack, which + // are not always interchangeable: + // + // * Running stacks: You can specify either the stack's name or its unique + // stack ID. + // + // * Deleted stacks: You must specify the unique stack ID. + // + // Default: There is no default value. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListStackResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackResourcesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStackResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStackResourcesInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackResourcesInput) SetNextToken(v string) *ListStackResourcesInput { + s.NextToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *ListStackResourcesInput) SetStackName(v string) *ListStackResourcesInput { + s.StackName = &v + return s +} + +// The output for a ListStackResources action. +type ListStackResourcesOutput struct { + _ struct{} `type:"structure"` + + // If the output exceeds 1 MB, a string that identifies the next page of stack + // resources. If no additional page exists, this value is null. + NextToken *string `min:"1" type:"string"` + + // A list of StackResourceSummary structures. + StackResourceSummaries []*StackResourceSummary `type:"list"` +} + +// String returns the string representation +func (s ListStackResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackResourcesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackResourcesOutput) SetNextToken(v string) *ListStackResourcesOutput { + s.NextToken = &v + return s +} + +// SetStackResourceSummaries sets the StackResourceSummaries field's value. 
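+//
+// A minimal pagination sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())) and that the
+// generated ListStackResourcesPages helper (defined alongside the
+// ListStackResources operation) is available; it drives the NextToken loop via
+// a callback:
+//
+//	err := cfn.ListStackResourcesPages(&cloudformation.ListStackResourcesInput{
+//		StackName: aws.String("my-stack"), // hypothetical
+//	}, func(page *cloudformation.ListStackResourcesOutput, lastPage bool) bool {
+//		for _, r := range page.StackResourceSummaries {
+//			fmt.Println(aws.StringValue(r.LogicalResourceId), aws.StringValue(r.ResourceStatus))
+//		}
+//		return true // keep paging
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}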
+func (s *ListStackResourcesOutput) SetStackResourceSummaries(v []*StackResourceSummary) *ListStackResourcesOutput { + s.StackResourceSummaries = v + return s +} + +type ListStackSetOperationResultsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to be returned with a single call. If the number + // of available results exceeds this maximum, the response includes a NextToken + // value that you can assign to the NextToken request parameter to get the next + // set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // If the previous request didn't return all of the remaining results, the response + // object's NextToken parameter value is set to a token. To retrieve the next + // set of results, call ListStackSetOperationResults again and assign that token + // to the request object's NextToken parameter. If there are no remaining results, + // the previous response object's NextToken parameter is set to null. + NextToken *string `min:"1" type:"string"` + + // The ID of the stack set operation. + // + // OperationId is a required field + OperationId *string `min:"1" type:"string" required:"true"` + + // The name or unique ID of the stack set that you want to get operation results + // for. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListStackSetOperationResultsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackSetOperationResultsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStackSetOperationResultsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStackSetOperationResultsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.OperationId == nil { + invalidParams.Add(request.NewErrParamRequired("OperationId")) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListStackSetOperationResultsInput) SetMaxResults(v int64) *ListStackSetOperationResultsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackSetOperationResultsInput) SetNextToken(v string) *ListStackSetOperationResultsInput { + s.NextToken = &v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *ListStackSetOperationResultsInput) SetOperationId(v string) *ListStackSetOperationResultsInput { + s.OperationId = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *ListStackSetOperationResultsInput) SetStackSetName(v string) *ListStackSetOperationResultsInput { + s.StackSetName = &v + return s +} + +type ListStackSetOperationResultsOutput struct { + _ struct{} `type:"structure"` + + // If the request doesn't return all results, NextToken is set to a token. 
To + // retrieve the next set of results, call ListOperationResults again and assign + // that token to the request object's NextToken parameter. If there are no remaining + // results, NextToken is set to null. + NextToken *string `min:"1" type:"string"` + + // A list of StackSetOperationResultSummary structures that contain information + // about the specified operation results, for accounts and regions that are + // included in the operation. + Summaries []*StackSetOperationResultSummary `type:"list"` +} + +// String returns the string representation +func (s ListStackSetOperationResultsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackSetOperationResultsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackSetOperationResultsOutput) SetNextToken(v string) *ListStackSetOperationResultsOutput { + s.NextToken = &v + return s +} + +// SetSummaries sets the Summaries field's value. +func (s *ListStackSetOperationResultsOutput) SetSummaries(v []*StackSetOperationResultSummary) *ListStackSetOperationResultsOutput { + s.Summaries = v + return s +} + +type ListStackSetOperationsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to be returned with a single call. If the number + // of available results exceeds this maximum, the response includes a NextToken + // value that you can assign to the NextToken request parameter to get the next + // set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // If the previous paginated request didn't return all of the remaining results, + // the response object's NextToken parameter value is set to a token. To retrieve + // the next set of results, call ListStackSetOperations again and assign that + // token to the request object's NextToken parameter. If there are no remaining + // results, the previous response object's NextToken parameter is set to null. + NextToken *string `min:"1" type:"string"` + + // The name or unique ID of the stack set that you want to get operation summaries + // for. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListStackSetOperationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackSetOperationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStackSetOperationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStackSetOperationsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListStackSetOperationsInput) SetMaxResults(v int64) *ListStackSetOperationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
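+//
+// A minimal usage sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())) and a
+// hypothetical stack set name; this summarizes recent stack set operations:
+//
+//	ops, err := cfn.ListStackSetOperations(&cloudformation.ListStackSetOperationsInput{
+//		StackSetName: aws.String("my-stack-set"), // hypothetical
+//		MaxResults:   aws.Int64(10),
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	for _, op := range ops.Summaries {
+//		fmt.Println(aws.StringValue(op.OperationId), aws.StringValue(op.Action), aws.StringValue(op.Status))
+//	}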
+func (s *ListStackSetOperationsInput) SetNextToken(v string) *ListStackSetOperationsInput { + s.NextToken = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *ListStackSetOperationsInput) SetStackSetName(v string) *ListStackSetOperationsInput { + s.StackSetName = &v + return s +} + +type ListStackSetOperationsOutput struct { + _ struct{} `type:"structure"` + + // If the request doesn't return all results, NextToken is set to a token. To + // retrieve the next set of results, call ListOperationResults again and assign + // that token to the request object's NextToken parameter. If there are no remaining + // results, NextToken is set to null. + NextToken *string `min:"1" type:"string"` + + // A list of StackSetOperationSummary structures that contain summary information + // about operations for the specified stack set. + Summaries []*StackSetOperationSummary `type:"list"` +} + +// String returns the string representation +func (s ListStackSetOperationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackSetOperationsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackSetOperationsOutput) SetNextToken(v string) *ListStackSetOperationsOutput { + s.NextToken = &v + return s +} + +// SetSummaries sets the Summaries field's value. +func (s *ListStackSetOperationsOutput) SetSummaries(v []*StackSetOperationSummary) *ListStackSetOperationsOutput { + s.Summaries = v + return s +} + +type ListStackSetsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to be returned with a single call. If the number + // of available results exceeds this maximum, the response includes a NextToken + // value that you can assign to the NextToken request parameter to get the next + // set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // If the previous paginated request didn't return all of the remaining results, + // the response object's NextToken parameter value is set to a token. To retrieve + // the next set of results, call ListStackSets again and assign that token to + // the request object's NextToken parameter. If there are no remaining results, + // the previous response object's NextToken parameter is set to null. + NextToken *string `min:"1" type:"string"` + + // The status of the stack sets that you want to get summary information about. + Status *string `type:"string" enum:"StackSetStatus"` +} + +// String returns the string representation +func (s ListStackSetsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackSetsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStackSetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStackSetsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListStackSetsInput) SetMaxResults(v int64) *ListStackSetsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
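+//
+// A minimal usage sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())); the Status
+// filter uses the generated StackSetStatus constants:
+//
+//	sets, err := cfn.ListStackSets(&cloudformation.ListStackSetsInput{
+//		Status: aws.String(cloudformation.StackSetStatusActive),
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	for _, ss := range sets.Summaries {
+//		fmt.Println(aws.StringValue(ss.StackSetName), aws.StringValue(ss.Status))
+//	}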
+func (s *ListStackSetsInput) SetNextToken(v string) *ListStackSetsInput { + s.NextToken = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ListStackSetsInput) SetStatus(v string) *ListStackSetsInput { + s.Status = &v + return s +} + +type ListStackSetsOutput struct { + _ struct{} `type:"structure"` + + // If the request doesn't return all of the remaining results, NextToken is + // set to a token. To retrieve the next set of results, call ListStackInstances + // again and assign that token to the request object's NextToken parameter. + // If the request returns all results, NextToken is set to null. + NextToken *string `min:"1" type:"string"` + + // A list of StackSetSummary structures that contain information about the user's + // stack sets. + Summaries []*StackSetSummary `type:"list"` +} + +// String returns the string representation +func (s ListStackSetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStackSetsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStackSetsOutput) SetNextToken(v string) *ListStackSetsOutput { + s.NextToken = &v + return s +} + +// SetSummaries sets the Summaries field's value. +func (s *ListStackSetsOutput) SetSummaries(v []*StackSetSummary) *ListStackSetsOutput { + s.Summaries = v + return s +} + +// The input for ListStacks action. +type ListStacksInput struct { + _ struct{} `type:"structure"` + + // A string that identifies the next page of stacks that you want to retrieve. + NextToken *string `min:"1" type:"string"` + + // Stack status to use as a filter. Specify one or more stack status codes to + // list only stacks with the specified status codes. For a complete list of + // stack status codes, see the StackStatus parameter of the Stack data type. + StackStatusFilter []*string `type:"list"` +} + +// String returns the string representation +func (s ListStacksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStacksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStacksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStacksInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStacksInput) SetNextToken(v string) *ListStacksInput { + s.NextToken = &v + return s +} + +// SetStackStatusFilter sets the StackStatusFilter field's value. +func (s *ListStacksInput) SetStackStatusFilter(v []*string) *ListStacksInput { + s.StackStatusFilter = v + return s +} + +// The output for ListStacks action. +type ListStacksOutput struct { + _ struct{} `type:"structure"` + + // If the output exceeds 1 MB in size, a string that identifies the next page + // of stacks. If no additional page exists, this value is null. + NextToken *string `min:"1" type:"string"` + + // A list of StackSummary structures containing information about the specified + // stacks. 
+ StackSummaries []*StackSummary `type:"list"` +} + +// String returns the string representation +func (s ListStacksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStacksOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStacksOutput) SetNextToken(v string) *ListStacksOutput { + s.NextToken = &v + return s +} + +// SetStackSummaries sets the StackSummaries field's value. +func (s *ListStacksOutput) SetStackSummaries(v []*StackSummary) *ListStacksOutput { + s.StackSummaries = v + return s +} + +// The Output data type. +type Output struct { + _ struct{} `type:"structure"` + + // User defined description associated with the output. + Description *string `min:"1" type:"string"` + + // The name of the export associated with the output. + ExportName *string `type:"string"` + + // The key associated with the output. + OutputKey *string `type:"string"` + + // The value associated with the output. + OutputValue *string `type:"string"` +} + +// String returns the string representation +func (s Output) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Output) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *Output) SetDescription(v string) *Output { + s.Description = &v + return s +} + +// SetExportName sets the ExportName field's value. +func (s *Output) SetExportName(v string) *Output { + s.ExportName = &v + return s +} + +// SetOutputKey sets the OutputKey field's value. +func (s *Output) SetOutputKey(v string) *Output { + s.OutputKey = &v + return s +} + +// SetOutputValue sets the OutputValue field's value. +func (s *Output) SetOutputValue(v string) *Output { + s.OutputValue = &v + return s +} + +// The Parameter data type. +type Parameter struct { + _ struct{} `type:"structure"` + + // The key associated with the parameter. If you don't specify a key and value + // for a particular parameter, AWS CloudFormation uses the default value that + // is specified in your template. + ParameterKey *string `type:"string"` + + // The input value associated with the parameter. + ParameterValue *string `type:"string"` + + // Read-only. The value that corresponds to a Systems Manager parameter key. + // This field is returned only for SSM (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#aws-ssm-parameter-types) + // parameter types in the template. + ResolvedValue *string `type:"string"` + + // During a stack update, use the existing parameter value that the stack is + // using for a given parameter key. If you specify true, do not specify a parameter + // value. + UsePreviousValue *bool `type:"boolean"` +} + +// String returns the string representation +func (s Parameter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Parameter) GoString() string { + return s.String() +} + +// SetParameterKey sets the ParameterKey field's value. +func (s *Parameter) SetParameterKey(v string) *Parameter { + s.ParameterKey = &v + return s +} + +// SetParameterValue sets the ParameterValue field's value. +func (s *Parameter) SetParameterValue(v string) *Parameter { + s.ParameterValue = &v + return s +} + +// SetResolvedValue sets the ResolvedValue field's value. 
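+//
+// A minimal usage sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())) and hypothetical
+// stack and parameter names; UsePreviousValue keeps a stack's current value
+// without restating it during an update:
+//
+//	_, err := cfn.UpdateStack(&cloudformation.UpdateStackInput{
+//		StackName:           aws.String("my-stack"), // hypothetical
+//		UsePreviousTemplate: aws.Bool(true),
+//		Parameters: []*cloudformation.Parameter{
+//			{ParameterKey: aws.String("InstanceType"), ParameterValue: aws.String("t3.small")},
+//			{ParameterKey: aws.String("KeyPairName"), UsePreviousValue: aws.Bool(true)},
+//		},
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}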
+func (s *Parameter) SetResolvedValue(v string) *Parameter { + s.ResolvedValue = &v + return s +} + +// SetUsePreviousValue sets the UsePreviousValue field's value. +func (s *Parameter) SetUsePreviousValue(v bool) *Parameter { + s.UsePreviousValue = &v + return s +} + +// A set of criteria that AWS CloudFormation uses to validate parameter values. +// Although other constraints might be defined in the stack template, AWS CloudFormation +// returns only the AllowedValues property. +type ParameterConstraints struct { + _ struct{} `type:"structure"` + + // A list of values that are permitted for a parameter. + AllowedValues []*string `type:"list"` +} + +// String returns the string representation +func (s ParameterConstraints) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterConstraints) GoString() string { + return s.String() +} + +// SetAllowedValues sets the AllowedValues field's value. +func (s *ParameterConstraints) SetAllowedValues(v []*string) *ParameterConstraints { + s.AllowedValues = v + return s +} + +// The ParameterDeclaration data type. +type ParameterDeclaration struct { + _ struct{} `type:"structure"` + + // The default value of the parameter. + DefaultValue *string `type:"string"` + + // The description that is associate with the parameter. + Description *string `min:"1" type:"string"` + + // Flag that indicates whether the parameter value is shown as plain text in + // logs and in the AWS Management Console. + NoEcho *bool `type:"boolean"` + + // The criteria that AWS CloudFormation uses to validate parameter values. + ParameterConstraints *ParameterConstraints `type:"structure"` + + // The name that is associated with the parameter. + ParameterKey *string `type:"string"` + + // The type of parameter. + ParameterType *string `type:"string"` +} + +// String returns the string representation +func (s ParameterDeclaration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterDeclaration) GoString() string { + return s.String() +} + +// SetDefaultValue sets the DefaultValue field's value. +func (s *ParameterDeclaration) SetDefaultValue(v string) *ParameterDeclaration { + s.DefaultValue = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ParameterDeclaration) SetDescription(v string) *ParameterDeclaration { + s.Description = &v + return s +} + +// SetNoEcho sets the NoEcho field's value. +func (s *ParameterDeclaration) SetNoEcho(v bool) *ParameterDeclaration { + s.NoEcho = &v + return s +} + +// SetParameterConstraints sets the ParameterConstraints field's value. +func (s *ParameterDeclaration) SetParameterConstraints(v *ParameterConstraints) *ParameterDeclaration { + s.ParameterConstraints = v + return s +} + +// SetParameterKey sets the ParameterKey field's value. +func (s *ParameterDeclaration) SetParameterKey(v string) *ParameterDeclaration { + s.ParameterKey = &v + return s +} + +// SetParameterType sets the ParameterType field's value. +func (s *ParameterDeclaration) SetParameterType(v string) *ParameterDeclaration { + s.ParameterType = &v + return s +} + +// The ResourceChange structure describes the resource and the action that AWS +// CloudFormation will perform on it if you execute this change set. 
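+//
+// A minimal usage sketch, assuming a client built with
+// cfn := cloudformation.New(session.Must(session.NewSession())) and hypothetical
+// change set / stack names; ResourceChange values are typically read from
+// DescribeChangeSet output before deciding whether to execute the change set:
+//
+//	cs, err := cfn.DescribeChangeSet(&cloudformation.DescribeChangeSetInput{
+//		ChangeSetName: aws.String("my-changes"), // hypothetical
+//		StackName:     aws.String("my-stack"),   // hypothetical
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	for _, c := range cs.Changes {
+//		if rc := c.ResourceChange; rc != nil {
+//			fmt.Println(aws.StringValue(rc.Action), aws.StringValue(rc.LogicalResourceId), aws.StringValue(rc.Replacement))
+//		}
+//	}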
+type ResourceChange struct { + _ struct{} `type:"structure"` + + // The action that AWS CloudFormation takes on the resource, such as Add (adds + // a new resource), Modify (changes a resource), or Remove (deletes a resource). + Action *string `type:"string" enum:"ChangeAction"` + + // For the Modify action, a list of ResourceChangeDetail structures that describes + // the changes that AWS CloudFormation will make to the resource. + Details []*ResourceChangeDetail `type:"list"` + + // The resource's logical ID, which is defined in the stack's template. + LogicalResourceId *string `type:"string"` + + // The resource's physical ID (resource name). Resources that you are adding + // don't have physical IDs because they haven't been created. + PhysicalResourceId *string `type:"string"` + + // For the Modify action, indicates whether AWS CloudFormation will replace + // the resource by creating a new one and deleting the old one. This value depends + // on the value of the RequiresRecreation property in the ResourceTargetDefinition + // structure. For example, if the RequiresRecreation field is Always and the + // Evaluation field is Static, Replacement is True. If the RequiresRecreation + // field is Always and the Evaluation field is Dynamic, Replacement is Conditionally. + // + // If you have multiple changes with different RequiresRecreation values, the + // Replacement value depends on the change with the most impact. A RequiresRecreation + // value of Always has the most impact, followed by Conditionally, and then + // Never. + Replacement *string `type:"string" enum:"Replacement"` + + // The type of AWS CloudFormation resource, such as AWS::S3::Bucket. + ResourceType *string `min:"1" type:"string"` + + // For the Modify action, indicates which resource attribute is triggering this + // update, such as a change in the resource attribute's Metadata, Properties, + // or Tags. + Scope []*string `type:"list"` +} + +// String returns the string representation +func (s ResourceChange) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceChange) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *ResourceChange) SetAction(v string) *ResourceChange { + s.Action = &v + return s +} + +// SetDetails sets the Details field's value. +func (s *ResourceChange) SetDetails(v []*ResourceChangeDetail) *ResourceChange { + s.Details = v + return s +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *ResourceChange) SetLogicalResourceId(v string) *ResourceChange { + s.LogicalResourceId = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *ResourceChange) SetPhysicalResourceId(v string) *ResourceChange { + s.PhysicalResourceId = &v + return s +} + +// SetReplacement sets the Replacement field's value. +func (s *ResourceChange) SetReplacement(v string) *ResourceChange { + s.Replacement = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ResourceChange) SetResourceType(v string) *ResourceChange { + s.ResourceType = &v + return s +} + +// SetScope sets the Scope field's value. +func (s *ResourceChange) SetScope(v []*string) *ResourceChange { + s.Scope = v + return s +} + +// For a resource with Modify as the action, the ResourceChange structure describes +// the changes AWS CloudFormation will make to that resource. 
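+//
+// Illustrative sketch (editorial note, not part of the generated SDK code): the
+// Target field carries the recreation behavior for a property change, so a
+// detail entry can be screened like this, assuming detail is a
+// *ResourceChangeDetail from a ResourceChange's Details list:
+//
+//    if detail.Target != nil && detail.Target.RequiresRecreation != nil &&
+//        *detail.Target.RequiresRecreation == "Always" {
+//        // with a Static evaluation, "Always" means the resource will be replaced
+//    }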
+type ResourceChangeDetail struct { + _ struct{} `type:"structure"` + + // The identity of the entity that triggered this change. This entity is a member + // of the group that is specified by the ChangeSource field. For example, if + // you modified the value of the KeyPairName parameter, the CausingEntity is + // the name of the parameter (KeyPairName). + // + // If the ChangeSource value is DirectModification, no value is given for CausingEntity. + CausingEntity *string `type:"string"` + + // The group to which the CausingEntity value belongs. There are five entity + // groups: + // + // * ResourceReference entities are Ref intrinsic functions that refer to + // resources in the template, such as { "Ref" : "MyEC2InstanceResource" }. + // + // * ParameterReference entities are Ref intrinsic functions that get template + // parameter values, such as { "Ref" : "MyPasswordParameter" }. + // + // * ResourceAttribute entities are Fn::GetAtt intrinsic functions that get + // resource attribute values, such as { "Fn::GetAtt" : [ "MyEC2InstanceResource", + // "PublicDnsName" ] }. + // + // * DirectModification entities are changes that are made directly to the + // template. + // + // * Automatic entities are AWS::CloudFormation::Stack resource types, which + // are also known as nested stacks. If you made no changes to the AWS::CloudFormation::Stack + // resource, AWS CloudFormation sets the ChangeSource to Automatic because + // the nested stack's template might have changed. Changes to a nested stack's + // template aren't visible to AWS CloudFormation until you run an update + // on the parent stack. + ChangeSource *string `type:"string" enum:"ChangeSource"` + + // Indicates whether AWS CloudFormation can determine the target value, and + // whether the target value will change before you execute a change set. + // + // For Static evaluations, AWS CloudFormation can determine that the target + // value will change, and its value. For example, if you directly modify the + // InstanceType property of an EC2 instance, AWS CloudFormation knows that this + // property value will change, and its value, so this is a Static evaluation. + // + // For Dynamic evaluations, cannot determine the target value because it depends + // on the result of an intrinsic function, such as a Ref or Fn::GetAtt intrinsic + // function, when the stack is updated. For example, if your template includes + // a reference to a resource that is conditionally recreated, the value of the + // reference (the physical ID of the resource) might change, depending on if + // the resource is recreated. If the resource is recreated, it will have a new + // physical ID, so all references to that resource will also be updated. + Evaluation *string `type:"string" enum:"EvaluationType"` + + // A ResourceTargetDefinition structure that describes the field that AWS CloudFormation + // will change and whether the resource will be recreated. + Target *ResourceTargetDefinition `type:"structure"` +} + +// String returns the string representation +func (s ResourceChangeDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceChangeDetail) GoString() string { + return s.String() +} + +// SetCausingEntity sets the CausingEntity field's value. +func (s *ResourceChangeDetail) SetCausingEntity(v string) *ResourceChangeDetail { + s.CausingEntity = &v + return s +} + +// SetChangeSource sets the ChangeSource field's value. 
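+//
+// Illustrative sketch (editorial note, not part of the generated SDK code):
+// CausingEntity is only populated for some ChangeSource groups (it is empty for
+// DirectModification), so the two fields are usually read together. Assuming
+// detail is a *ResourceChangeDetail:
+//
+//    if detail.ChangeSource != nil && *detail.ChangeSource == "ParameterReference" &&
+//        detail.CausingEntity != nil {
+//        // *detail.CausingEntity names the template parameter that triggered the change
+//    }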
+func (s *ResourceChangeDetail) SetChangeSource(v string) *ResourceChangeDetail { + s.ChangeSource = &v + return s +} + +// SetEvaluation sets the Evaluation field's value. +func (s *ResourceChangeDetail) SetEvaluation(v string) *ResourceChangeDetail { + s.Evaluation = &v + return s +} + +// SetTarget sets the Target field's value. +func (s *ResourceChangeDetail) SetTarget(v *ResourceTargetDefinition) *ResourceChangeDetail { + s.Target = v + return s +} + +// The field that AWS CloudFormation will change, such as the name of a resource's +// property, and whether the resource will be recreated. +type ResourceTargetDefinition struct { + _ struct{} `type:"structure"` + + // Indicates which resource attribute is triggering this update, such as a change + // in the resource attribute's Metadata, Properties, or Tags. + Attribute *string `type:"string" enum:"ResourceAttribute"` + + // If the Attribute value is Properties, the name of the property. For all other + // attributes, the value is null. + Name *string `type:"string"` + + // If the Attribute value is Properties, indicates whether a change to this + // property causes the resource to be recreated. The value can be Never, Always, + // or Conditionally. To determine the conditions for a Conditionally recreation, + // see the update behavior for that property (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide. + RequiresRecreation *string `type:"string" enum:"RequiresRecreation"` +} + +// String returns the string representation +func (s ResourceTargetDefinition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceTargetDefinition) GoString() string { + return s.String() +} + +// SetAttribute sets the Attribute field's value. +func (s *ResourceTargetDefinition) SetAttribute(v string) *ResourceTargetDefinition { + s.Attribute = &v + return s +} + +// SetName sets the Name field's value. +func (s *ResourceTargetDefinition) SetName(v string) *ResourceTargetDefinition { + s.Name = &v + return s +} + +// SetRequiresRecreation sets the RequiresRecreation field's value. +func (s *ResourceTargetDefinition) SetRequiresRecreation(v string) *ResourceTargetDefinition { + s.RequiresRecreation = &v + return s +} + +// Structure containing the rollback triggers for AWS CloudFormation to monitor +// during stack creation and updating operations, and for the specified monitoring +// period afterwards. +// +// Rollback triggers enable you to have AWS CloudFormation monitor the state +// of your application during stack creation and updating, and to roll back +// that operation if the application breaches the threshold of any of the alarms +// you've specified. For more information, see Monitor and Roll Back Stack Operations +// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-rollback-triggers.html). +type RollbackConfiguration struct { + _ struct{} `type:"structure"` + + // The amount of time, in minutes, during which CloudFormation should monitor + // all the rollback triggers after the stack creation or update operation deploys + // all necessary resources. + // + // The default is 0 minutes. + // + // If you specify a monitoring period but do not specify any rollback triggers, + // CloudFormation still waits the specified period of time before cleaning up + // old resources after update operations. 
You can use this monitoring period + // to perform any manual stack validation desired, and manually cancel the stack + // creation or update (using CancelUpdateStack (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CancelUpdateStack.html), + // for example) as necessary. + // + // If you specify 0 for this parameter, CloudFormation still monitors the specified + // rollback triggers during stack creation and update operations. Then, for + // update operations, it begins disposing of old resources immediately once + // the operation completes. + MonitoringTimeInMinutes *int64 `type:"integer"` + + // The triggers to monitor during stack creation or update actions. + // + // By default, AWS CloudFormation saves the rollback triggers specified for + // a stack and applies them to any subsequent update operations for the stack, + // unless you specify otherwise. If you do specify rollback triggers for this + // parameter, those triggers replace any list of triggers previously specified + // for the stack. This means: + // + // * To use the rollback triggers previously specified for this stack, if + // any, don't specify this parameter. + // + // * To specify new or updated rollback triggers, you must specify all the + // triggers that you want used for this stack, even triggers you've specifed + // before (for example, when creating the stack or during a previous stack + // update). Any triggers that you don't include in the updated list of triggers + // are no longer applied to the stack. + // + // * To remove all currently specified triggers, specify an empty list for + // this parameter. + // + // If a specified trigger is missing, the entire stack operation fails and is + // rolled back. + RollbackTriggers []*RollbackTrigger `type:"list"` +} + +// String returns the string representation +func (s RollbackConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RollbackConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RollbackConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RollbackConfiguration"} + if s.RollbackTriggers != nil { + for i, v := range s.RollbackTriggers { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RollbackTriggers", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMonitoringTimeInMinutes sets the MonitoringTimeInMinutes field's value. +func (s *RollbackConfiguration) SetMonitoringTimeInMinutes(v int64) *RollbackConfiguration { + s.MonitoringTimeInMinutes = &v + return s +} + +// SetRollbackTriggers sets the RollbackTriggers field's value. +func (s *RollbackConfiguration) SetRollbackTriggers(v []*RollbackTrigger) *RollbackConfiguration { + s.RollbackTriggers = v + return s +} + +// A rollback trigger AWS CloudFormation monitors during creation and updating +// of stacks. If any of the alarms you specify goes to ALARM state during the +// stack operation or within the specified monitoring period afterwards, CloudFormation +// rolls back the entire stack operation. +type RollbackTrigger struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the rollback trigger. + // + // If a specified trigger is missing, the entire stack operation fails and is + // rolled back. 
+ // + // Arn is a required field + Arn *string `type:"string" required:"true"` + + // The resource type of the rollback trigger. Currently, AWS::CloudWatch::Alarm + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cw-alarm.html) + // is the only supported resource type. + // + // Type is a required field + Type *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s RollbackTrigger) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RollbackTrigger) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RollbackTrigger) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RollbackTrigger"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *RollbackTrigger) SetArn(v string) *RollbackTrigger { + s.Arn = &v + return s +} + +// SetType sets the Type field's value. +func (s *RollbackTrigger) SetType(v string) *RollbackTrigger { + s.Type = &v + return s +} + +// The input for the SetStackPolicy action. +type SetStackPolicyInput struct { + _ struct{} `type:"structure"` + + // The name or unique stack ID that you want to associate a policy with. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` + + // Structure containing the stack policy body. For more information, go to + // Prevent Updates to Stack Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html) + // in the AWS CloudFormation User Guide. You can specify either the StackPolicyBody + // or the StackPolicyURL parameter, but not both. + StackPolicyBody *string `min:"1" type:"string"` + + // Location of a file containing the stack policy. The URL must point to a policy + // (maximum size: 16 KB) located in an S3 bucket in the same region as the stack. + // You can specify either the StackPolicyBody or the StackPolicyURL parameter, + // but not both. + StackPolicyURL *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s SetStackPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetStackPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetStackPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetStackPolicyInput"} + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackPolicyBody != nil && len(*s.StackPolicyBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyBody", 1)) + } + if s.StackPolicyURL != nil && len(*s.StackPolicyURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyURL", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackName sets the StackName field's value. +func (s *SetStackPolicyInput) SetStackName(v string) *SetStackPolicyInput { + s.StackName = &v + return s +} + +// SetStackPolicyBody sets the StackPolicyBody field's value. 
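+//
+// Illustrative sketch (editorial note, not part of the generated SDK code):
+// StackPolicyBody and StackPolicyURL are mutually exclusive, so a request is
+// normally built with one of the two and checked with Validate before being
+// sent. The stack name and policy document below are made up:
+//
+//    in := (&SetStackPolicyInput{}).
+//        SetStackName("my-stack").
+//        SetStackPolicyBody(`{"Statement":[{"Effect":"Deny","Action":"Update:*","Principal":"*","Resource":"*"}]}`)
+//    if err := in.Validate(); err != nil {
+//        // a missing StackName or an empty policy string is reported here
+//    }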
+func (s *SetStackPolicyInput) SetStackPolicyBody(v string) *SetStackPolicyInput { + s.StackPolicyBody = &v + return s +} + +// SetStackPolicyURL sets the StackPolicyURL field's value. +func (s *SetStackPolicyInput) SetStackPolicyURL(v string) *SetStackPolicyInput { + s.StackPolicyURL = &v + return s +} + +type SetStackPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SetStackPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetStackPolicyOutput) GoString() string { + return s.String() +} + +// The input for the SignalResource action. +type SignalResourceInput struct { + _ struct{} `type:"structure"` + + // The logical ID of the resource that you want to signal. The logical ID is + // the name of the resource that given in the template. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The stack name or unique stack ID that includes the resource that you want + // to signal. + // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` + + // The status of the signal, which is either success or failure. A failure signal + // causes AWS CloudFormation to immediately fail the stack creation or update. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"ResourceSignalStatus"` + + // A unique ID of the signal. When you signal Amazon EC2 instances or Auto Scaling + // groups, specify the instance ID that you are signaling as the unique ID. + // If you send multiple signals to a single resource (such as signaling a wait + // condition), each signal requires a different unique ID. + // + // UniqueId is a required field + UniqueId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s SignalResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SignalResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SignalResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SignalResourceInput"} + if s.LogicalResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("LogicalResourceId")) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.UniqueId == nil { + invalidParams.Add(request.NewErrParamRequired("UniqueId")) + } + if s.UniqueId != nil && len(*s.UniqueId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UniqueId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *SignalResourceInput) SetLogicalResourceId(v string) *SignalResourceInput { + s.LogicalResourceId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *SignalResourceInput) SetStackName(v string) *SignalResourceInput { + s.StackName = &v + return s +} + +// SetStatus sets the Status field's value. 
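+//
+// Illustrative sketch (editorial note, not part of the generated SDK code): all
+// four fields are required, and each signal sent to the same resource (for
+// example a wait condition) needs its own UniqueId. The values below are
+// hypothetical:
+//
+//    sig := (&SignalResourceInput{}).
+//        SetStackName("my-stack").
+//        SetLogicalResourceId("WebServerWaitCondition").
+//        SetUniqueId("signal-0001").
+//        SetStatus("SUCCESS")
+//    if err := sig.Validate(); err != nil {
+//        // missing or too-short fields are reported before the request is sent
+//    }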
+func (s *SignalResourceInput) SetStatus(v string) *SignalResourceInput { + s.Status = &v + return s +} + +// SetUniqueId sets the UniqueId field's value. +func (s *SignalResourceInput) SetUniqueId(v string) *SignalResourceInput { + s.UniqueId = &v + return s +} + +type SignalResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SignalResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SignalResourceOutput) GoString() string { + return s.String() +} + +// The Stack data type. +type Stack struct { + _ struct{} `type:"structure"` + + // The capabilities allowed in the stack. + Capabilities []*string `type:"list"` + + // The unique ID of the change set. + ChangeSetId *string `min:"1" type:"string"` + + // The time at which the stack was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The time the stack was deleted. + DeletionTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // A user-defined description associated with the stack. + Description *string `min:"1" type:"string"` + + // Boolean to enable or disable rollback on stack creation failures: + // + // * true: disable rollback + // + // * false: enable rollback + DisableRollback *bool `type:"boolean"` + + // Whether termination protection is enabled for the stack. + // + // For nested stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html), + // termination protection is set on the root stack and cannot be changed directly + // on the nested stack. For more information, see Protecting a Stack From Being + // Deleted (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) + // in the AWS CloudFormation User Guide. + EnableTerminationProtection *bool `type:"boolean"` + + // The time the stack was last updated. This field will only be returned if + // the stack has been updated at least once. + LastUpdatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // SNS topic ARNs to which stack related events are published. + NotificationARNs []*string `type:"list"` + + // A list of output structures. + Outputs []*Output `type:"list"` + + // A list of Parameter structures. + Parameters []*Parameter `type:"list"` + + // For nested stacks--stacks created as resources for another stack--the stack + // ID of the direct parent of this stack. For the first level of nested stacks, + // the root stack is also the parent stack. + // + // For more information, see Working with Nested Stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) + // in the AWS CloudFormation User Guide. + ParentId *string `type:"string"` + + // The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) + // role that is associated with the stack. During a stack operation, AWS CloudFormation + // uses this role's credentials to make calls on your behalf. + RoleARN *string `min:"20" type:"string"` + + // The rollback triggers for AWS CloudFormation to monitor during stack creation + // and updating operations, and for the specified monitoring period afterwards. 
+ RollbackConfiguration *RollbackConfiguration `type:"structure"` + + // For nested stacks--stacks created as resources for another stack--the stack + // ID of the the top-level stack to which the nested stack ultimately belongs. + // + // For more information, see Working with Nested Stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) + // in the AWS CloudFormation User Guide. + RootId *string `type:"string"` + + // Unique identifier of the stack. + StackId *string `type:"string"` + + // The name associated with the stack. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` + + // Current status of the stack. + // + // StackStatus is a required field + StackStatus *string `type:"string" required:"true" enum:"StackStatus"` + + // Success/failure message associated with the stack status. + StackStatusReason *string `type:"string"` + + // A list of Tags that specify information about the stack. + Tags []*Tag `type:"list"` + + // The amount of time within which stack creation should complete. + TimeoutInMinutes *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s Stack) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Stack) GoString() string { + return s.String() +} + +// SetCapabilities sets the Capabilities field's value. +func (s *Stack) SetCapabilities(v []*string) *Stack { + s.Capabilities = v + return s +} + +// SetChangeSetId sets the ChangeSetId field's value. +func (s *Stack) SetChangeSetId(v string) *Stack { + s.ChangeSetId = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *Stack) SetCreationTime(v time.Time) *Stack { + s.CreationTime = &v + return s +} + +// SetDeletionTime sets the DeletionTime field's value. +func (s *Stack) SetDeletionTime(v time.Time) *Stack { + s.DeletionTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Stack) SetDescription(v string) *Stack { + s.Description = &v + return s +} + +// SetDisableRollback sets the DisableRollback field's value. +func (s *Stack) SetDisableRollback(v bool) *Stack { + s.DisableRollback = &v + return s +} + +// SetEnableTerminationProtection sets the EnableTerminationProtection field's value. +func (s *Stack) SetEnableTerminationProtection(v bool) *Stack { + s.EnableTerminationProtection = &v + return s +} + +// SetLastUpdatedTime sets the LastUpdatedTime field's value. +func (s *Stack) SetLastUpdatedTime(v time.Time) *Stack { + s.LastUpdatedTime = &v + return s +} + +// SetNotificationARNs sets the NotificationARNs field's value. +func (s *Stack) SetNotificationARNs(v []*string) *Stack { + s.NotificationARNs = v + return s +} + +// SetOutputs sets the Outputs field's value. +func (s *Stack) SetOutputs(v []*Output) *Stack { + s.Outputs = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *Stack) SetParameters(v []*Parameter) *Stack { + s.Parameters = v + return s +} + +// SetParentId sets the ParentId field's value. +func (s *Stack) SetParentId(v string) *Stack { + s.ParentId = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *Stack) SetRoleARN(v string) *Stack { + s.RoleARN = &v + return s +} + +// SetRollbackConfiguration sets the RollbackConfiguration field's value. 
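+//
+// Illustrative sketch (editorial note, not part of the generated SDK code): a
+// rollback configuration pairs an optional monitoring period with a list of
+// CloudWatch alarm triggers, each of which requires both Arn and Type. The
+// alarm ARN below is made up:
+//
+//    trigger := (&RollbackTrigger{}).
+//        SetArn("arn:aws:cloudwatch:us-east-1:111122223333:alarm:MyAppAlarm").
+//        SetType("AWS::CloudWatch::Alarm")
+//    cfg := (&RollbackConfiguration{}).
+//        SetMonitoringTimeInMinutes(15).
+//        SetRollbackTriggers([]*RollbackTrigger{trigger})
+//    // passing an empty slice instead would clear any previously saved triggers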
+func (s *Stack) SetRollbackConfiguration(v *RollbackConfiguration) *Stack { + s.RollbackConfiguration = v + return s +} + +// SetRootId sets the RootId field's value. +func (s *Stack) SetRootId(v string) *Stack { + s.RootId = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *Stack) SetStackId(v string) *Stack { + s.StackId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *Stack) SetStackName(v string) *Stack { + s.StackName = &v + return s +} + +// SetStackStatus sets the StackStatus field's value. +func (s *Stack) SetStackStatus(v string) *Stack { + s.StackStatus = &v + return s +} + +// SetStackStatusReason sets the StackStatusReason field's value. +func (s *Stack) SetStackStatusReason(v string) *Stack { + s.StackStatusReason = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *Stack) SetTags(v []*Tag) *Stack { + s.Tags = v + return s +} + +// SetTimeoutInMinutes sets the TimeoutInMinutes field's value. +func (s *Stack) SetTimeoutInMinutes(v int64) *Stack { + s.TimeoutInMinutes = &v + return s +} + +// The StackEvent data type. +type StackEvent struct { + _ struct{} `type:"structure"` + + // The token passed to the operation that generated this event. + // + // All events triggered by a given stack operation are assigned the same client + // request token, which you can use to track operations. For example, if you + // execute a CreateStack operation with the token token1, then all the StackEvents + // generated by that operation will have ClientRequestToken set as token1. + // + // In the console, stack operations display the client request token on the + // Events tab. Stack operations that are initiated from the console use the + // token format Console-StackOperation-ID, which helps you easily identify the + // stack operation . For example, if you create a stack using the console, each + // stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002. + ClientRequestToken *string `min:"1" type:"string"` + + // The unique ID of this event. + // + // EventId is a required field + EventId *string `type:"string" required:"true"` + + // The logical name of the resource specified in the template. + LogicalResourceId *string `type:"string"` + + // The name or unique identifier associated with the physical instance of the + // resource. + PhysicalResourceId *string `type:"string"` + + // BLOB of the properties used to create the resource. + ResourceProperties *string `type:"string"` + + // Current status of the resource. + ResourceStatus *string `type:"string" enum:"ResourceStatus"` + + // Success/failure message associated with the resource. + ResourceStatusReason *string `type:"string"` + + // Type of resource. (For more information, go to AWS Resource Types Reference + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide.) + ResourceType *string `min:"1" type:"string"` + + // The unique ID name of the instance of the stack. + // + // StackId is a required field + StackId *string `type:"string" required:"true"` + + // The name associated with a stack. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` + + // Time the status was updated. 
+ // + // Timestamp is a required field + Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` +} + +// String returns the string representation +func (s StackEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackEvent) GoString() string { + return s.String() +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *StackEvent) SetClientRequestToken(v string) *StackEvent { + s.ClientRequestToken = &v + return s +} + +// SetEventId sets the EventId field's value. +func (s *StackEvent) SetEventId(v string) *StackEvent { + s.EventId = &v + return s +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *StackEvent) SetLogicalResourceId(v string) *StackEvent { + s.LogicalResourceId = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *StackEvent) SetPhysicalResourceId(v string) *StackEvent { + s.PhysicalResourceId = &v + return s +} + +// SetResourceProperties sets the ResourceProperties field's value. +func (s *StackEvent) SetResourceProperties(v string) *StackEvent { + s.ResourceProperties = &v + return s +} + +// SetResourceStatus sets the ResourceStatus field's value. +func (s *StackEvent) SetResourceStatus(v string) *StackEvent { + s.ResourceStatus = &v + return s +} + +// SetResourceStatusReason sets the ResourceStatusReason field's value. +func (s *StackEvent) SetResourceStatusReason(v string) *StackEvent { + s.ResourceStatusReason = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *StackEvent) SetResourceType(v string) *StackEvent { + s.ResourceType = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *StackEvent) SetStackId(v string) *StackEvent { + s.StackId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *StackEvent) SetStackName(v string) *StackEvent { + s.StackName = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *StackEvent) SetTimestamp(v time.Time) *StackEvent { + s.Timestamp = &v + return s +} + +// An AWS CloudFormation stack, in a specific account and region, that's part +// of a stack set operation. A stack instance is a reference to an attempted +// or actual stack in a given account within a given region. A stack instance +// can exist without a stack—for example, if the stack couldn't be created for +// some reason. A stack instance is associated with only one stack set. Each +// stack instance contains the ID of its associated stack set, as well as the +// ID of the actual stack and the stack status. +type StackInstance struct { + _ struct{} `type:"structure"` + + // The name of the AWS account that the stack instance is associated with. + Account *string `type:"string"` + + // A list of parameters from the stack set template whose values have been overridden + // in this stack instance. + ParameterOverrides []*Parameter `type:"list"` + + // The name of the AWS region that the stack instance is associated with. + Region *string `type:"string"` + + // The ID of the stack instance. + StackId *string `type:"string"` + + // The name or unique ID of the stack set that the stack instance is associated + // with. + StackSetId *string `type:"string"` + + // The status of the stack instance, in terms of its synchronization with its + // associated stack set. 
+ // + // * INOPERABLE: A DeleteStackInstances operation has failed and left the + // stack in an unstable state. Stacks in this state are excluded from further + // UpdateStackSet operations. You might need to perform a DeleteStackInstances + // operation, with RetainStacks set to true, to delete the stack instance, + // and then delete the stack manually. + // + // * OUTDATED: The stack isn't currently up to date with the stack set because: + // + // The associated stack failed during a CreateStackSet or UpdateStackSet operation. + // + // + // The stack was part of a CreateStackSet or UpdateStackSet operation that failed + // or was stopped before the stack was created or updated. + // + // * CURRENT: The stack is currently up to date with the stack set. + Status *string `type:"string" enum:"StackInstanceStatus"` + + // The explanation for the specific status code that is assigned to this stack + // instance. + StatusReason *string `type:"string"` +} + +// String returns the string representation +func (s StackInstance) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackInstance) GoString() string { + return s.String() +} + +// SetAccount sets the Account field's value. +func (s *StackInstance) SetAccount(v string) *StackInstance { + s.Account = &v + return s +} + +// SetParameterOverrides sets the ParameterOverrides field's value. +func (s *StackInstance) SetParameterOverrides(v []*Parameter) *StackInstance { + s.ParameterOverrides = v + return s +} + +// SetRegion sets the Region field's value. +func (s *StackInstance) SetRegion(v string) *StackInstance { + s.Region = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *StackInstance) SetStackId(v string) *StackInstance { + s.StackId = &v + return s +} + +// SetStackSetId sets the StackSetId field's value. +func (s *StackInstance) SetStackSetId(v string) *StackInstance { + s.StackSetId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StackInstance) SetStatus(v string) *StackInstance { + s.Status = &v + return s +} + +// SetStatusReason sets the StatusReason field's value. +func (s *StackInstance) SetStatusReason(v string) *StackInstance { + s.StatusReason = &v + return s +} + +// The structure that contains summary information about a stack instance. +type StackInstanceSummary struct { + _ struct{} `type:"structure"` + + // The name of the AWS account that the stack instance is associated with. + Account *string `type:"string"` + + // The name of the AWS region that the stack instance is associated with. + Region *string `type:"string"` + + // The ID of the stack instance. + StackId *string `type:"string"` + + // The name or unique ID of the stack set that the stack instance is associated + // with. + StackSetId *string `type:"string"` + + // The status of the stack instance, in terms of its synchronization with its + // associated stack set. + // + // * INOPERABLE: A DeleteStackInstances operation has failed and left the + // stack in an unstable state. Stacks in this state are excluded from further + // UpdateStackSet operations. You might need to perform a DeleteStackInstances + // operation, with RetainStacks set to true, to delete the stack instance, + // and then delete the stack manually. + // + // * OUTDATED: The stack isn't currently up to date with the stack set because: + // + // The associated stack failed during a CreateStackSet or UpdateStackSet operation. 
+ // + // + // The stack was part of a CreateStackSet or UpdateStackSet operation that failed + // or was stopped before the stack was created or updated. + // + // * CURRENT: The stack is currently up to date with the stack set. + Status *string `type:"string" enum:"StackInstanceStatus"` + + // The explanation for the specific status code assigned to this stack instance. + StatusReason *string `type:"string"` +} + +// String returns the string representation +func (s StackInstanceSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackInstanceSummary) GoString() string { + return s.String() +} + +// SetAccount sets the Account field's value. +func (s *StackInstanceSummary) SetAccount(v string) *StackInstanceSummary { + s.Account = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *StackInstanceSummary) SetRegion(v string) *StackInstanceSummary { + s.Region = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *StackInstanceSummary) SetStackId(v string) *StackInstanceSummary { + s.StackId = &v + return s +} + +// SetStackSetId sets the StackSetId field's value. +func (s *StackInstanceSummary) SetStackSetId(v string) *StackInstanceSummary { + s.StackSetId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StackInstanceSummary) SetStatus(v string) *StackInstanceSummary { + s.Status = &v + return s +} + +// SetStatusReason sets the StatusReason field's value. +func (s *StackInstanceSummary) SetStatusReason(v string) *StackInstanceSummary { + s.StatusReason = &v + return s +} + +// The StackResource data type. +type StackResource struct { + _ struct{} `type:"structure"` + + // User defined description associated with the resource. + Description *string `min:"1" type:"string"` + + // The logical name of the resource specified in the template. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The name or unique identifier that corresponds to a physical instance ID + // of a resource supported by AWS CloudFormation. + PhysicalResourceId *string `type:"string"` + + // Current status of the resource. + // + // ResourceStatus is a required field + ResourceStatus *string `type:"string" required:"true" enum:"ResourceStatus"` + + // Success/failure message associated with the resource. + ResourceStatusReason *string `type:"string"` + + // Type of resource. (For more information, go to AWS Resource Types Reference + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide.) + // + // ResourceType is a required field + ResourceType *string `min:"1" type:"string" required:"true"` + + // Unique identifier of the stack. + StackId *string `type:"string"` + + // The name associated with the stack. + StackName *string `type:"string"` + + // Time the status was updated. + // + // Timestamp is a required field + Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` +} + +// String returns the string representation +func (s StackResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackResource) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. 
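+//
+// Illustrative sketch (editorial note, not part of the generated SDK code):
+// callers typically only read these fields, for example to report resources
+// that failed to create. Assuming res is a *StackResource returned by
+// DescribeStackResources:
+//
+//    if res.ResourceStatus != nil && *res.ResourceStatus == "CREATE_FAILED" {
+//        reason := ""
+//        if res.ResourceStatusReason != nil {
+//            reason = *res.ResourceStatusReason
+//        }
+//        fmt.Printf("%s (%s): %s\n", *res.LogicalResourceId, *res.ResourceType, reason)
+//    }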
+func (s *StackResource) SetDescription(v string) *StackResource { + s.Description = &v + return s +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *StackResource) SetLogicalResourceId(v string) *StackResource { + s.LogicalResourceId = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *StackResource) SetPhysicalResourceId(v string) *StackResource { + s.PhysicalResourceId = &v + return s +} + +// SetResourceStatus sets the ResourceStatus field's value. +func (s *StackResource) SetResourceStatus(v string) *StackResource { + s.ResourceStatus = &v + return s +} + +// SetResourceStatusReason sets the ResourceStatusReason field's value. +func (s *StackResource) SetResourceStatusReason(v string) *StackResource { + s.ResourceStatusReason = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *StackResource) SetResourceType(v string) *StackResource { + s.ResourceType = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *StackResource) SetStackId(v string) *StackResource { + s.StackId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *StackResource) SetStackName(v string) *StackResource { + s.StackName = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *StackResource) SetTimestamp(v time.Time) *StackResource { + s.Timestamp = &v + return s +} + +// Contains detailed information about the specified stack resource. +type StackResourceDetail struct { + _ struct{} `type:"structure"` + + // User defined description associated with the resource. + Description *string `min:"1" type:"string"` + + // Time the status was updated. + // + // LastUpdatedTimestamp is a required field + LastUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The logical name of the resource specified in the template. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The content of the Metadata attribute declared for the resource. For more + // information, see Metadata Attribute (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-metadata.html) + // in the AWS CloudFormation User Guide. + Metadata *string `type:"string"` + + // The name or unique identifier that corresponds to a physical instance ID + // of a resource supported by AWS CloudFormation. + PhysicalResourceId *string `type:"string"` + + // Current status of the resource. + // + // ResourceStatus is a required field + ResourceStatus *string `type:"string" required:"true" enum:"ResourceStatus"` + + // Success/failure message associated with the resource. + ResourceStatusReason *string `type:"string"` + + // Type of resource. ((For more information, go to AWS Resource Types Reference + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide.) + // + // ResourceType is a required field + ResourceType *string `min:"1" type:"string" required:"true"` + + // Unique identifier of the stack. + StackId *string `type:"string"` + + // The name associated with the stack. 
+ StackName *string `type:"string"` +} + +// String returns the string representation +func (s StackResourceDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackResourceDetail) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *StackResourceDetail) SetDescription(v string) *StackResourceDetail { + s.Description = &v + return s +} + +// SetLastUpdatedTimestamp sets the LastUpdatedTimestamp field's value. +func (s *StackResourceDetail) SetLastUpdatedTimestamp(v time.Time) *StackResourceDetail { + s.LastUpdatedTimestamp = &v + return s +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *StackResourceDetail) SetLogicalResourceId(v string) *StackResourceDetail { + s.LogicalResourceId = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *StackResourceDetail) SetMetadata(v string) *StackResourceDetail { + s.Metadata = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *StackResourceDetail) SetPhysicalResourceId(v string) *StackResourceDetail { + s.PhysicalResourceId = &v + return s +} + +// SetResourceStatus sets the ResourceStatus field's value. +func (s *StackResourceDetail) SetResourceStatus(v string) *StackResourceDetail { + s.ResourceStatus = &v + return s +} + +// SetResourceStatusReason sets the ResourceStatusReason field's value. +func (s *StackResourceDetail) SetResourceStatusReason(v string) *StackResourceDetail { + s.ResourceStatusReason = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *StackResourceDetail) SetResourceType(v string) *StackResourceDetail { + s.ResourceType = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *StackResourceDetail) SetStackId(v string) *StackResourceDetail { + s.StackId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *StackResourceDetail) SetStackName(v string) *StackResourceDetail { + s.StackName = &v + return s +} + +// Contains high-level information about the specified stack resource. +type StackResourceSummary struct { + _ struct{} `type:"structure"` + + // Time the status was updated. + // + // LastUpdatedTimestamp is a required field + LastUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The logical name of the resource specified in the template. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The name or unique identifier that corresponds to a physical instance ID + // of the resource. + PhysicalResourceId *string `type:"string"` + + // Current status of the resource. + // + // ResourceStatus is a required field + ResourceStatus *string `type:"string" required:"true" enum:"ResourceStatus"` + + // Success/failure message associated with the resource. + ResourceStatusReason *string `type:"string"` + + // Type of resource. (For more information, go to AWS Resource Types Reference + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide.) 
+ // + // ResourceType is a required field + ResourceType *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s StackResourceSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackResourceSummary) GoString() string { + return s.String() +} + +// SetLastUpdatedTimestamp sets the LastUpdatedTimestamp field's value. +func (s *StackResourceSummary) SetLastUpdatedTimestamp(v time.Time) *StackResourceSummary { + s.LastUpdatedTimestamp = &v + return s +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *StackResourceSummary) SetLogicalResourceId(v string) *StackResourceSummary { + s.LogicalResourceId = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *StackResourceSummary) SetPhysicalResourceId(v string) *StackResourceSummary { + s.PhysicalResourceId = &v + return s +} + +// SetResourceStatus sets the ResourceStatus field's value. +func (s *StackResourceSummary) SetResourceStatus(v string) *StackResourceSummary { + s.ResourceStatus = &v + return s +} + +// SetResourceStatusReason sets the ResourceStatusReason field's value. +func (s *StackResourceSummary) SetResourceStatusReason(v string) *StackResourceSummary { + s.ResourceStatusReason = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *StackResourceSummary) SetResourceType(v string) *StackResourceSummary { + s.ResourceType = &v + return s +} + +// A structure that contains information about a stack set. A stack set enables +// you to provision stacks into AWS accounts and across regions by using a single +// CloudFormation template. In the stack set, you specify the template to use, +// as well as any parameters and capabilities that the template requires. +type StackSet struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Number (ARN) of the IAM role used to create or update + // the stack set. + // + // Use customized administrator roles to control which users or groups can manage + // specific stack sets within the same administrator account. For more information, + // see Prerequisites: Granting Permissions for Stack Set Operations (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + AdministrationRoleARN *string `min:"20" type:"string"` + + // The capabilities that are allowed in the stack set. Some stack set templates + // might include resources that can affect permissions in your AWS account—for + // example, by creating new AWS Identity and Access Management (IAM) users. + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates. (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities) + Capabilities []*string `type:"list"` + + // A description of the stack set that you specify when the stack set is created + // or updated. + Description *string `min:"1" type:"string"` + + // The name of the IAM execution role used to create or update the stack set. + // + // Use customized execution roles to control which stack resources users and + // groups can include in their stack sets. + ExecutionRoleName *string `min:"1" type:"string"` + + // A list of input parameters for a stack set. + Parameters []*Parameter `type:"list"` + + // The Amazon Resource Number (ARN) of the stack set. 
+ StackSetARN *string `type:"string"` + + // The ID of the stack set. + StackSetId *string `type:"string"` + + // The name that's associated with the stack set. + StackSetName *string `type:"string"` + + // The status of the stack set. + Status *string `type:"string" enum:"StackSetStatus"` + + // A list of tags that specify information about the stack set. A maximum number + // of 50 tags can be specified. + Tags []*Tag `type:"list"` + + // The structure that contains the body of the template that was used to create + // or update the stack set. + TemplateBody *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s StackSet) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackSet) GoString() string { + return s.String() +} + +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *StackSet) SetAdministrationRoleARN(v string) *StackSet { + s.AdministrationRoleARN = &v + return s +} + +// SetCapabilities sets the Capabilities field's value. +func (s *StackSet) SetCapabilities(v []*string) *StackSet { + s.Capabilities = v + return s +} + +// SetDescription sets the Description field's value. +func (s *StackSet) SetDescription(v string) *StackSet { + s.Description = &v + return s +} + +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *StackSet) SetExecutionRoleName(v string) *StackSet { + s.ExecutionRoleName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *StackSet) SetParameters(v []*Parameter) *StackSet { + s.Parameters = v + return s +} + +// SetStackSetARN sets the StackSetARN field's value. +func (s *StackSet) SetStackSetARN(v string) *StackSet { + s.StackSetARN = &v + return s +} + +// SetStackSetId sets the StackSetId field's value. +func (s *StackSet) SetStackSetId(v string) *StackSet { + s.StackSetId = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *StackSet) SetStackSetName(v string) *StackSet { + s.StackSetName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StackSet) SetStatus(v string) *StackSet { + s.Status = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *StackSet) SetTags(v []*Tag) *StackSet { + s.Tags = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *StackSet) SetTemplateBody(v string) *StackSet { + s.TemplateBody = &v + return s +} + +// The structure that contains information about a stack set operation. +type StackSetOperation struct { + _ struct{} `type:"structure"` + + // The type of stack set operation: CREATE, UPDATE, or DELETE. Create and delete + // operations affect only the specified stack set instances that are associated + // with the specified stack set. Update operations affect both the stack set + // itself, as well as all associated stack set instances. + Action *string `type:"string" enum:"StackSetOperationAction"` + + // The Amazon Resource Number (ARN) of the IAM role used to perform this stack + // set operation. + // + // Use customized administrator roles to control which users or groups can manage + // specific stack sets within the same administrator account. For more information, + // see Define Permissions for Multiple Administrators (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. 
+ AdministrationRoleARN *string `min:"20" type:"string"` + + // The time at which the operation was initiated. Note that the creation times + // for the stack set operation might differ from the creation time of the individual + // stacks themselves. This is because AWS CloudFormation needs to perform preparatory + // work for the operation, such as dispatching the work to the requested regions, + // before actually creating the first stacks. + CreationTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The time at which the stack set operation ended, across all accounts and + // regions specified. Note that this doesn't necessarily mean that the stack + // set operation was successful, or even attempted, in each account or region. + EndTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The name of the IAM execution role used to create or update the stack set. + // + // Use customized execution roles to control which stack resources users and + // groups can include in their stack sets. + ExecutionRoleName *string `min:"1" type:"string"` + + // The unique ID of a stack set operation. + OperationId *string `min:"1" type:"string"` + + // The preferences for how AWS CloudFormation performs this stack set operation. + OperationPreferences *StackSetOperationPreferences `type:"structure"` + + // For stack set operations of action type DELETE, specifies whether to remove + // the stack instances from the specified stack set, but doesn't delete the + // stacks. You can't reassociate a retained stack, or add an existing, saved + // stack to a new stack set. + RetainStacks *bool `type:"boolean"` + + // The ID of the stack set. + StackSetId *string `type:"string"` + + // The status of the operation. + // + // * FAILED: The operation exceeded the specified failure tolerance. The + // failure tolerance value that you've set for an operation is applied for + // each region during stack create and update operations. If the number of + // failed stacks within a region exceeds the failure tolerance, the status + // of the operation in the region is set to FAILED. This in turn sets the + // status of the operation as a whole to FAILED, and AWS CloudFormation cancels + // the operation in any remaining regions. + // + // * RUNNING: The operation is currently being performed. + // + // * STOPPED: The user has cancelled the operation. + // + // * STOPPING: The operation is in the process of stopping, at user request. + // + // + // * SUCCEEDED: The operation completed creating or updating all the specified + // stacks without exceeding the failure tolerance for the operation. + Status *string `type:"string" enum:"StackSetOperationStatus"` +} + +// String returns the string representation +func (s StackSetOperation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackSetOperation) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *StackSetOperation) SetAction(v string) *StackSetOperation { + s.Action = &v + return s +} + +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *StackSetOperation) SetAdministrationRoleARN(v string) *StackSetOperation { + s.AdministrationRoleARN = &v + return s +} + +// SetCreationTimestamp sets the CreationTimestamp field's value. 
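+//
+// Illustrative sketch (editorial note, not part of the generated SDK code): when
+// polling an operation, the Status values listed above split into terminal and
+// in-flight states. Assuming op is a *StackSetOperation returned by
+// DescribeStackSetOperation:
+//
+//    done := false
+//    if op.Status != nil {
+//        switch *op.Status {
+//        case "SUCCEEDED", "FAILED", "STOPPED":
+//            done = true
+//        case "RUNNING", "STOPPING":
+//            // still in progress; poll again after a delay
+//        }
+//    }
+//    _ = done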
+func (s *StackSetOperation) SetCreationTimestamp(v time.Time) *StackSetOperation { + s.CreationTimestamp = &v + return s +} + +// SetEndTimestamp sets the EndTimestamp field's value. +func (s *StackSetOperation) SetEndTimestamp(v time.Time) *StackSetOperation { + s.EndTimestamp = &v + return s +} + +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *StackSetOperation) SetExecutionRoleName(v string) *StackSetOperation { + s.ExecutionRoleName = &v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *StackSetOperation) SetOperationId(v string) *StackSetOperation { + s.OperationId = &v + return s +} + +// SetOperationPreferences sets the OperationPreferences field's value. +func (s *StackSetOperation) SetOperationPreferences(v *StackSetOperationPreferences) *StackSetOperation { + s.OperationPreferences = v + return s +} + +// SetRetainStacks sets the RetainStacks field's value. +func (s *StackSetOperation) SetRetainStacks(v bool) *StackSetOperation { + s.RetainStacks = &v + return s +} + +// SetStackSetId sets the StackSetId field's value. +func (s *StackSetOperation) SetStackSetId(v string) *StackSetOperation { + s.StackSetId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StackSetOperation) SetStatus(v string) *StackSetOperation { + s.Status = &v + return s +} + +// The user-specified preferences for how AWS CloudFormation performs a stack +// set operation. +// +// For more information on maximum concurrent accounts and failure tolerance, +// see Stack set operation options (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html#stackset-ops-options). +type StackSetOperationPreferences struct { + _ struct{} `type:"structure"` + + // The number of accounts, per region, for which this operation can fail before + // AWS CloudFormation stops the operation in that region. If the operation is + // stopped in a region, AWS CloudFormation doesn't attempt the operation in + // any subsequent regions. + // + // Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage + // (but not both). + FailureToleranceCount *int64 `type:"integer"` + + // The percentage of accounts, per region, for which this stack operation can + // fail before AWS CloudFormation stops the operation in that region. If the + // operation is stopped in a region, AWS CloudFormation doesn't attempt the + // operation in any subsequent regions. + // + // When calculating the number of accounts based on the specified percentage, + // AWS CloudFormation rounds down to the next whole number. + // + // Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage, + // but not both. + FailureTolerancePercentage *int64 `type:"integer"` + + // The maximum number of accounts in which to perform this operation at one + // time. This is dependent on the value of FailureToleranceCount—MaxConcurrentCount + // is at most one more than the FailureToleranceCount . + // + // Note that this setting lets you specify the maximum for operations. For large + // deployments, under certain circumstances the actual number of accounts acted + // upon concurrently may be lower due to service throttling. + // + // Conditional: You must specify either MaxConcurrentCount or MaxConcurrentPercentage, + // but not both. + MaxConcurrentCount *int64 `min:"1" type:"integer"` + + // The maximum percentage of accounts in which to perform this operation at + // one time. 
+ //
+ // When calculating the number of accounts based on the specified percentage,
+ // AWS CloudFormation rounds down to the next whole number. This is true except
+ // in cases where rounding down would result in zero. In this case, CloudFormation
+ // sets the number as one instead.
+ //
+ // Note that this setting lets you specify the maximum for operations. For large
+ // deployments, under certain circumstances the actual number of accounts acted
+ // upon concurrently may be lower due to service throttling.
+ //
+ // Conditional: You must specify either MaxConcurrentCount or MaxConcurrentPercentage,
+ // but not both.
+ MaxConcurrentPercentage *int64 `min:"1" type:"integer"`
+
+ // The order of the regions where you want to perform the stack operation.
+ RegionOrder []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s StackSetOperationPreferences) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StackSetOperationPreferences) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *StackSetOperationPreferences) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "StackSetOperationPreferences"}
+ if s.MaxConcurrentCount != nil && *s.MaxConcurrentCount < 1 {
+ invalidParams.Add(request.NewErrParamMinValue("MaxConcurrentCount", 1))
+ }
+ if s.MaxConcurrentPercentage != nil && *s.MaxConcurrentPercentage < 1 {
+ invalidParams.Add(request.NewErrParamMinValue("MaxConcurrentPercentage", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetFailureToleranceCount sets the FailureToleranceCount field's value.
+func (s *StackSetOperationPreferences) SetFailureToleranceCount(v int64) *StackSetOperationPreferences {
+ s.FailureToleranceCount = &v
+ return s
+}
+
+// SetFailureTolerancePercentage sets the FailureTolerancePercentage field's value.
+func (s *StackSetOperationPreferences) SetFailureTolerancePercentage(v int64) *StackSetOperationPreferences {
+ s.FailureTolerancePercentage = &v
+ return s
+}
+
+// SetMaxConcurrentCount sets the MaxConcurrentCount field's value.
+func (s *StackSetOperationPreferences) SetMaxConcurrentCount(v int64) *StackSetOperationPreferences {
+ s.MaxConcurrentCount = &v
+ return s
+}
+
+// SetMaxConcurrentPercentage sets the MaxConcurrentPercentage field's value.
+func (s *StackSetOperationPreferences) SetMaxConcurrentPercentage(v int64) *StackSetOperationPreferences {
+ s.MaxConcurrentPercentage = &v
+ return s
+}
+
+// SetRegionOrder sets the RegionOrder field's value.
+func (s *StackSetOperationPreferences) SetRegionOrder(v []*string) *StackSetOperationPreferences {
+ s.RegionOrder = v
+ return s
+}
+
+// The structure that contains information about a specified operation's results
+// for a given account in a given region.
+type StackSetOperationResultSummary struct {
+ _ struct{} `type:"structure"`
+
+ // The name of the AWS account for this operation result.
+ Account *string `type:"string"`
+
+ // The results of the account gate function AWS CloudFormation invokes, if present,
+ // before proceeding with stack set operations in an account.
+ AccountGateResult *AccountGateResult `type:"structure"`
+
+ // The name of the AWS region for this operation result.
+ Region *string `type:"string"`
+
+ // The result status of the stack set operation for the given account in the
+ // given region.
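// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the generated SDK source: how a caller
// outside this package might express the failure-tolerance and concurrency
// trade-off documented above for StackSetOperationPreferences. The counts and
// region names are hypothetical.
// ---------------------------------------------------------------------------
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	// Tolerate at most one failed account per region and act on at most two
	// accounts at a time; per the documentation above, MaxConcurrentCount is
	// at most one more than FailureToleranceCount.
	prefs := (&cloudformation.StackSetOperationPreferences{}).
		SetFailureToleranceCount(1).
		SetMaxConcurrentCount(2).
		SetRegionOrder(aws.StringSlice([]string{"us-east-1", "eu-west-1"}))

	// Validate enforces only the client-side minimum-value constraints shown
	// above; the count-versus-percentage exclusivity is checked by the service.
	if err := prefs.Validate(); err != nil {
		fmt.Println("invalid preferences:", err)
	}
}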
+ // + // * CANCELLED: The operation in the specified account and region has been + // cancelled. This is either because a user has stopped the stack set operation, + // or because the failure tolerance of the stack set operation has been exceeded. + // + // * FAILED: The operation in the specified account and region failed. + // + // If the stack set operation fails in enough accounts within a region, the + // failure tolerance for the stack set operation as a whole might be exceeded. + // + // + // * RUNNING: The operation in the specified account and region is currently + // in progress. + // + // * PENDING: The operation in the specified account and region has yet to + // start. + // + // * SUCCEEDED: The operation in the specified account and region completed + // successfully. + Status *string `type:"string" enum:"StackSetOperationResultStatus"` + + // The reason for the assigned result status. + StatusReason *string `type:"string"` +} + +// String returns the string representation +func (s StackSetOperationResultSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackSetOperationResultSummary) GoString() string { + return s.String() +} + +// SetAccount sets the Account field's value. +func (s *StackSetOperationResultSummary) SetAccount(v string) *StackSetOperationResultSummary { + s.Account = &v + return s +} + +// SetAccountGateResult sets the AccountGateResult field's value. +func (s *StackSetOperationResultSummary) SetAccountGateResult(v *AccountGateResult) *StackSetOperationResultSummary { + s.AccountGateResult = v + return s +} + +// SetRegion sets the Region field's value. +func (s *StackSetOperationResultSummary) SetRegion(v string) *StackSetOperationResultSummary { + s.Region = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StackSetOperationResultSummary) SetStatus(v string) *StackSetOperationResultSummary { + s.Status = &v + return s +} + +// SetStatusReason sets the StatusReason field's value. +func (s *StackSetOperationResultSummary) SetStatusReason(v string) *StackSetOperationResultSummary { + s.StatusReason = &v + return s +} + +// The structures that contain summary information about the specified operation. +type StackSetOperationSummary struct { + _ struct{} `type:"structure"` + + // The type of operation: CREATE, UPDATE, or DELETE. Create and delete operations + // affect only the specified stack instances that are associated with the specified + // stack set. Update operations affect both the stack set itself as well as + // all associated stack set instances. + Action *string `type:"string" enum:"StackSetOperationAction"` + + // The time at which the operation was initiated. Note that the creation times + // for the stack set operation might differ from the creation time of the individual + // stacks themselves. This is because AWS CloudFormation needs to perform preparatory + // work for the operation, such as dispatching the work to the requested regions, + // before actually creating the first stacks. + CreationTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The time at which the stack set operation ended, across all accounts and + // regions specified. Note that this doesn't necessarily mean that the stack + // set operation was successful, or even attempted, in each account or region. + EndTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The unique ID of the stack set operation. 
+ OperationId *string `min:"1" type:"string"` + + // The overall status of the operation. + // + // * FAILED: The operation exceeded the specified failure tolerance. The + // failure tolerance value that you've set for an operation is applied for + // each region during stack create and update operations. If the number of + // failed stacks within a region exceeds the failure tolerance, the status + // of the operation in the region is set to FAILED. This in turn sets the + // status of the operation as a whole to FAILED, and AWS CloudFormation cancels + // the operation in any remaining regions. + // + // * RUNNING: The operation is currently being performed. + // + // * STOPPED: The user has cancelled the operation. + // + // * STOPPING: The operation is in the process of stopping, at user request. + // + // + // * SUCCEEDED: The operation completed creating or updating all the specified + // stacks without exceeding the failure tolerance for the operation. + Status *string `type:"string" enum:"StackSetOperationStatus"` +} + +// String returns the string representation +func (s StackSetOperationSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackSetOperationSummary) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *StackSetOperationSummary) SetAction(v string) *StackSetOperationSummary { + s.Action = &v + return s +} + +// SetCreationTimestamp sets the CreationTimestamp field's value. +func (s *StackSetOperationSummary) SetCreationTimestamp(v time.Time) *StackSetOperationSummary { + s.CreationTimestamp = &v + return s +} + +// SetEndTimestamp sets the EndTimestamp field's value. +func (s *StackSetOperationSummary) SetEndTimestamp(v time.Time) *StackSetOperationSummary { + s.EndTimestamp = &v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *StackSetOperationSummary) SetOperationId(v string) *StackSetOperationSummary { + s.OperationId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StackSetOperationSummary) SetStatus(v string) *StackSetOperationSummary { + s.Status = &v + return s +} + +// The structures that contain summary information about the specified stack +// set. +type StackSetSummary struct { + _ struct{} `type:"structure"` + + // A description of the stack set that you specify when the stack set is created + // or updated. + Description *string `min:"1" type:"string"` + + // The ID of the stack set. + StackSetId *string `type:"string"` + + // The name of the stack set. + StackSetName *string `type:"string"` + + // The status of the stack set. + Status *string `type:"string" enum:"StackSetStatus"` +} + +// String returns the string representation +func (s StackSetSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackSetSummary) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *StackSetSummary) SetDescription(v string) *StackSetSummary { + s.Description = &v + return s +} + +// SetStackSetId sets the StackSetId field's value. +func (s *StackSetSummary) SetStackSetId(v string) *StackSetSummary { + s.StackSetId = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *StackSetSummary) SetStackSetName(v string) *StackSetSummary { + s.StackSetName = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *StackSetSummary) SetStatus(v string) *StackSetSummary {
+ s.Status = &v
+ return s
+}
+
+// The StackSummary Data Type
+type StackSummary struct {
+ _ struct{} `type:"structure"`
+
+ // The time the stack was created.
+ //
+ // CreationTime is a required field
+ CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
+
+ // The time the stack was deleted.
+ DeletionTime *time.Time `type:"timestamp" timestampFormat:"iso8601"`
+
+ // The time the stack was last updated. This field will only be returned if
+ // the stack has been updated at least once.
+ LastUpdatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"`
+
+ // For nested stacks--stacks created as resources for another stack--the stack
+ // ID of the direct parent of this stack. For the first level of nested stacks,
+ // the root stack is also the parent stack.
+ //
+ // For more information, see Working with Nested Stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html)
+ // in the AWS CloudFormation User Guide.
+ ParentId *string `type:"string"`
+
+ // For nested stacks--stacks created as resources for another stack--the stack
+ // ID of the top-level stack to which the nested stack ultimately belongs.
+ //
+ // For more information, see Working with Nested Stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html)
+ // in the AWS CloudFormation User Guide.
+ RootId *string `type:"string"`
+
+ // Unique stack identifier.
+ StackId *string `type:"string"`
+
+ // The name associated with the stack.
+ //
+ // StackName is a required field
+ StackName *string `type:"string" required:"true"`
+
+ // The current status of the stack.
+ //
+ // StackStatus is a required field
+ StackStatus *string `type:"string" required:"true" enum:"StackStatus"`
+
+ // Success/Failure message associated with the stack status.
+ StackStatusReason *string `type:"string"`
+
+ // The template description of the template used to create the stack.
+ TemplateDescription *string `type:"string"`
+}
+
+// String returns the string representation
+func (s StackSummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StackSummary) GoString() string {
+ return s.String()
+}
+
+// SetCreationTime sets the CreationTime field's value.
+func (s *StackSummary) SetCreationTime(v time.Time) *StackSummary {
+ s.CreationTime = &v
+ return s
+}
+
+// SetDeletionTime sets the DeletionTime field's value.
+func (s *StackSummary) SetDeletionTime(v time.Time) *StackSummary {
+ s.DeletionTime = &v
+ return s
+}
+
+// SetLastUpdatedTime sets the LastUpdatedTime field's value.
+func (s *StackSummary) SetLastUpdatedTime(v time.Time) *StackSummary {
+ s.LastUpdatedTime = &v
+ return s
+}
+
+// SetParentId sets the ParentId field's value.
+func (s *StackSummary) SetParentId(v string) *StackSummary {
+ s.ParentId = &v
+ return s
+}
+
+// SetRootId sets the RootId field's value.
+func (s *StackSummary) SetRootId(v string) *StackSummary {
+ s.RootId = &v
+ return s
+}
+
+// SetStackId sets the StackId field's value.
+func (s *StackSummary) SetStackId(v string) *StackSummary {
+ s.StackId = &v
+ return s
+}
+
+// SetStackName sets the StackName field's value.
+func (s *StackSummary) SetStackName(v string) *StackSummary {
+ s.StackName = &v
+ return s
+}
+
+// SetStackStatus sets the StackStatus field's value.
+func (s *StackSummary) SetStackStatus(v string) *StackSummary { + s.StackStatus = &v + return s +} + +// SetStackStatusReason sets the StackStatusReason field's value. +func (s *StackSummary) SetStackStatusReason(v string) *StackSummary { + s.StackStatusReason = &v + return s +} + +// SetTemplateDescription sets the TemplateDescription field's value. +func (s *StackSummary) SetTemplateDescription(v string) *StackSummary { + s.TemplateDescription = &v + return s +} + +type StopStackSetOperationInput struct { + _ struct{} `type:"structure"` + + // The ID of the stack operation. + // + // OperationId is a required field + OperationId *string `min:"1" type:"string" required:"true"` + + // The name or unique ID of the stack set that you want to stop the operation + // for. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s StopStackSetOperationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopStackSetOperationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopStackSetOperationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopStackSetOperationInput"} + if s.OperationId == nil { + invalidParams.Add(request.NewErrParamRequired("OperationId")) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOperationId sets the OperationId field's value. +func (s *StopStackSetOperationInput) SetOperationId(v string) *StopStackSetOperationInput { + s.OperationId = &v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *StopStackSetOperationInput) SetStackSetName(v string) *StopStackSetOperationInput { + s.StackSetName = &v + return s +} + +type StopStackSetOperationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StopStackSetOperationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopStackSetOperationOutput) GoString() string { + return s.String() +} + +// The Tag type enables you to specify a key-value pair that can be used to +// store information about an AWS CloudFormation stack. +type Tag struct { + _ struct{} `type:"structure"` + + // Required. A string used to identify this tag. You can specify a maximum of + // 128 characters for a tag key. Tags owned by Amazon Web Services (AWS) have + // the reserved prefix: aws:. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // Required. A string containing the value for this tag. You can specify a maximum + // of 256 characters for a tag value. + // + // Value is a required field + Value *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
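// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the generated SDK source: building and
// locally validating the Tag key-value pair documented above. The key and
// value are hypothetical; keys must avoid the reserved "aws:" prefix.
// ---------------------------------------------------------------------------
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	tag := (&cloudformation.Tag{}).SetKey("team").SetValue("platform")

	// Validate reports a missing or empty Key/Value before any API call is made.
	if err := tag.Validate(); err != nil {
		fmt.Println("invalid tag:", err)
		return
	}
	fmt.Println("tag ok:", tag.String())
}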
+func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +// The TemplateParameter data type. +type TemplateParameter struct { + _ struct{} `type:"structure"` + + // The default value associated with the parameter. + DefaultValue *string `type:"string"` + + // User defined description associated with the parameter. + Description *string `min:"1" type:"string"` + + // Flag indicating whether the parameter should be displayed as plain text in + // logs and UIs. + NoEcho *bool `type:"boolean"` + + // The name associated with the parameter. + ParameterKey *string `type:"string"` +} + +// String returns the string representation +func (s TemplateParameter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TemplateParameter) GoString() string { + return s.String() +} + +// SetDefaultValue sets the DefaultValue field's value. +func (s *TemplateParameter) SetDefaultValue(v string) *TemplateParameter { + s.DefaultValue = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *TemplateParameter) SetDescription(v string) *TemplateParameter { + s.Description = &v + return s +} + +// SetNoEcho sets the NoEcho field's value. +func (s *TemplateParameter) SetNoEcho(v bool) *TemplateParameter { + s.NoEcho = &v + return s +} + +// SetParameterKey sets the ParameterKey field's value. +func (s *TemplateParameter) SetParameterKey(v string) *TemplateParameter { + s.ParameterKey = &v + return s +} + +// The input for an UpdateStack action. +type UpdateStackInput struct { + _ struct{} `type:"structure"` + + // A list of values that you must specify before AWS CloudFormation can update + // certain stacks. Some stack templates might include resources that can affect + // permissions in your AWS account, for example, by creating new AWS Identity + // and Access Management (IAM) users. For those stacks, you must explicitly + // acknowledge their capabilities by specifying this parameter. + // + // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. 
The following + // resources require you to specify this parameter: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html), + // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html), + // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html), + // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html), + // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html), + // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html), + // and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html). + // If your stack template contains these resources, we recommend that you review + // all permissions associated with them and edit their permissions if necessary. + // + // If you have IAM resources, you can specify either capability. If you have + // IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If + // you don't specify this parameter, this action returns an InsufficientCapabilities + // error. + // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). + Capabilities []*string `type:"list"` + + // A unique identifier for this UpdateStack request. Specify this token if you + // plan to retry requests so that AWS CloudFormation knows that you're not attempting + // to update a stack with the same name. You might retry UpdateStack requests + // to ensure that AWS CloudFormation successfully received them. + // + // All events triggered by a given stack operation are assigned the same client + // request token, which you can use to track operations. For example, if you + // execute a CreateStack operation with the token token1, then all the StackEvents + // generated by that operation will have ClientRequestToken set as token1. + // + // In the console, stack operations display the client request token on the + // Events tab. Stack operations that are initiated from the console use the + // token format Console-StackOperation-ID, which helps you easily identify the + // stack operation . For example, if you create a stack using the console, each + // stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002. + ClientRequestToken *string `min:"1" type:"string"` + + // Amazon Simple Notification Service topic Amazon Resource Names (ARNs) that + // AWS CloudFormation associates with the stack. Specify an empty list to remove + // all notification topics. + NotificationARNs []*string `type:"list"` + + // A list of Parameter structures that specify input parameters for the stack. + // For more information, see the Parameter (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) + // data type. + Parameters []*Parameter `type:"list"` + + // The template resource types that you have permissions to work with for this + // update stack action, such as AWS::EC2::Instance, AWS::EC2::*, or Custom::MyCustomInstance. 
+ // + // If the list of resource types doesn't include a resource that you're updating, + // the stack update fails. By default, AWS CloudFormation grants permissions + // to all resource types. AWS Identity and Access Management (IAM) uses this + // parameter for AWS CloudFormation-specific condition keys in IAM policies. + // For more information, see Controlling Access with AWS Identity and Access + // Management (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html). + ResourceTypes []*string `type:"list"` + + // The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) + // role that AWS CloudFormation assumes to update the stack. AWS CloudFormation + // uses the role's credentials to make calls on your behalf. AWS CloudFormation + // always uses this role for all future operations on the stack. As long as + // users have permission to operate on the stack, AWS CloudFormation uses this + // role even if the users don't have permission to pass it. Ensure that the + // role grants least privilege. + // + // If you don't specify a value, AWS CloudFormation uses the role that was previously + // associated with the stack. If no role is available, AWS CloudFormation uses + // a temporary session that is generated from your user credentials. + RoleARN *string `min:"20" type:"string"` + + // The rollback triggers for AWS CloudFormation to monitor during stack creation + // and updating operations, and for the specified monitoring period afterwards. + RollbackConfiguration *RollbackConfiguration `type:"structure"` + + // The name or unique stack ID of the stack to update. + // + // StackName is a required field + StackName *string `type:"string" required:"true"` + + // Structure containing a new stack policy body. You can specify either the + // StackPolicyBody or the StackPolicyURL parameter, but not both. + // + // You might update the stack policy, for example, in order to protect a new + // resource that you created during a stack update. If you do not specify a + // stack policy, the current policy that is associated with the stack is unchanged. + StackPolicyBody *string `min:"1" type:"string"` + + // Structure containing the temporary overriding stack policy body. You can + // specify either the StackPolicyDuringUpdateBody or the StackPolicyDuringUpdateURL + // parameter, but not both. + // + // If you want to update protected resources, specify a temporary overriding + // stack policy during this update. If you do not specify a stack policy, the + // current policy that is associated with the stack will be used. + StackPolicyDuringUpdateBody *string `min:"1" type:"string"` + + // Location of a file containing the temporary overriding stack policy. The + // URL must point to a policy (max size: 16KB) located in an S3 bucket in the + // same region as the stack. You can specify either the StackPolicyDuringUpdateBody + // or the StackPolicyDuringUpdateURL parameter, but not both. + // + // If you want to update protected resources, specify a temporary overriding + // stack policy during this update. If you do not specify a stack policy, the + // current policy that is associated with the stack will be used. + StackPolicyDuringUpdateURL *string `min:"1" type:"string"` + + // Location of a file containing the updated stack policy. The URL must point + // to a policy (max size: 16KB) located in an S3 bucket in the same region as + // the stack. You can specify either the StackPolicyBody or the StackPolicyURL + // parameter, but not both. 
+ // + // You might update the stack policy, for example, in order to protect a new + // resource that you created during a stack update. If you do not specify a + // stack policy, the current policy that is associated with the stack is unchanged. + StackPolicyURL *string `min:"1" type:"string"` + + // Key-value pairs to associate with this stack. AWS CloudFormation also propagates + // these tags to supported resources in the stack. You can specify a maximum + // number of 50 tags. + // + // If you don't specify this parameter, AWS CloudFormation doesn't modify the + // stack's tags. If you specify an empty value, AWS CloudFormation removes all + // associated tags. + Tags []*Tag `type:"list"` + + // Structure containing the template body with a minimum length of 1 byte and + // a maximum length of 51,200 bytes. (For more information, go to Template Anatomy + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide.) + // + // Conditional: You must specify only one of the following parameters: TemplateBody, + // TemplateURL, or set the UsePreviousTemplate to true. + TemplateBody *string `min:"1" type:"string"` + + // Location of file containing the template body. The URL must point to a template + // that is located in an Amazon S3 bucket. For more information, go to Template + // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must specify only one of the following parameters: TemplateBody, + // TemplateURL, or set the UsePreviousTemplate to true. + TemplateURL *string `min:"1" type:"string"` + + // Reuse the existing template that is associated with the stack that you are + // updating. + // + // Conditional: You must specify only one of the following parameters: TemplateBody, + // TemplateURL, or set the UsePreviousTemplate to true. + UsePreviousTemplate *bool `type:"boolean"` +} + +// String returns the string representation +func (s UpdateStackInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateStackInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
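// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the generated SDK source: a minimal
// UpdateStack call that reuses the stack's existing template and acknowledges
// IAM capabilities, as described in the UpdateStackInput documentation above.
// The stack name is hypothetical and error handling is abbreviated.
// ---------------------------------------------------------------------------
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	input := (&cloudformation.UpdateStackInput{}).
		SetStackName("my-example-stack").
		SetUsePreviousTemplate(true).
		SetCapabilities(aws.StringSlice([]string{"CAPABILITY_IAM"}))

	// Client-side validation catches problems such as a missing StackName
	// before the request is sent.
	if err := input.Validate(); err != nil {
		log.Fatal(err)
	}

	out, err := svc.UpdateStack(input)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("updating stack:", aws.StringValue(out.StackId))
}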
+func (s *UpdateStackInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateStackInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } + if s.RoleARN != nil && len(*s.RoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 20)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackPolicyBody != nil && len(*s.StackPolicyBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyBody", 1)) + } + if s.StackPolicyDuringUpdateBody != nil && len(*s.StackPolicyDuringUpdateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyDuringUpdateBody", 1)) + } + if s.StackPolicyDuringUpdateURL != nil && len(*s.StackPolicyDuringUpdateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyDuringUpdateURL", 1)) + } + if s.StackPolicyURL != nil && len(*s.StackPolicyURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackPolicyURL", 1)) + } + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + if s.RollbackConfiguration != nil { + if err := s.RollbackConfiguration.Validate(); err != nil { + invalidParams.AddNested("RollbackConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCapabilities sets the Capabilities field's value. +func (s *UpdateStackInput) SetCapabilities(v []*string) *UpdateStackInput { + s.Capabilities = v + return s +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *UpdateStackInput) SetClientRequestToken(v string) *UpdateStackInput { + s.ClientRequestToken = &v + return s +} + +// SetNotificationARNs sets the NotificationARNs field's value. +func (s *UpdateStackInput) SetNotificationARNs(v []*string) *UpdateStackInput { + s.NotificationARNs = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *UpdateStackInput) SetParameters(v []*Parameter) *UpdateStackInput { + s.Parameters = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *UpdateStackInput) SetResourceTypes(v []*string) *UpdateStackInput { + s.ResourceTypes = v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *UpdateStackInput) SetRoleARN(v string) *UpdateStackInput { + s.RoleARN = &v + return s +} + +// SetRollbackConfiguration sets the RollbackConfiguration field's value. +func (s *UpdateStackInput) SetRollbackConfiguration(v *RollbackConfiguration) *UpdateStackInput { + s.RollbackConfiguration = v + return s +} + +// SetStackName sets the StackName field's value. +func (s *UpdateStackInput) SetStackName(v string) *UpdateStackInput { + s.StackName = &v + return s +} + +// SetStackPolicyBody sets the StackPolicyBody field's value. +func (s *UpdateStackInput) SetStackPolicyBody(v string) *UpdateStackInput { + s.StackPolicyBody = &v + return s +} + +// SetStackPolicyDuringUpdateBody sets the StackPolicyDuringUpdateBody field's value. 
+func (s *UpdateStackInput) SetStackPolicyDuringUpdateBody(v string) *UpdateStackInput { + s.StackPolicyDuringUpdateBody = &v + return s +} + +// SetStackPolicyDuringUpdateURL sets the StackPolicyDuringUpdateURL field's value. +func (s *UpdateStackInput) SetStackPolicyDuringUpdateURL(v string) *UpdateStackInput { + s.StackPolicyDuringUpdateURL = &v + return s +} + +// SetStackPolicyURL sets the StackPolicyURL field's value. +func (s *UpdateStackInput) SetStackPolicyURL(v string) *UpdateStackInput { + s.StackPolicyURL = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *UpdateStackInput) SetTags(v []*Tag) *UpdateStackInput { + s.Tags = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *UpdateStackInput) SetTemplateBody(v string) *UpdateStackInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *UpdateStackInput) SetTemplateURL(v string) *UpdateStackInput { + s.TemplateURL = &v + return s +} + +// SetUsePreviousTemplate sets the UsePreviousTemplate field's value. +func (s *UpdateStackInput) SetUsePreviousTemplate(v bool) *UpdateStackInput { + s.UsePreviousTemplate = &v + return s +} + +type UpdateStackInstancesInput struct { + _ struct{} `type:"structure"` + + // The names of one or more AWS accounts for which you want to update parameter + // values for stack instances. The overridden parameter values will be applied + // to all stack instances in the specified accounts and regions. + // + // Accounts is a required field + Accounts []*string `type:"list" required:"true"` + + // The unique identifier for this stack set operation. + // + // The operation ID also functions as an idempotency token, to ensure that AWS + // CloudFormation performs the stack set operation only once, even if you retry + // the request multiple times. You might retry stack set operation requests + // to ensure that AWS CloudFormation successfully received them. + // + // If you don't specify an operation ID, the SDK generates one automatically. + OperationId *string `min:"1" type:"string" idempotencyToken:"true"` + + // Preferences for how AWS CloudFormation performs this stack set operation. + OperationPreferences *StackSetOperationPreferences `type:"structure"` + + // A list of input parameters whose values you want to update for the specified + // stack instances. + // + // Any overridden parameter values will be applied to all stack instances in + // the specified accounts and regions. When specifying parameters and their + // values, be aware of how AWS CloudFormation sets parameter values during stack + // instance update operations: + // + // * To override the current value for a parameter, include the parameter + // and specify its value. + // + // * To leave a parameter set to its present value, you can do one of the + // following: + // + // Do not include the parameter in the list. + // + // Include the parameter and specify UsePreviousValue as true. (You cannot specify + // both a value and set UsePreviousValue to true.) + // + // * To set all overridden parameter back to the values specified in the + // stack set, specify a parameter list but do not include any parameters. + // + // * To leave all parameters set to their present values, do not specify + // this property at all. + // + // During stack set updates, any parameter values overridden for a stack instance + // are not updated, but retain their overridden value. 
+ // + // You can only override the parameter values that are specified in the stack + // set; to add or delete a parameter itself, use UpdateStackSet to update the + // stack set template. If you add a parameter to a template, before you can + // override the parameter value specified in the stack set you must first use + // UpdateStackSet (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStackSet.html) + // to update all stack instances with the updated template and parameter value + // specified in the stack set. Once a stack instance has been updated with the + // new parameter, you can then override the parameter value using UpdateStackInstances. + ParameterOverrides []*Parameter `type:"list"` + + // The names of one or more regions in which you want to update parameter values + // for stack instances. The overridden parameter values will be applied to all + // stack instances in the specified accounts and regions. + // + // Regions is a required field + Regions []*string `type:"list" required:"true"` + + // The name or unique ID of the stack set associated with the stack instances. + // + // StackSetName is a required field + StackSetName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateStackInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateStackInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateStackInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateStackInstancesInput"} + if s.Accounts == nil { + invalidParams.Add(request.NewErrParamRequired("Accounts")) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.Regions == nil { + invalidParams.Add(request.NewErrParamRequired("Regions")) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + if s.OperationPreferences != nil { + if err := s.OperationPreferences.Validate(); err != nil { + invalidParams.AddNested("OperationPreferences", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccounts sets the Accounts field's value. +func (s *UpdateStackInstancesInput) SetAccounts(v []*string) *UpdateStackInstancesInput { + s.Accounts = v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *UpdateStackInstancesInput) SetOperationId(v string) *UpdateStackInstancesInput { + s.OperationId = &v + return s +} + +// SetOperationPreferences sets the OperationPreferences field's value. +func (s *UpdateStackInstancesInput) SetOperationPreferences(v *StackSetOperationPreferences) *UpdateStackInstancesInput { + s.OperationPreferences = v + return s +} + +// SetParameterOverrides sets the ParameterOverrides field's value. +func (s *UpdateStackInstancesInput) SetParameterOverrides(v []*Parameter) *UpdateStackInstancesInput { + s.ParameterOverrides = v + return s +} + +// SetRegions sets the Regions field's value. +func (s *UpdateStackInstancesInput) SetRegions(v []*string) *UpdateStackInstancesInput { + s.Regions = v + return s +} + +// SetStackSetName sets the StackSetName field's value. 
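// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the generated SDK source: overriding a
// single template parameter for existing stack instances, following the
// ParameterOverrides rules documented above. The account ID, region, stack
// set name, and parameter key/value are hypothetical.
// ---------------------------------------------------------------------------
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	input := (&cloudformation.UpdateStackInstancesInput{}).
		SetStackSetName("my-example-stack-set").
		SetAccounts(aws.StringSlice([]string{"111111111111"})).
		SetRegions(aws.StringSlice([]string{"us-east-1"})).
		SetParameterOverrides([]*cloudformation.Parameter{
			// Only parameters already defined in the stack set template can
			// be overridden here; new parameters require UpdateStackSet.
			(&cloudformation.Parameter{}).
				SetParameterKey("InstanceType").
				SetParameterValue("t3.small"),
		})

	out, err := svc.UpdateStackInstances(input)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("stack set operation id:", aws.StringValue(out.OperationId))
}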
+func (s *UpdateStackInstancesInput) SetStackSetName(v string) *UpdateStackInstancesInput { + s.StackSetName = &v + return s +} + +type UpdateStackInstancesOutput struct { + _ struct{} `type:"structure"` + + // The unique identifier for this stack set operation. + OperationId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateStackInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateStackInstancesOutput) GoString() string { + return s.String() +} + +// SetOperationId sets the OperationId field's value. +func (s *UpdateStackInstancesOutput) SetOperationId(v string) *UpdateStackInstancesOutput { + s.OperationId = &v + return s +} + +// The output for an UpdateStack action. +type UpdateStackOutput struct { + _ struct{} `type:"structure"` + + // Unique identifier of the stack. + StackId *string `type:"string"` +} + +// String returns the string representation +func (s UpdateStackOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateStackOutput) GoString() string { + return s.String() +} + +// SetStackId sets the StackId field's value. +func (s *UpdateStackOutput) SetStackId(v string) *UpdateStackOutput { + s.StackId = &v + return s +} + +type UpdateStackSetInput struct { + _ struct{} `type:"structure"` + + // The accounts in which to update associated stack instances. If you specify + // accounts, you must also specify the regions in which to update stack set + // instances. + // + // To update all the stack instances associated with this stack set, do not + // specify the Accounts or Regions properties. + // + // If the stack set update includes changes to the template (that is, if the + // TemplateBody or TemplateURL properties are specified), or the Parameters + // property, AWS CloudFormation marks all stack instances with a status of OUTDATED + // prior to updating the stack instances in the specified accounts and regions. + // If the stack set update does not include changes to the template or parameters, + // AWS CloudFormation updates the stack instances in the specified accounts + // and regions, while leaving all other stack instances with their existing + // stack instance status. + Accounts []*string `type:"list"` + + // The Amazon Resource Number (ARN) of the IAM role to use to update this stack + // set. + // + // Specify an IAM role only if you are using customized administrator roles + // to control which users or groups can manage specific stack sets within the + // same administrator account. For more information, see Define Permissions + // for Multiple Administrators (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + // + // If you specify a customized administrator role, AWS CloudFormation uses that + // role to update the stack. If you do not specify a customized administrator + // role, AWS CloudFormation performs the update using the role previously associated + // with the stack set, so long as you have permissions to perform operations + // on the stack set. + AdministrationRoleARN *string `min:"20" type:"string"` + + // A list of values that you must specify before AWS CloudFormation can create + // certain stack sets. Some stack set templates might include resources that + // can affect permissions in your AWS account—for example, by creating new AWS + // Identity and Access Management (IAM) users. 
For those stack sets, you must + // explicitly acknowledge their capabilities by specifying this parameter. + // + // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following + // resources require you to specify this parameter: + // + // * AWS::IAM::AccessKey + // + // * AWS::IAM::Group + // + // * AWS::IAM::InstanceProfile + // + // * AWS::IAM::Policy + // + // * AWS::IAM::Role + // + // * AWS::IAM::User + // + // * AWS::IAM::UserToGroupAddition + // + // If your stack template contains these resources, we recommend that you review + // all permissions that are associated with them and edit their permissions + // if necessary. + // + // If you have IAM resources, you can specify either capability. If you have + // IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If + // you don't specify this parameter, this action returns an InsufficientCapabilities + // error. + // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates. (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities) + Capabilities []*string `type:"list"` + + // A brief description of updates that you are making. + Description *string `min:"1" type:"string"` + + // The name of the IAM execution role to use to update the stack set. If you + // do not specify an execution role, AWS CloudFormation uses the AWSCloudFormationStackSetExecutionRole + // role for the stack set operation. + // + // Specify an IAM role only if you are using customized execution roles to control + // which stack resources users and groups can include in their stack sets. + // + // If you specify a customized execution role, AWS CloudFormation uses that + // role to update the stack. If you do not specify a customized execution role, + // AWS CloudFormation performs the update using the role previously associated + // with the stack set, so long as you have permissions to perform operations + // on the stack set. + ExecutionRoleName *string `min:"1" type:"string"` + + // The unique ID for this stack set operation. + // + // The operation ID also functions as an idempotency token, to ensure that AWS + // CloudFormation performs the stack set operation only once, even if you retry + // the request multiple times. You might retry stack set operation requests + // to ensure that AWS CloudFormation successfully received them. + // + // If you don't specify an operation ID, AWS CloudFormation generates one automatically. + // + // Repeating this stack set operation with a new operation ID retries all stack + // instances whose status is OUTDATED. + OperationId *string `min:"1" type:"string" idempotencyToken:"true"` + + // Preferences for how AWS CloudFormation performs this stack set operation. + OperationPreferences *StackSetOperationPreferences `type:"structure"` + + // A list of input parameters for the stack set template. + Parameters []*Parameter `type:"list"` + + // The regions in which to update associated stack instances. If you specify + // regions, you must also specify accounts in which to update stack set instances. + // + // To update all the stack instances associated with this stack set, do not + // specify the Accounts or Regions properties. 
+ //
+ // If the stack set update includes changes to the template (that is, if the
+ // TemplateBody or TemplateURL properties are specified), or the Parameters
+ // property, AWS CloudFormation marks all stack instances with a status of OUTDATED
+ // prior to updating the stack instances in the specified accounts and regions.
+ // If the stack set update does not include changes to the template or parameters,
+ // AWS CloudFormation updates the stack instances in the specified accounts
+ // and regions, while leaving all other stack instances with their existing
+ // stack instance status.
+ Regions []*string `type:"list"`
+
+ // The name or unique ID of the stack set that you want to update.
+ //
+ // StackSetName is a required field
+ StackSetName *string `type:"string" required:"true"`
+
+ // The key-value pairs to associate with this stack set and the stacks created
+ // from it. AWS CloudFormation also propagates these tags to supported resources
+ // that are created in the stacks. You can specify a maximum number of 50 tags.
+ //
+ // If you specify tags for this parameter, those tags replace any list of tags
+ // that are currently associated with this stack set. This means:
+ //
+ // * If you don't specify this parameter, AWS CloudFormation doesn't modify
+ // the stack's tags.
+ //
+ // * If you specify any tags using this parameter, you must specify all the
+ // tags that you want associated with this stack set, even tags you've specified
+ // before (for example, when creating the stack set or during a previous
+ // update of the stack set). Any tags that you don't include in the updated
+ // list of tags are removed from the stack set, and therefore from the stacks
+ // and resources as well.
+ //
+ // * If you specify an empty value, AWS CloudFormation removes all currently
+ // associated tags.
+ //
+ // If you specify new tags as part of an UpdateStackSet action, AWS CloudFormation
+ // checks to see if you have the required IAM permission to tag resources. If
+ // you omit tags that are currently associated with the stack set from the list
+ // of tags you specify, AWS CloudFormation assumes that you want to remove those
+ // tags from the stack set, and checks to see if you have permission to untag
+ // resources. If you don't have the necessary permission(s), the entire UpdateStackSet
+ // action fails with an access denied error, and the stack set is not updated.
+ Tags []*Tag `type:"list"`
+
+ // The structure that contains the template body, with a minimum length of 1
+ // byte and a maximum length of 51,200 bytes. For more information, see Template
+ // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html)
+ // in the AWS CloudFormation User Guide.
+ //
+ // Conditional: You must specify only one of the following parameters: TemplateBody
+ // or TemplateURL—or set UsePreviousTemplate to true.
+ TemplateBody *string `min:"1" type:"string"`
+
+ // The location of the file that contains the template body. The URL must point
+ // to a template (maximum size: 460,800 bytes) that is located in an Amazon
+ // S3 bucket. For more information, see Template Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html)
+ // in the AWS CloudFormation User Guide.
+ //
+ // Conditional: You must specify only one of the following parameters: TemplateBody
+ // or TemplateURL—or set UsePreviousTemplate to true.
+ TemplateURL *string `min:"1" type:"string"` + + // Use the existing template that's associated with the stack set that you're + // updating. + // + // Conditional: You must specify only one of the following parameters: TemplateBody + // or TemplateURL—or set UsePreviousTemplate to true. + UsePreviousTemplate *bool `type:"boolean"` +} + +// String returns the string representation +func (s UpdateStackSetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateStackSetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateStackSetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateStackSetInput"} + if s.AdministrationRoleARN != nil && len(*s.AdministrationRoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("AdministrationRoleARN", 20)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.ExecutionRoleName != nil && len(*s.ExecutionRoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleName", 1)) + } + if s.OperationId != nil && len(*s.OperationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) + } + if s.StackSetName == nil { + invalidParams.Add(request.NewErrParamRequired("StackSetName")) + } + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + if s.OperationPreferences != nil { + if err := s.OperationPreferences.Validate(); err != nil { + invalidParams.AddNested("OperationPreferences", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccounts sets the Accounts field's value. +func (s *UpdateStackSetInput) SetAccounts(v []*string) *UpdateStackSetInput { + s.Accounts = v + return s +} + +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *UpdateStackSetInput) SetAdministrationRoleARN(v string) *UpdateStackSetInput { + s.AdministrationRoleARN = &v + return s +} + +// SetCapabilities sets the Capabilities field's value. +func (s *UpdateStackSetInput) SetCapabilities(v []*string) *UpdateStackSetInput { + s.Capabilities = v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateStackSetInput) SetDescription(v string) *UpdateStackSetInput { + s.Description = &v + return s +} + +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *UpdateStackSetInput) SetExecutionRoleName(v string) *UpdateStackSetInput { + s.ExecutionRoleName = &v + return s +} + +// SetOperationId sets the OperationId field's value. +func (s *UpdateStackSetInput) SetOperationId(v string) *UpdateStackSetInput { + s.OperationId = &v + return s +} + +// SetOperationPreferences sets the OperationPreferences field's value. +func (s *UpdateStackSetInput) SetOperationPreferences(v *StackSetOperationPreferences) *UpdateStackSetInput { + s.OperationPreferences = v + return s +} + +// SetParameters sets the Parameters field's value. 
+func (s *UpdateStackSetInput) SetParameters(v []*Parameter) *UpdateStackSetInput { + s.Parameters = v + return s +} + +// SetRegions sets the Regions field's value. +func (s *UpdateStackSetInput) SetRegions(v []*string) *UpdateStackSetInput { + s.Regions = v + return s +} + +// SetStackSetName sets the StackSetName field's value. +func (s *UpdateStackSetInput) SetStackSetName(v string) *UpdateStackSetInput { + s.StackSetName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *UpdateStackSetInput) SetTags(v []*Tag) *UpdateStackSetInput { + s.Tags = v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *UpdateStackSetInput) SetTemplateBody(v string) *UpdateStackSetInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *UpdateStackSetInput) SetTemplateURL(v string) *UpdateStackSetInput { + s.TemplateURL = &v + return s +} + +// SetUsePreviousTemplate sets the UsePreviousTemplate field's value. +func (s *UpdateStackSetInput) SetUsePreviousTemplate(v bool) *UpdateStackSetInput { + s.UsePreviousTemplate = &v + return s +} + +type UpdateStackSetOutput struct { + _ struct{} `type:"structure"` + + // The unique ID for this stack set operation. + OperationId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateStackSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateStackSetOutput) GoString() string { + return s.String() +} + +// SetOperationId sets the OperationId field's value. +func (s *UpdateStackSetOutput) SetOperationId(v string) *UpdateStackSetOutput { + s.OperationId = &v + return s +} + +type UpdateTerminationProtectionInput struct { + _ struct{} `type:"structure"` + + // Whether to enable termination protection on the specified stack. + // + // EnableTerminationProtection is a required field + EnableTerminationProtection *bool `type:"boolean" required:"true"` + + // The name or unique ID of the stack for which you want to set termination + // protection. + // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateTerminationProtectionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTerminationProtectionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateTerminationProtectionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateTerminationProtectionInput"} + if s.EnableTerminationProtection == nil { + invalidParams.Add(request.NewErrParamRequired("EnableTerminationProtection")) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnableTerminationProtection sets the EnableTerminationProtection field's value. +func (s *UpdateTerminationProtectionInput) SetEnableTerminationProtection(v bool) *UpdateTerminationProtectionInput { + s.EnableTerminationProtection = &v + return s +} + +// SetStackName sets the StackName field's value. 
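+//
+// A minimal calling sketch for UpdateTerminationProtection (assuming a
+// configured *CloudFormation client named svc; the stack name below is
+// illustrative):
+//
+//    input := &cloudformation.UpdateTerminationProtectionInput{}
+//    input.SetStackName("my-stack").SetEnableTerminationProtection(true)
+//    if _, err := svc.UpdateTerminationProtection(input); err != nil {
+//        // handle the service error
+//    }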
+func (s *UpdateTerminationProtectionInput) SetStackName(v string) *UpdateTerminationProtectionInput { + s.StackName = &v + return s +} + +type UpdateTerminationProtectionOutput struct { + _ struct{} `type:"structure"` + + // The unique ID of the stack. + StackId *string `type:"string"` +} + +// String returns the string representation +func (s UpdateTerminationProtectionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTerminationProtectionOutput) GoString() string { + return s.String() +} + +// SetStackId sets the StackId field's value. +func (s *UpdateTerminationProtectionOutput) SetStackId(v string) *UpdateTerminationProtectionOutput { + s.StackId = &v + return s +} + +// The input for ValidateTemplate action. +type ValidateTemplateInput struct { + _ struct{} `type:"structure"` + + // Structure containing the template body with a minimum length of 1 byte and + // a maximum length of 51,200 bytes. For more information, go to Template Anatomy + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must pass TemplateURL or TemplateBody. If both are passed, + // only TemplateBody is used. + TemplateBody *string `min:"1" type:"string"` + + // Location of file containing the template body. The URL must point to a template + // (max size: 460,800 bytes) that is located in an Amazon S3 bucket. For more + // information, go to Template Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must pass TemplateURL or TemplateBody. If both are passed, + // only TemplateBody is used. + TemplateURL *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ValidateTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidateTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ValidateTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ValidateTemplateInput"} + if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) + } + if s.TemplateURL != nil && len(*s.TemplateURL) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TemplateURL", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *ValidateTemplateInput) SetTemplateBody(v string) *ValidateTemplateInput { + s.TemplateBody = &v + return s +} + +// SetTemplateURL sets the TemplateURL field's value. +func (s *ValidateTemplateInput) SetTemplateURL(v string) *ValidateTemplateInput { + s.TemplateURL = &v + return s +} + +// The output for ValidateTemplate action. +type ValidateTemplateOutput struct { + _ struct{} `type:"structure"` + + // The capabilities found within the template. If your template contains IAM + // resources, you must specify the CAPABILITY_IAM or CAPABILITY_NAMED_IAM value + // for this parameter when you use the CreateStack or UpdateStack actions with + // your template; otherwise, those actions return an InsufficientCapabilities + // error. 
+ // + // For more information, see Acknowledging IAM Resources in AWS CloudFormation + // Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). + Capabilities []*string `type:"list"` + + // The list of resources that generated the values in the Capabilities response + // element. + CapabilitiesReason *string `type:"string"` + + // A list of the transforms that are declared in the template. + DeclaredTransforms []*string `type:"list"` + + // The description found within the template. + Description *string `min:"1" type:"string"` + + // A list of TemplateParameter structures. + Parameters []*TemplateParameter `type:"list"` +} + +// String returns the string representation +func (s ValidateTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidateTemplateOutput) GoString() string { + return s.String() +} + +// SetCapabilities sets the Capabilities field's value. +func (s *ValidateTemplateOutput) SetCapabilities(v []*string) *ValidateTemplateOutput { + s.Capabilities = v + return s +} + +// SetCapabilitiesReason sets the CapabilitiesReason field's value. +func (s *ValidateTemplateOutput) SetCapabilitiesReason(v string) *ValidateTemplateOutput { + s.CapabilitiesReason = &v + return s +} + +// SetDeclaredTransforms sets the DeclaredTransforms field's value. +func (s *ValidateTemplateOutput) SetDeclaredTransforms(v []*string) *ValidateTemplateOutput { + s.DeclaredTransforms = v + return s +} + +// SetDescription sets the Description field's value. +func (s *ValidateTemplateOutput) SetDescription(v string) *ValidateTemplateOutput { + s.Description = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *ValidateTemplateOutput) SetParameters(v []*TemplateParameter) *ValidateTemplateOutput { + s.Parameters = v + return s +} + +const ( + // AccountGateStatusSucceeded is a AccountGateStatus enum value + AccountGateStatusSucceeded = "SUCCEEDED" + + // AccountGateStatusFailed is a AccountGateStatus enum value + AccountGateStatusFailed = "FAILED" + + // AccountGateStatusSkipped is a AccountGateStatus enum value + AccountGateStatusSkipped = "SKIPPED" +) + +const ( + // CapabilityCapabilityIam is a Capability enum value + CapabilityCapabilityIam = "CAPABILITY_IAM" + + // CapabilityCapabilityNamedIam is a Capability enum value + CapabilityCapabilityNamedIam = "CAPABILITY_NAMED_IAM" +) + +const ( + // ChangeActionAdd is a ChangeAction enum value + ChangeActionAdd = "Add" + + // ChangeActionModify is a ChangeAction enum value + ChangeActionModify = "Modify" + + // ChangeActionRemove is a ChangeAction enum value + ChangeActionRemove = "Remove" +) + +const ( + // ChangeSetStatusCreatePending is a ChangeSetStatus enum value + ChangeSetStatusCreatePending = "CREATE_PENDING" + + // ChangeSetStatusCreateInProgress is a ChangeSetStatus enum value + ChangeSetStatusCreateInProgress = "CREATE_IN_PROGRESS" + + // ChangeSetStatusCreateComplete is a ChangeSetStatus enum value + ChangeSetStatusCreateComplete = "CREATE_COMPLETE" + + // ChangeSetStatusDeleteComplete is a ChangeSetStatus enum value + ChangeSetStatusDeleteComplete = "DELETE_COMPLETE" + + // ChangeSetStatusFailed is a ChangeSetStatus enum value + ChangeSetStatusFailed = "FAILED" +) + +const ( + // ChangeSetTypeCreate is a ChangeSetType enum value + ChangeSetTypeCreate = "CREATE" + + // ChangeSetTypeUpdate is a ChangeSetType enum value + ChangeSetTypeUpdate = "UPDATE" +) + +const ( + // 
ChangeSourceResourceReference is a ChangeSource enum value + ChangeSourceResourceReference = "ResourceReference" + + // ChangeSourceParameterReference is a ChangeSource enum value + ChangeSourceParameterReference = "ParameterReference" + + // ChangeSourceResourceAttribute is a ChangeSource enum value + ChangeSourceResourceAttribute = "ResourceAttribute" + + // ChangeSourceDirectModification is a ChangeSource enum value + ChangeSourceDirectModification = "DirectModification" + + // ChangeSourceAutomatic is a ChangeSource enum value + ChangeSourceAutomatic = "Automatic" +) + +const ( + // ChangeTypeResource is a ChangeType enum value + ChangeTypeResource = "Resource" +) + +const ( + // EvaluationTypeStatic is a EvaluationType enum value + EvaluationTypeStatic = "Static" + + // EvaluationTypeDynamic is a EvaluationType enum value + EvaluationTypeDynamic = "Dynamic" +) + +const ( + // ExecutionStatusUnavailable is a ExecutionStatus enum value + ExecutionStatusUnavailable = "UNAVAILABLE" + + // ExecutionStatusAvailable is a ExecutionStatus enum value + ExecutionStatusAvailable = "AVAILABLE" + + // ExecutionStatusExecuteInProgress is a ExecutionStatus enum value + ExecutionStatusExecuteInProgress = "EXECUTE_IN_PROGRESS" + + // ExecutionStatusExecuteComplete is a ExecutionStatus enum value + ExecutionStatusExecuteComplete = "EXECUTE_COMPLETE" + + // ExecutionStatusExecuteFailed is a ExecutionStatus enum value + ExecutionStatusExecuteFailed = "EXECUTE_FAILED" + + // ExecutionStatusObsolete is a ExecutionStatus enum value + ExecutionStatusObsolete = "OBSOLETE" +) + +const ( + // OnFailureDoNothing is a OnFailure enum value + OnFailureDoNothing = "DO_NOTHING" + + // OnFailureRollback is a OnFailure enum value + OnFailureRollback = "ROLLBACK" + + // OnFailureDelete is a OnFailure enum value + OnFailureDelete = "DELETE" +) + +const ( + // ReplacementTrue is a Replacement enum value + ReplacementTrue = "True" + + // ReplacementFalse is a Replacement enum value + ReplacementFalse = "False" + + // ReplacementConditional is a Replacement enum value + ReplacementConditional = "Conditional" +) + +const ( + // RequiresRecreationNever is a RequiresRecreation enum value + RequiresRecreationNever = "Never" + + // RequiresRecreationConditionally is a RequiresRecreation enum value + RequiresRecreationConditionally = "Conditionally" + + // RequiresRecreationAlways is a RequiresRecreation enum value + RequiresRecreationAlways = "Always" +) + +const ( + // ResourceAttributeProperties is a ResourceAttribute enum value + ResourceAttributeProperties = "Properties" + + // ResourceAttributeMetadata is a ResourceAttribute enum value + ResourceAttributeMetadata = "Metadata" + + // ResourceAttributeCreationPolicy is a ResourceAttribute enum value + ResourceAttributeCreationPolicy = "CreationPolicy" + + // ResourceAttributeUpdatePolicy is a ResourceAttribute enum value + ResourceAttributeUpdatePolicy = "UpdatePolicy" + + // ResourceAttributeDeletionPolicy is a ResourceAttribute enum value + ResourceAttributeDeletionPolicy = "DeletionPolicy" + + // ResourceAttributeTags is a ResourceAttribute enum value + ResourceAttributeTags = "Tags" +) + +const ( + // ResourceSignalStatusSuccess is a ResourceSignalStatus enum value + ResourceSignalStatusSuccess = "SUCCESS" + + // ResourceSignalStatusFailure is a ResourceSignalStatus enum value + ResourceSignalStatusFailure = "FAILURE" +) + +const ( + // ResourceStatusCreateInProgress is a ResourceStatus enum value + ResourceStatusCreateInProgress = "CREATE_IN_PROGRESS" + + // 
ResourceStatusCreateFailed is a ResourceStatus enum value + ResourceStatusCreateFailed = "CREATE_FAILED" + + // ResourceStatusCreateComplete is a ResourceStatus enum value + ResourceStatusCreateComplete = "CREATE_COMPLETE" + + // ResourceStatusDeleteInProgress is a ResourceStatus enum value + ResourceStatusDeleteInProgress = "DELETE_IN_PROGRESS" + + // ResourceStatusDeleteFailed is a ResourceStatus enum value + ResourceStatusDeleteFailed = "DELETE_FAILED" + + // ResourceStatusDeleteComplete is a ResourceStatus enum value + ResourceStatusDeleteComplete = "DELETE_COMPLETE" + + // ResourceStatusDeleteSkipped is a ResourceStatus enum value + ResourceStatusDeleteSkipped = "DELETE_SKIPPED" + + // ResourceStatusUpdateInProgress is a ResourceStatus enum value + ResourceStatusUpdateInProgress = "UPDATE_IN_PROGRESS" + + // ResourceStatusUpdateFailed is a ResourceStatus enum value + ResourceStatusUpdateFailed = "UPDATE_FAILED" + + // ResourceStatusUpdateComplete is a ResourceStatus enum value + ResourceStatusUpdateComplete = "UPDATE_COMPLETE" +) + +const ( + // StackInstanceStatusCurrent is a StackInstanceStatus enum value + StackInstanceStatusCurrent = "CURRENT" + + // StackInstanceStatusOutdated is a StackInstanceStatus enum value + StackInstanceStatusOutdated = "OUTDATED" + + // StackInstanceStatusInoperable is a StackInstanceStatus enum value + StackInstanceStatusInoperable = "INOPERABLE" +) + +const ( + // StackSetOperationActionCreate is a StackSetOperationAction enum value + StackSetOperationActionCreate = "CREATE" + + // StackSetOperationActionUpdate is a StackSetOperationAction enum value + StackSetOperationActionUpdate = "UPDATE" + + // StackSetOperationActionDelete is a StackSetOperationAction enum value + StackSetOperationActionDelete = "DELETE" +) + +const ( + // StackSetOperationResultStatusPending is a StackSetOperationResultStatus enum value + StackSetOperationResultStatusPending = "PENDING" + + // StackSetOperationResultStatusRunning is a StackSetOperationResultStatus enum value + StackSetOperationResultStatusRunning = "RUNNING" + + // StackSetOperationResultStatusSucceeded is a StackSetOperationResultStatus enum value + StackSetOperationResultStatusSucceeded = "SUCCEEDED" + + // StackSetOperationResultStatusFailed is a StackSetOperationResultStatus enum value + StackSetOperationResultStatusFailed = "FAILED" + + // StackSetOperationResultStatusCancelled is a StackSetOperationResultStatus enum value + StackSetOperationResultStatusCancelled = "CANCELLED" +) + +const ( + // StackSetOperationStatusRunning is a StackSetOperationStatus enum value + StackSetOperationStatusRunning = "RUNNING" + + // StackSetOperationStatusSucceeded is a StackSetOperationStatus enum value + StackSetOperationStatusSucceeded = "SUCCEEDED" + + // StackSetOperationStatusFailed is a StackSetOperationStatus enum value + StackSetOperationStatusFailed = "FAILED" + + // StackSetOperationStatusStopping is a StackSetOperationStatus enum value + StackSetOperationStatusStopping = "STOPPING" + + // StackSetOperationStatusStopped is a StackSetOperationStatus enum value + StackSetOperationStatusStopped = "STOPPED" +) + +const ( + // StackSetStatusActive is a StackSetStatus enum value + StackSetStatusActive = "ACTIVE" + + // StackSetStatusDeleted is a StackSetStatus enum value + StackSetStatusDeleted = "DELETED" +) + +const ( + // StackStatusCreateInProgress is a StackStatus enum value + StackStatusCreateInProgress = "CREATE_IN_PROGRESS" + + // StackStatusCreateFailed is a StackStatus enum value + StackStatusCreateFailed = 
"CREATE_FAILED" + + // StackStatusCreateComplete is a StackStatus enum value + StackStatusCreateComplete = "CREATE_COMPLETE" + + // StackStatusRollbackInProgress is a StackStatus enum value + StackStatusRollbackInProgress = "ROLLBACK_IN_PROGRESS" + + // StackStatusRollbackFailed is a StackStatus enum value + StackStatusRollbackFailed = "ROLLBACK_FAILED" + + // StackStatusRollbackComplete is a StackStatus enum value + StackStatusRollbackComplete = "ROLLBACK_COMPLETE" + + // StackStatusDeleteInProgress is a StackStatus enum value + StackStatusDeleteInProgress = "DELETE_IN_PROGRESS" + + // StackStatusDeleteFailed is a StackStatus enum value + StackStatusDeleteFailed = "DELETE_FAILED" + + // StackStatusDeleteComplete is a StackStatus enum value + StackStatusDeleteComplete = "DELETE_COMPLETE" + + // StackStatusUpdateInProgress is a StackStatus enum value + StackStatusUpdateInProgress = "UPDATE_IN_PROGRESS" + + // StackStatusUpdateCompleteCleanupInProgress is a StackStatus enum value + StackStatusUpdateCompleteCleanupInProgress = "UPDATE_COMPLETE_CLEANUP_IN_PROGRESS" + + // StackStatusUpdateComplete is a StackStatus enum value + StackStatusUpdateComplete = "UPDATE_COMPLETE" + + // StackStatusUpdateRollbackInProgress is a StackStatus enum value + StackStatusUpdateRollbackInProgress = "UPDATE_ROLLBACK_IN_PROGRESS" + + // StackStatusUpdateRollbackFailed is a StackStatus enum value + StackStatusUpdateRollbackFailed = "UPDATE_ROLLBACK_FAILED" + + // StackStatusUpdateRollbackCompleteCleanupInProgress is a StackStatus enum value + StackStatusUpdateRollbackCompleteCleanupInProgress = "UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS" + + // StackStatusUpdateRollbackComplete is a StackStatus enum value + StackStatusUpdateRollbackComplete = "UPDATE_ROLLBACK_COMPLETE" + + // StackStatusReviewInProgress is a StackStatus enum value + StackStatusReviewInProgress = "REVIEW_IN_PROGRESS" +) + +const ( + // TemplateStageOriginal is a TemplateStage enum value + TemplateStageOriginal = "Original" + + // TemplateStageProcessed is a TemplateStage enum value + TemplateStageProcessed = "Processed" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface/interface.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface/interface.go new file mode 100644 index 00000000..fd24a178 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface/interface.go @@ -0,0 +1,261 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package cloudformationiface provides an interface to enable mocking the AWS CloudFormation service client +// for testing your code. +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. +package cloudformationiface + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/service/cloudformation" +) + +// CloudFormationAPI provides an interface to enable mocking the +// cloudformation.CloudFormation service client's API operation, +// paginators, and waiters. This make unit testing your code that calls out +// to the SDK's service client's calls easier. +// +// The best way to use this interface is so the SDK's service client's calls +// can be stubbed out for unit testing your code with the SDK without needing +// to inject custom request handlers into the SDK's request pipeline. 
+// +// // myFunc uses an SDK service client to make a request to +// // AWS CloudFormation. +// func myFunc(svc cloudformationiface.CloudFormationAPI) bool { +// // Make svc.CancelUpdateStack request +// } +// +// func main() { +// sess := session.New() +// svc := cloudformation.New(sess) +// +// myFunc(svc) +// } +// +// In your _test.go file: +// +// // Define a mock struct to be used in your unit tests of myFunc. +// type mockCloudFormationClient struct { +// cloudformationiface.CloudFormationAPI +// } +// func (m *mockCloudFormationClient) CancelUpdateStack(input *cloudformation.CancelUpdateStackInput) (*cloudformation.CancelUpdateStackOutput, error) { +// // mock response/functionality +// } +// +// func TestMyFunc(t *testing.T) { +// // Setup Test +// mockSvc := &mockCloudFormationClient{} +// +// myfunc(mockSvc) +// +// // Verify myFunc's functionality +// } +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. Its suggested to use the pattern above for testing, or using +// tooling to generate mocks to satisfy the interfaces. +type CloudFormationAPI interface { + CancelUpdateStack(*cloudformation.CancelUpdateStackInput) (*cloudformation.CancelUpdateStackOutput, error) + CancelUpdateStackWithContext(aws.Context, *cloudformation.CancelUpdateStackInput, ...request.Option) (*cloudformation.CancelUpdateStackOutput, error) + CancelUpdateStackRequest(*cloudformation.CancelUpdateStackInput) (*request.Request, *cloudformation.CancelUpdateStackOutput) + + ContinueUpdateRollback(*cloudformation.ContinueUpdateRollbackInput) (*cloudformation.ContinueUpdateRollbackOutput, error) + ContinueUpdateRollbackWithContext(aws.Context, *cloudformation.ContinueUpdateRollbackInput, ...request.Option) (*cloudformation.ContinueUpdateRollbackOutput, error) + ContinueUpdateRollbackRequest(*cloudformation.ContinueUpdateRollbackInput) (*request.Request, *cloudformation.ContinueUpdateRollbackOutput) + + CreateChangeSet(*cloudformation.CreateChangeSetInput) (*cloudformation.CreateChangeSetOutput, error) + CreateChangeSetWithContext(aws.Context, *cloudformation.CreateChangeSetInput, ...request.Option) (*cloudformation.CreateChangeSetOutput, error) + CreateChangeSetRequest(*cloudformation.CreateChangeSetInput) (*request.Request, *cloudformation.CreateChangeSetOutput) + + CreateStack(*cloudformation.CreateStackInput) (*cloudformation.CreateStackOutput, error) + CreateStackWithContext(aws.Context, *cloudformation.CreateStackInput, ...request.Option) (*cloudformation.CreateStackOutput, error) + CreateStackRequest(*cloudformation.CreateStackInput) (*request.Request, *cloudformation.CreateStackOutput) + + CreateStackInstances(*cloudformation.CreateStackInstancesInput) (*cloudformation.CreateStackInstancesOutput, error) + CreateStackInstancesWithContext(aws.Context, *cloudformation.CreateStackInstancesInput, ...request.Option) (*cloudformation.CreateStackInstancesOutput, error) + CreateStackInstancesRequest(*cloudformation.CreateStackInstancesInput) (*request.Request, *cloudformation.CreateStackInstancesOutput) + + CreateStackSet(*cloudformation.CreateStackSetInput) (*cloudformation.CreateStackSetOutput, error) + CreateStackSetWithContext(aws.Context, *cloudformation.CreateStackSetInput, ...request.Option) (*cloudformation.CreateStackSetOutput, error) + CreateStackSetRequest(*cloudformation.CreateStackSetInput) (*request.Request, *cloudformation.CreateStackSetOutput) + + 
DeleteChangeSet(*cloudformation.DeleteChangeSetInput) (*cloudformation.DeleteChangeSetOutput, error) + DeleteChangeSetWithContext(aws.Context, *cloudformation.DeleteChangeSetInput, ...request.Option) (*cloudformation.DeleteChangeSetOutput, error) + DeleteChangeSetRequest(*cloudformation.DeleteChangeSetInput) (*request.Request, *cloudformation.DeleteChangeSetOutput) + + DeleteStack(*cloudformation.DeleteStackInput) (*cloudformation.DeleteStackOutput, error) + DeleteStackWithContext(aws.Context, *cloudformation.DeleteStackInput, ...request.Option) (*cloudformation.DeleteStackOutput, error) + DeleteStackRequest(*cloudformation.DeleteStackInput) (*request.Request, *cloudformation.DeleteStackOutput) + + DeleteStackInstances(*cloudformation.DeleteStackInstancesInput) (*cloudformation.DeleteStackInstancesOutput, error) + DeleteStackInstancesWithContext(aws.Context, *cloudformation.DeleteStackInstancesInput, ...request.Option) (*cloudformation.DeleteStackInstancesOutput, error) + DeleteStackInstancesRequest(*cloudformation.DeleteStackInstancesInput) (*request.Request, *cloudformation.DeleteStackInstancesOutput) + + DeleteStackSet(*cloudformation.DeleteStackSetInput) (*cloudformation.DeleteStackSetOutput, error) + DeleteStackSetWithContext(aws.Context, *cloudformation.DeleteStackSetInput, ...request.Option) (*cloudformation.DeleteStackSetOutput, error) + DeleteStackSetRequest(*cloudformation.DeleteStackSetInput) (*request.Request, *cloudformation.DeleteStackSetOutput) + + DescribeAccountLimits(*cloudformation.DescribeAccountLimitsInput) (*cloudformation.DescribeAccountLimitsOutput, error) + DescribeAccountLimitsWithContext(aws.Context, *cloudformation.DescribeAccountLimitsInput, ...request.Option) (*cloudformation.DescribeAccountLimitsOutput, error) + DescribeAccountLimitsRequest(*cloudformation.DescribeAccountLimitsInput) (*request.Request, *cloudformation.DescribeAccountLimitsOutput) + + DescribeChangeSet(*cloudformation.DescribeChangeSetInput) (*cloudformation.DescribeChangeSetOutput, error) + DescribeChangeSetWithContext(aws.Context, *cloudformation.DescribeChangeSetInput, ...request.Option) (*cloudformation.DescribeChangeSetOutput, error) + DescribeChangeSetRequest(*cloudformation.DescribeChangeSetInput) (*request.Request, *cloudformation.DescribeChangeSetOutput) + + DescribeStackEvents(*cloudformation.DescribeStackEventsInput) (*cloudformation.DescribeStackEventsOutput, error) + DescribeStackEventsWithContext(aws.Context, *cloudformation.DescribeStackEventsInput, ...request.Option) (*cloudformation.DescribeStackEventsOutput, error) + DescribeStackEventsRequest(*cloudformation.DescribeStackEventsInput) (*request.Request, *cloudformation.DescribeStackEventsOutput) + + DescribeStackEventsPages(*cloudformation.DescribeStackEventsInput, func(*cloudformation.DescribeStackEventsOutput, bool) bool) error + DescribeStackEventsPagesWithContext(aws.Context, *cloudformation.DescribeStackEventsInput, func(*cloudformation.DescribeStackEventsOutput, bool) bool, ...request.Option) error + + DescribeStackInstance(*cloudformation.DescribeStackInstanceInput) (*cloudformation.DescribeStackInstanceOutput, error) + DescribeStackInstanceWithContext(aws.Context, *cloudformation.DescribeStackInstanceInput, ...request.Option) (*cloudformation.DescribeStackInstanceOutput, error) + DescribeStackInstanceRequest(*cloudformation.DescribeStackInstanceInput) (*request.Request, *cloudformation.DescribeStackInstanceOutput) + + DescribeStackResource(*cloudformation.DescribeStackResourceInput) 
(*cloudformation.DescribeStackResourceOutput, error) + DescribeStackResourceWithContext(aws.Context, *cloudformation.DescribeStackResourceInput, ...request.Option) (*cloudformation.DescribeStackResourceOutput, error) + DescribeStackResourceRequest(*cloudformation.DescribeStackResourceInput) (*request.Request, *cloudformation.DescribeStackResourceOutput) + + DescribeStackResources(*cloudformation.DescribeStackResourcesInput) (*cloudformation.DescribeStackResourcesOutput, error) + DescribeStackResourcesWithContext(aws.Context, *cloudformation.DescribeStackResourcesInput, ...request.Option) (*cloudformation.DescribeStackResourcesOutput, error) + DescribeStackResourcesRequest(*cloudformation.DescribeStackResourcesInput) (*request.Request, *cloudformation.DescribeStackResourcesOutput) + + DescribeStackSet(*cloudformation.DescribeStackSetInput) (*cloudformation.DescribeStackSetOutput, error) + DescribeStackSetWithContext(aws.Context, *cloudformation.DescribeStackSetInput, ...request.Option) (*cloudformation.DescribeStackSetOutput, error) + DescribeStackSetRequest(*cloudformation.DescribeStackSetInput) (*request.Request, *cloudformation.DescribeStackSetOutput) + + DescribeStackSetOperation(*cloudformation.DescribeStackSetOperationInput) (*cloudformation.DescribeStackSetOperationOutput, error) + DescribeStackSetOperationWithContext(aws.Context, *cloudformation.DescribeStackSetOperationInput, ...request.Option) (*cloudformation.DescribeStackSetOperationOutput, error) + DescribeStackSetOperationRequest(*cloudformation.DescribeStackSetOperationInput) (*request.Request, *cloudformation.DescribeStackSetOperationOutput) + + DescribeStacks(*cloudformation.DescribeStacksInput) (*cloudformation.DescribeStacksOutput, error) + DescribeStacksWithContext(aws.Context, *cloudformation.DescribeStacksInput, ...request.Option) (*cloudformation.DescribeStacksOutput, error) + DescribeStacksRequest(*cloudformation.DescribeStacksInput) (*request.Request, *cloudformation.DescribeStacksOutput) + + DescribeStacksPages(*cloudformation.DescribeStacksInput, func(*cloudformation.DescribeStacksOutput, bool) bool) error + DescribeStacksPagesWithContext(aws.Context, *cloudformation.DescribeStacksInput, func(*cloudformation.DescribeStacksOutput, bool) bool, ...request.Option) error + + EstimateTemplateCost(*cloudformation.EstimateTemplateCostInput) (*cloudformation.EstimateTemplateCostOutput, error) + EstimateTemplateCostWithContext(aws.Context, *cloudformation.EstimateTemplateCostInput, ...request.Option) (*cloudformation.EstimateTemplateCostOutput, error) + EstimateTemplateCostRequest(*cloudformation.EstimateTemplateCostInput) (*request.Request, *cloudformation.EstimateTemplateCostOutput) + + ExecuteChangeSet(*cloudformation.ExecuteChangeSetInput) (*cloudformation.ExecuteChangeSetOutput, error) + ExecuteChangeSetWithContext(aws.Context, *cloudformation.ExecuteChangeSetInput, ...request.Option) (*cloudformation.ExecuteChangeSetOutput, error) + ExecuteChangeSetRequest(*cloudformation.ExecuteChangeSetInput) (*request.Request, *cloudformation.ExecuteChangeSetOutput) + + GetStackPolicy(*cloudformation.GetStackPolicyInput) (*cloudformation.GetStackPolicyOutput, error) + GetStackPolicyWithContext(aws.Context, *cloudformation.GetStackPolicyInput, ...request.Option) (*cloudformation.GetStackPolicyOutput, error) + GetStackPolicyRequest(*cloudformation.GetStackPolicyInput) (*request.Request, *cloudformation.GetStackPolicyOutput) + + GetTemplate(*cloudformation.GetTemplateInput) (*cloudformation.GetTemplateOutput, error) + 
GetTemplateWithContext(aws.Context, *cloudformation.GetTemplateInput, ...request.Option) (*cloudformation.GetTemplateOutput, error) + GetTemplateRequest(*cloudformation.GetTemplateInput) (*request.Request, *cloudformation.GetTemplateOutput) + + GetTemplateSummary(*cloudformation.GetTemplateSummaryInput) (*cloudformation.GetTemplateSummaryOutput, error) + GetTemplateSummaryWithContext(aws.Context, *cloudformation.GetTemplateSummaryInput, ...request.Option) (*cloudformation.GetTemplateSummaryOutput, error) + GetTemplateSummaryRequest(*cloudformation.GetTemplateSummaryInput) (*request.Request, *cloudformation.GetTemplateSummaryOutput) + + ListChangeSets(*cloudformation.ListChangeSetsInput) (*cloudformation.ListChangeSetsOutput, error) + ListChangeSetsWithContext(aws.Context, *cloudformation.ListChangeSetsInput, ...request.Option) (*cloudformation.ListChangeSetsOutput, error) + ListChangeSetsRequest(*cloudformation.ListChangeSetsInput) (*request.Request, *cloudformation.ListChangeSetsOutput) + + ListExports(*cloudformation.ListExportsInput) (*cloudformation.ListExportsOutput, error) + ListExportsWithContext(aws.Context, *cloudformation.ListExportsInput, ...request.Option) (*cloudformation.ListExportsOutput, error) + ListExportsRequest(*cloudformation.ListExportsInput) (*request.Request, *cloudformation.ListExportsOutput) + + ListExportsPages(*cloudformation.ListExportsInput, func(*cloudformation.ListExportsOutput, bool) bool) error + ListExportsPagesWithContext(aws.Context, *cloudformation.ListExportsInput, func(*cloudformation.ListExportsOutput, bool) bool, ...request.Option) error + + ListImports(*cloudformation.ListImportsInput) (*cloudformation.ListImportsOutput, error) + ListImportsWithContext(aws.Context, *cloudformation.ListImportsInput, ...request.Option) (*cloudformation.ListImportsOutput, error) + ListImportsRequest(*cloudformation.ListImportsInput) (*request.Request, *cloudformation.ListImportsOutput) + + ListImportsPages(*cloudformation.ListImportsInput, func(*cloudformation.ListImportsOutput, bool) bool) error + ListImportsPagesWithContext(aws.Context, *cloudformation.ListImportsInput, func(*cloudformation.ListImportsOutput, bool) bool, ...request.Option) error + + ListStackInstances(*cloudformation.ListStackInstancesInput) (*cloudformation.ListStackInstancesOutput, error) + ListStackInstancesWithContext(aws.Context, *cloudformation.ListStackInstancesInput, ...request.Option) (*cloudformation.ListStackInstancesOutput, error) + ListStackInstancesRequest(*cloudformation.ListStackInstancesInput) (*request.Request, *cloudformation.ListStackInstancesOutput) + + ListStackResources(*cloudformation.ListStackResourcesInput) (*cloudformation.ListStackResourcesOutput, error) + ListStackResourcesWithContext(aws.Context, *cloudformation.ListStackResourcesInput, ...request.Option) (*cloudformation.ListStackResourcesOutput, error) + ListStackResourcesRequest(*cloudformation.ListStackResourcesInput) (*request.Request, *cloudformation.ListStackResourcesOutput) + + ListStackResourcesPages(*cloudformation.ListStackResourcesInput, func(*cloudformation.ListStackResourcesOutput, bool) bool) error + ListStackResourcesPagesWithContext(aws.Context, *cloudformation.ListStackResourcesInput, func(*cloudformation.ListStackResourcesOutput, bool) bool, ...request.Option) error + + ListStackSetOperationResults(*cloudformation.ListStackSetOperationResultsInput) (*cloudformation.ListStackSetOperationResultsOutput, error) + ListStackSetOperationResultsWithContext(aws.Context, 
*cloudformation.ListStackSetOperationResultsInput, ...request.Option) (*cloudformation.ListStackSetOperationResultsOutput, error) + ListStackSetOperationResultsRequest(*cloudformation.ListStackSetOperationResultsInput) (*request.Request, *cloudformation.ListStackSetOperationResultsOutput) + + ListStackSetOperations(*cloudformation.ListStackSetOperationsInput) (*cloudformation.ListStackSetOperationsOutput, error) + ListStackSetOperationsWithContext(aws.Context, *cloudformation.ListStackSetOperationsInput, ...request.Option) (*cloudformation.ListStackSetOperationsOutput, error) + ListStackSetOperationsRequest(*cloudformation.ListStackSetOperationsInput) (*request.Request, *cloudformation.ListStackSetOperationsOutput) + + ListStackSets(*cloudformation.ListStackSetsInput) (*cloudformation.ListStackSetsOutput, error) + ListStackSetsWithContext(aws.Context, *cloudformation.ListStackSetsInput, ...request.Option) (*cloudformation.ListStackSetsOutput, error) + ListStackSetsRequest(*cloudformation.ListStackSetsInput) (*request.Request, *cloudformation.ListStackSetsOutput) + + ListStacks(*cloudformation.ListStacksInput) (*cloudformation.ListStacksOutput, error) + ListStacksWithContext(aws.Context, *cloudformation.ListStacksInput, ...request.Option) (*cloudformation.ListStacksOutput, error) + ListStacksRequest(*cloudformation.ListStacksInput) (*request.Request, *cloudformation.ListStacksOutput) + + ListStacksPages(*cloudformation.ListStacksInput, func(*cloudformation.ListStacksOutput, bool) bool) error + ListStacksPagesWithContext(aws.Context, *cloudformation.ListStacksInput, func(*cloudformation.ListStacksOutput, bool) bool, ...request.Option) error + + SetStackPolicy(*cloudformation.SetStackPolicyInput) (*cloudformation.SetStackPolicyOutput, error) + SetStackPolicyWithContext(aws.Context, *cloudformation.SetStackPolicyInput, ...request.Option) (*cloudformation.SetStackPolicyOutput, error) + SetStackPolicyRequest(*cloudformation.SetStackPolicyInput) (*request.Request, *cloudformation.SetStackPolicyOutput) + + SignalResource(*cloudformation.SignalResourceInput) (*cloudformation.SignalResourceOutput, error) + SignalResourceWithContext(aws.Context, *cloudformation.SignalResourceInput, ...request.Option) (*cloudformation.SignalResourceOutput, error) + SignalResourceRequest(*cloudformation.SignalResourceInput) (*request.Request, *cloudformation.SignalResourceOutput) + + StopStackSetOperation(*cloudformation.StopStackSetOperationInput) (*cloudformation.StopStackSetOperationOutput, error) + StopStackSetOperationWithContext(aws.Context, *cloudformation.StopStackSetOperationInput, ...request.Option) (*cloudformation.StopStackSetOperationOutput, error) + StopStackSetOperationRequest(*cloudformation.StopStackSetOperationInput) (*request.Request, *cloudformation.StopStackSetOperationOutput) + + UpdateStack(*cloudformation.UpdateStackInput) (*cloudformation.UpdateStackOutput, error) + UpdateStackWithContext(aws.Context, *cloudformation.UpdateStackInput, ...request.Option) (*cloudformation.UpdateStackOutput, error) + UpdateStackRequest(*cloudformation.UpdateStackInput) (*request.Request, *cloudformation.UpdateStackOutput) + + UpdateStackInstances(*cloudformation.UpdateStackInstancesInput) (*cloudformation.UpdateStackInstancesOutput, error) + UpdateStackInstancesWithContext(aws.Context, *cloudformation.UpdateStackInstancesInput, ...request.Option) (*cloudformation.UpdateStackInstancesOutput, error) + UpdateStackInstancesRequest(*cloudformation.UpdateStackInstancesInput) (*request.Request, 
*cloudformation.UpdateStackInstancesOutput) + + UpdateStackSet(*cloudformation.UpdateStackSetInput) (*cloudformation.UpdateStackSetOutput, error) + UpdateStackSetWithContext(aws.Context, *cloudformation.UpdateStackSetInput, ...request.Option) (*cloudformation.UpdateStackSetOutput, error) + UpdateStackSetRequest(*cloudformation.UpdateStackSetInput) (*request.Request, *cloudformation.UpdateStackSetOutput) + + UpdateTerminationProtection(*cloudformation.UpdateTerminationProtectionInput) (*cloudformation.UpdateTerminationProtectionOutput, error) + UpdateTerminationProtectionWithContext(aws.Context, *cloudformation.UpdateTerminationProtectionInput, ...request.Option) (*cloudformation.UpdateTerminationProtectionOutput, error) + UpdateTerminationProtectionRequest(*cloudformation.UpdateTerminationProtectionInput) (*request.Request, *cloudformation.UpdateTerminationProtectionOutput) + + ValidateTemplate(*cloudformation.ValidateTemplateInput) (*cloudformation.ValidateTemplateOutput, error) + ValidateTemplateWithContext(aws.Context, *cloudformation.ValidateTemplateInput, ...request.Option) (*cloudformation.ValidateTemplateOutput, error) + ValidateTemplateRequest(*cloudformation.ValidateTemplateInput) (*request.Request, *cloudformation.ValidateTemplateOutput) + + WaitUntilChangeSetCreateComplete(*cloudformation.DescribeChangeSetInput) error + WaitUntilChangeSetCreateCompleteWithContext(aws.Context, *cloudformation.DescribeChangeSetInput, ...request.WaiterOption) error + + WaitUntilStackCreateComplete(*cloudformation.DescribeStacksInput) error + WaitUntilStackCreateCompleteWithContext(aws.Context, *cloudformation.DescribeStacksInput, ...request.WaiterOption) error + + WaitUntilStackDeleteComplete(*cloudformation.DescribeStacksInput) error + WaitUntilStackDeleteCompleteWithContext(aws.Context, *cloudformation.DescribeStacksInput, ...request.WaiterOption) error + + WaitUntilStackExists(*cloudformation.DescribeStacksInput) error + WaitUntilStackExistsWithContext(aws.Context, *cloudformation.DescribeStacksInput, ...request.WaiterOption) error + + WaitUntilStackUpdateComplete(*cloudformation.DescribeStacksInput) error + WaitUntilStackUpdateCompleteWithContext(aws.Context, *cloudformation.DescribeStacksInput, ...request.WaiterOption) error +} + +var _ CloudFormationAPI = (*cloudformation.CloudFormation)(nil) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/doc.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/doc.go new file mode 100644 index 00000000..d82dd221 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/doc.go @@ -0,0 +1,46 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package cloudformation provides the client and types for making API +// requests to AWS CloudFormation. +// +// AWS CloudFormation allows you to create and manage AWS infrastructure deployments +// predictably and repeatedly. You can use AWS CloudFormation to leverage AWS +// products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, +// Amazon Simple Notification Service, Elastic Load Balancing, and Auto Scaling +// to build highly-reliable, highly scalable, cost-effective applications without +// creating or configuring the underlying AWS infrastructure. +// +// With AWS CloudFormation, you declare all of your resources and dependencies +// in a template file. The template defines a collection of resources as a single +// unit called a stack. 
AWS CloudFormation creates and deletes all member resources +// of the stack together and manages all dependencies between the resources +// for you. +// +// For more information about AWS CloudFormation, see the AWS CloudFormation +// Product Page (http://aws.amazon.com/cloudformation/). +// +// Amazon CloudFormation makes use of other AWS products. If you need additional +// technical information about a specific AWS product, you can find the product's +// technical documentation at docs.aws.amazon.com (http://docs.aws.amazon.com/). +// +// See https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15 for more information on this service. +// +// See cloudformation package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/cloudformation/ +// +// Using the Client +// +// To contact AWS CloudFormation with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS CloudFormation client CloudFormation for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/cloudformation/#New +package cloudformation diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/errors.go new file mode 100644 index 00000000..8744a3b7 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/errors.go @@ -0,0 +1,112 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudformation + +const ( + + // ErrCodeAlreadyExistsException for service response error code + // "AlreadyExistsException". + // + // The resource with the name requested already exists. + ErrCodeAlreadyExistsException = "AlreadyExistsException" + + // ErrCodeChangeSetNotFoundException for service response error code + // "ChangeSetNotFound". + // + // The specified change set name or ID doesn't exit. To view valid change sets + // for a stack, use the ListChangeSets action. + ErrCodeChangeSetNotFoundException = "ChangeSetNotFound" + + // ErrCodeCreatedButModifiedException for service response error code + // "CreatedButModifiedException". + // + // The specified resource exists, but has been changed. + ErrCodeCreatedButModifiedException = "CreatedButModifiedException" + + // ErrCodeInsufficientCapabilitiesException for service response error code + // "InsufficientCapabilitiesException". + // + // The template contains resources with capabilities that weren't specified + // in the Capabilities parameter. + ErrCodeInsufficientCapabilitiesException = "InsufficientCapabilitiesException" + + // ErrCodeInvalidChangeSetStatusException for service response error code + // "InvalidChangeSetStatus". + // + // The specified change set can't be used to update the stack. For example, + // the change set status might be CREATE_IN_PROGRESS, or the stack status might + // be UPDATE_IN_PROGRESS. + ErrCodeInvalidChangeSetStatusException = "InvalidChangeSetStatus" + + // ErrCodeInvalidOperationException for service response error code + // "InvalidOperationException". + // + // The specified operation isn't valid. 
+ ErrCodeInvalidOperationException = "InvalidOperationException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // The quota for the resource has already been reached. + // + // For information on stack set limitations, see Limitations of StackSets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-limitations.html). + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodeNameAlreadyExistsException for service response error code + // "NameAlreadyExistsException". + // + // The specified name is already in use. + ErrCodeNameAlreadyExistsException = "NameAlreadyExistsException" + + // ErrCodeOperationIdAlreadyExistsException for service response error code + // "OperationIdAlreadyExistsException". + // + // The specified operation ID already exists. + ErrCodeOperationIdAlreadyExistsException = "OperationIdAlreadyExistsException" + + // ErrCodeOperationInProgressException for service response error code + // "OperationInProgressException". + // + // Another operation is currently in progress for this stack set. Only one operation + // can be performed for a stack set at a given time. + ErrCodeOperationInProgressException = "OperationInProgressException" + + // ErrCodeOperationNotFoundException for service response error code + // "OperationNotFoundException". + // + // The specified ID refers to an operation that doesn't exist. + ErrCodeOperationNotFoundException = "OperationNotFoundException" + + // ErrCodeStackInstanceNotFoundException for service response error code + // "StackInstanceNotFoundException". + // + // The specified stack instance doesn't exist. + ErrCodeStackInstanceNotFoundException = "StackInstanceNotFoundException" + + // ErrCodeStackSetNotEmptyException for service response error code + // "StackSetNotEmptyException". + // + // You can't yet delete this stack set, because it still contains one or more + // stack instances. Delete all stack instances from the stack set before deleting + // the stack set. + ErrCodeStackSetNotEmptyException = "StackSetNotEmptyException" + + // ErrCodeStackSetNotFoundException for service response error code + // "StackSetNotFoundException". + // + // The specified stack set doesn't exist. + ErrCodeStackSetNotFoundException = "StackSetNotFoundException" + + // ErrCodeStaleRequestException for service response error code + // "StaleRequestException". + // + // Another operation has been performed on this stack set since the specified + // operation was performed. + ErrCodeStaleRequestException = "StaleRequestException" + + // ErrCodeTokenAlreadyExistsException for service response error code + // "TokenAlreadyExistsException". + // + // A client request token already exists. + ErrCodeTokenAlreadyExistsException = "TokenAlreadyExistsException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go new file mode 100644 index 00000000..0115c5bb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go @@ -0,0 +1,93 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
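+//
+// A typical error-handling sketch against the ErrCode* constants defined in
+// errors.go (assuming a configured client named svc, the aws and awserr helper
+// packages imported by the caller, and an illustrative stack set name):
+//
+//    _, err := svc.DescribeStackSet(&cloudformation.DescribeStackSetInput{
+//        StackSetName: aws.String("my-stack-set"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok {
+//        switch aerr.Code() {
+//        case cloudformation.ErrCodeStackSetNotFoundException:
+//            // the stack set does not exist
+//        default:
+//            // some other service error
+//        }
+//    }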
+ +package cloudformation + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/query" +) + +// CloudFormation provides the API operation methods for making requests to +// AWS CloudFormation. See this package's package overview docs +// for details on the service. +// +// CloudFormation methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type CloudFormation struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "cloudformation" // Service endpoint prefix API calls made to. + EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. +) + +// New creates a new instance of the CloudFormation client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a CloudFormation client from just a session. +// svc := cloudformation.New(mySession) +// +// // Create a CloudFormation client with additional configuration +// svc := cloudformation.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *CloudFormation { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *CloudFormation { + svc := &CloudFormation{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2010-05-15", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(query.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(query.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(query.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(query.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a CloudFormation operation and runs any +// custom request initialization. +func (c *CloudFormation) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go new file mode 100644 index 00000000..afe8a1b2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go @@ -0,0 +1,335 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
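+//
+// A minimal waiter usage sketch (assuming a configured *CloudFormation client
+// named svc and an illustrative stack name):
+//
+//    err := svc.WaitUntilStackCreateComplete(&cloudformation.DescribeStacksInput{
+//        StackName: aws.String("my-stack"),
+//    })
+//    if err != nil {
+//        // the stack did not reach CREATE_COMPLETE within the waiter's window
+//    }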
+ +package cloudformation + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilChangeSetCreateComplete uses the AWS CloudFormation API operation +// DescribeChangeSet to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *CloudFormation) WaitUntilChangeSetCreateComplete(input *DescribeChangeSetInput) error { + return c.WaitUntilChangeSetCreateCompleteWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilChangeSetCreateCompleteWithContext is an extended version of WaitUntilChangeSetCreateComplete. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) WaitUntilChangeSetCreateCompleteWithContext(ctx aws.Context, input *DescribeChangeSetInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilChangeSetCreateComplete", + MaxAttempts: 120, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathWaiterMatch, Argument: "Status", + Expected: "CREATE_COMPLETE", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "Status", + Expected: "FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ValidationError", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeChangeSetInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeChangeSetRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilStackCreateComplete uses the AWS CloudFormation API operation +// DescribeStacks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *CloudFormation) WaitUntilStackCreateComplete(input *DescribeStacksInput) error { + return c.WaitUntilStackCreateCompleteWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilStackCreateCompleteWithContext is an extended version of WaitUntilStackCreateComplete. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
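+//
+// A context-based sketch (assuming svc is a configured client and that the
+// caller imports the standard context and time packages; the timeout below is
+// illustrative):
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
+//    defer cancel()
+//    err := svc.WaitUntilStackCreateCompleteWithContext(ctx, &cloudformation.DescribeStacksInput{
+//        StackName: aws.String("my-stack"),
+//    })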
+func (c *CloudFormation) WaitUntilStackCreateCompleteWithContext(ctx aws.Context, input *DescribeStacksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilStackCreateComplete", + MaxAttempts: 120, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "CREATE_COMPLETE", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "CREATE_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "DELETE_COMPLETE", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "DELETE_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "ROLLBACK_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "ROLLBACK_COMPLETE", + }, + { + State: request.FailureWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ValidationError", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeStacksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStacksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilStackDeleteComplete uses the AWS CloudFormation API operation +// DescribeStacks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *CloudFormation) WaitUntilStackDeleteComplete(input *DescribeStacksInput) error { + return c.WaitUntilStackDeleteCompleteWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilStackDeleteCompleteWithContext is an extended version of WaitUntilStackDeleteComplete. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CloudFormation) WaitUntilStackDeleteCompleteWithContext(ctx aws.Context, input *DescribeStacksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilStackDeleteComplete", + MaxAttempts: 120, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "DELETE_COMPLETE", + }, + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ValidationError", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "DELETE_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "CREATE_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "ROLLBACK_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "UPDATE_ROLLBACK_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "UPDATE_ROLLBACK_IN_PROGRESS", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeStacksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStacksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilStackExists uses the AWS CloudFormation API operation +// DescribeStacks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *CloudFormation) WaitUntilStackExists(input *DescribeStacksInput) error { + return c.WaitUntilStackExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilStackExistsWithContext is an extended version of WaitUntilStackExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) WaitUntilStackExistsWithContext(ctx aws.Context, input *DescribeStacksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilStackExists", + MaxAttempts: 20, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 200, + }, + { + State: request.RetryWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ValidationError", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeStacksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStacksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) 
+ + return w.WaitWithContext(ctx) +} + +// WaitUntilStackUpdateComplete uses the AWS CloudFormation API operation +// DescribeStacks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *CloudFormation) WaitUntilStackUpdateComplete(input *DescribeStacksInput) error { + return c.WaitUntilStackUpdateCompleteWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilStackUpdateCompleteWithContext is an extended version of WaitUntilStackUpdateComplete. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) WaitUntilStackUpdateCompleteWithContext(ctx aws.Context, input *DescribeStacksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilStackUpdateComplete", + MaxAttempts: 120, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "UPDATE_COMPLETE", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "UPDATE_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "UPDATE_ROLLBACK_FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Stacks[].StackStatus", + Expected: "UPDATE_ROLLBACK_COMPLETE", + }, + { + State: request.FailureWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ValidationError", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeStacksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStacksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go new file mode 100644 index 00000000..bf9d7a4e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go @@ -0,0 +1,12998 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dynamodb + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opBatchGetItem = "BatchGetItem" + +// BatchGetItemRequest generates a "aws/request.Request" representing the +// client's request for the BatchGetItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See BatchGetItem for more information on using the BatchGetItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchGetItemRequest method. +// req, resp := client.BatchGetItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/BatchGetItem +func (c *DynamoDB) BatchGetItemRequest(input *BatchGetItemInput) (req *request.Request, output *BatchGetItemOutput) { + op := &request.Operation{ + Name: opBatchGetItem, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"RequestItems"}, + OutputTokens: []string{"UnprocessedKeys"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &BatchGetItemInput{} + } + + output = &BatchGetItemOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchGetItem API operation for Amazon DynamoDB. +// +// The BatchGetItem operation returns the attributes of one or more items from +// one or more tables. You identify requested items by primary key. +// +// A single operation can retrieve up to 16 MB of data, which can contain as +// many as 100 items. BatchGetItem will return a partial result if the response +// size limit is exceeded, the table's provisioned throughput is exceeded, or +// an internal processing failure occurs. If a partial result is returned, the +// operation returns a value for UnprocessedKeys. You can use this value to +// retry the operation starting with the next item to get. +// +// If you request more than 100 items BatchGetItem will return a ValidationException +// with the message "Too many items requested for the BatchGetItem call". +// +// For example, if you ask to retrieve 100 items, but each individual item is +// 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB +// limit). It also returns an appropriate UnprocessedKeys value so you can get +// the next page of results. If desired, your application can include its own +// logic to assemble the pages of results into one data set. +// +// If none of the items can be processed due to insufficient provisioned throughput +// on all of the tables in the request, then BatchGetItem will return a ProvisionedThroughputExceededException. +// If at least one of the items is successfully processed, then BatchGetItem +// completes successfully, while returning the keys of the unread items in UnprocessedKeys. +// +// If DynamoDB returns any unprocessed items, you should retry the batch operation +// on those items. However, we strongly recommend that you use an exponential +// backoff algorithm. If you retry the batch operation immediately, the underlying +// read or write requests can still fail due to throttling on the individual +// tables. If you delay the batch operation using exponential backoff, the individual +// requests in the batch are much more likely to succeed. +// +// For more information, see Batch Operations and Error Handling (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations) +// in the Amazon DynamoDB Developer Guide. +// +// By default, BatchGetItem performs eventually consistent reads on every table +// in the request. 
If you want strongly consistent reads instead, you can set +// ConsistentRead to true for any or all tables. +// +// In order to minimize response latency, BatchGetItem retrieves items in parallel. +// +// When designing your application, keep in mind that DynamoDB does not return +// items in any particular order. To help parse the response by item, include +// the primary key values for the items in your request in the ProjectionExpression +// parameter. +// +// If a requested item does not exist, it is not returned in the result. Requests +// for nonexistent items consume the minimum read capacity units according to +// the type of read. For more information, see Capacity Units Calculations (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#CapacityUnitCalculations) +// in the Amazon DynamoDB Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation BatchGetItem for usage and error information. +// +// Returned Error Codes: +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/BatchGetItem +func (c *DynamoDB) BatchGetItem(input *BatchGetItemInput) (*BatchGetItemOutput, error) { + req, out := c.BatchGetItemRequest(input) + return out, req.Send() +} + +// BatchGetItemWithContext is the same as BatchGetItem with the addition of +// the ability to pass a context and additional request options. +// +// See BatchGetItem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) BatchGetItemWithContext(ctx aws.Context, input *BatchGetItemInput, opts ...request.Option) (*BatchGetItemOutput, error) { + req, out := c.BatchGetItemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// BatchGetItemPages iterates over the pages of a BatchGetItem operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See BatchGetItem method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. 
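Before the paging example that follows, here is a hedged sketch of a plain BatchGetItem call and the UnprocessedKeys retry behaviour described above. The table name, key attributes, and fixed delay are illustrative assumptions; real code should use exponential backoff, as the documentation recommends:

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	db := dynamodb.New(session.Must(session.NewSession()))

	// Request two items from a hypothetical "Music" table by primary key.
	requestItems := map[string]*dynamodb.KeysAndAttributes{
		"Music": {
			Keys: []map[string]*dynamodb.AttributeValue{
				{"Artist": {S: aws.String("No One You Know")}, "SongTitle": {S: aws.String("Call Me Today")}},
				{"Artist": {S: aws.String("Acme Band")}, "SongTitle": {S: aws.String("Happy Day")}},
			},
			ConsistentRead: aws.Bool(true), // opt in to strongly consistent reads
		},
	}

	for len(requestItems) > 0 {
		out, err := db.BatchGetItem(&dynamodb.BatchGetItemInput{RequestItems: requestItems})
		if err != nil {
			fmt.Println("BatchGetItem failed:", err)
			return
		}
		for table, items := range out.Responses {
			fmt.Printf("%s: %d item(s)\n", table, len(items))
		}
		// Anything DynamoDB could not read comes back in UnprocessedKeys;
		// resubmit just those keys (with a crude fixed delay here).
		requestItems = out.UnprocessedKeys
		if len(requestItems) > 0 {
			time.Sleep(500 * time.Millisecond)
		}
	}
}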
+// +// // Example iterating over at most 3 pages of a BatchGetItem operation. +// pageNum := 0 +// err := client.BatchGetItemPages(params, +// func(page *BatchGetItemOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DynamoDB) BatchGetItemPages(input *BatchGetItemInput, fn func(*BatchGetItemOutput, bool) bool) error { + return c.BatchGetItemPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// BatchGetItemPagesWithContext same as BatchGetItemPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) BatchGetItemPagesWithContext(ctx aws.Context, input *BatchGetItemInput, fn func(*BatchGetItemOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *BatchGetItemInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.BatchGetItemRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*BatchGetItemOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opBatchWriteItem = "BatchWriteItem" + +// BatchWriteItemRequest generates a "aws/request.Request" representing the +// client's request for the BatchWriteItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchWriteItem for more information on using the BatchWriteItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchWriteItemRequest method. +// req, resp := client.BatchWriteItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/BatchWriteItem +func (c *DynamoDB) BatchWriteItemRequest(input *BatchWriteItemInput) (req *request.Request, output *BatchWriteItemOutput) { + op := &request.Operation{ + Name: opBatchWriteItem, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchWriteItemInput{} + } + + output = &BatchWriteItemOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchWriteItem API operation for Amazon DynamoDB. +// +// The BatchWriteItem operation puts or deletes multiple items in one or more +// tables. A single call to BatchWriteItem can write up to 16 MB of data, which +// can comprise as many as 25 put or delete requests. Individual items to be +// written can be as large as 400 KB. +// +// BatchWriteItem cannot update items. To update items, use the UpdateItem action. +// +// The individual PutItem and DeleteItem operations specified in BatchWriteItem +// are atomic; however BatchWriteItem as a whole is not. 
If any requested operations +// fail because the table's provisioned throughput is exceeded or an internal +// processing failure occurs, the failed operations are returned in the UnprocessedItems +// response parameter. You can investigate and optionally resend the requests. +// Typically, you would call BatchWriteItem in a loop. Each iteration would +// check for unprocessed items and submit a new BatchWriteItem request with +// those unprocessed items until all items have been processed. +// +// Note that if none of the items can be processed due to insufficient provisioned +// throughput on all of the tables in the request, then BatchWriteItem will +// return a ProvisionedThroughputExceededException. +// +// If DynamoDB returns any unprocessed items, you should retry the batch operation +// on those items. However, we strongly recommend that you use an exponential +// backoff algorithm. If you retry the batch operation immediately, the underlying +// read or write requests can still fail due to throttling on the individual +// tables. If you delay the batch operation using exponential backoff, the individual +// requests in the batch are much more likely to succeed. +// +// For more information, see Batch Operations and Error Handling (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations) +// in the Amazon DynamoDB Developer Guide. +// +// With BatchWriteItem, you can efficiently write or delete large amounts of +// data, such as from Amazon Elastic MapReduce (EMR), or copy data from another +// database into DynamoDB. In order to improve performance with these large-scale +// operations, BatchWriteItem does not behave in the same way as individual +// PutItem and DeleteItem calls would. For example, you cannot specify conditions +// on individual put and delete requests, and BatchWriteItem does not return +// deleted items in the response. +// +// If you use a programming language that supports concurrency, you can use +// threads to write items in parallel. Your application must include the necessary +// logic to manage the threads. With languages that don't support threading, +// you must update or delete the specified items one at a time. In both situations, +// BatchWriteItem performs the specified put and delete operations in parallel, +// giving you the power of the thread pool approach without having to introduce +// complexity into your application. +// +// Parallel processing reduces latency, but each specified put and delete request +// consumes the same number of write capacity units whether it is processed +// in parallel or not. Delete operations on nonexistent items consume one write +// capacity unit. +// +// If one or more of the following is true, DynamoDB rejects the entire batch +// write operation: +// +// * One or more tables specified in the BatchWriteItem request does not +// exist. +// +// * Primary key attributes specified on an item in the request do not match +// those in the corresponding table's primary key schema. +// +// * You try to perform multiple operations on the same item in the same +// BatchWriteItem request. For example, you cannot put and delete the same +// item in the same BatchWriteItem request. +// +// * Your request contains at least two items with identical hash and range +// keys (which essentially is two put operations). +// +// * There are more than 25 requests in the batch. +// +// * Any individual item in a batch exceeds 400 KB. +// +// * The total request size exceeds 16 MB. 
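The "call BatchWriteItem in a loop until UnprocessedItems is empty" pattern described above can be sketched as follows. The table and attribute names are invented, and the constant sleep stands in for the exponential backoff the documentation recommends:

package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	db := dynamodb.New(session.Must(session.NewSession()))

	// One put and one delete against a hypothetical "Music" table.
	writes := map[string][]*dynamodb.WriteRequest{
		"Music": {
			{PutRequest: &dynamodb.PutRequest{Item: map[string]*dynamodb.AttributeValue{
				"Artist":    {S: aws.String("Acme Band")},
				"SongTitle": {S: aws.String("Happy Day")},
			}}},
			{DeleteRequest: &dynamodb.DeleteRequest{Key: map[string]*dynamodb.AttributeValue{
				"Artist":    {S: aws.String("No One You Know")},
				"SongTitle": {S: aws.String("Call Me Today")},
			}}},
		},
	}

	// Resubmit whatever comes back in UnprocessedItems until nothing is left.
	for len(writes) > 0 {
		out, err := db.BatchWriteItem(&dynamodb.BatchWriteItemInput{RequestItems: writes})
		if err != nil {
			log.Fatal(err)
		}
		writes = out.UnprocessedItems
		if len(writes) > 0 {
			time.Sleep(time.Second)
		}
	}
}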
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation BatchWriteItem for usage and error information. +// +// Returned Error Codes: +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeItemCollectionSizeLimitExceededException "ItemCollectionSizeLimitExceededException" +// An item collection is too large. This exception is only returned for tables +// that have one or more local secondary indexes. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/BatchWriteItem +func (c *DynamoDB) BatchWriteItem(input *BatchWriteItemInput) (*BatchWriteItemOutput, error) { + req, out := c.BatchWriteItemRequest(input) + return out, req.Send() +} + +// BatchWriteItemWithContext is the same as BatchWriteItem with the addition of +// the ability to pass a context and additional request options. +// +// See BatchWriteItem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) BatchWriteItemWithContext(ctx aws.Context, input *BatchWriteItemInput, opts ...request.Option) (*BatchWriteItemOutput, error) { + req, out := c.BatchWriteItemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateBackup = "CreateBackup" + +// CreateBackupRequest generates a "aws/request.Request" representing the +// client's request for the CreateBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateBackup for more information on using the CreateBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateBackupRequest method. 
+// req, resp := client.CreateBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateBackup +func (c *DynamoDB) CreateBackupRequest(input *CreateBackupInput) (req *request.Request, output *CreateBackupOutput) { + op := &request.Operation{ + Name: opCreateBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateBackupInput{} + } + + output = &CreateBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateBackup API operation for Amazon DynamoDB. +// +// Creates a backup for an existing table. +// +// Each time you create an On-Demand Backup, the entire table data is backed +// up. There is no limit to the number of on-demand backups that can be taken. +// +// When you create an On-Demand Backup, a time marker of the request is cataloged, +// and the backup is created asynchronously, by applying all changes until the +// time of the request to the last full table snapshot. Backup requests are +// processed instantaneously and become available for restore within minutes. +// +// You can call CreateBackup at a maximum rate of 50 times per second. +// +// All backups in DynamoDB work without consuming any provisioned throughput +// on the table. +// +// If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed +// to contain all data committed to the table up to 14:24:00, and data committed +// after 14:26:00 will not be. The backup may or may not contain data modifications +// made between 14:24:00 and 14:26:00. On-Demand Backup does not support causal +// consistency. +// +// Along with data, the following are also included on the backups: +// +// * Global secondary indexes (GSIs) +// +// * Local secondary indexes (LSIs) +// +// * Streams +// +// * Provisioned read and write capacity +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation CreateBackup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// * ErrCodeTableInUseException "TableInUseException" +// A target table with the specified name is either being created or deleted. +// +// * ErrCodeContinuousBackupsUnavailableException "ContinuousBackupsUnavailableException" +// Backups have not yet been enabled for this table. +// +// * ErrCodeBackupInUseException "BackupInUseException" +// There is another ongoing conflicting backup control plane operation on the +// table. The backups is either being created, deleted or restored to a table. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. 
Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateBackup +func (c *DynamoDB) CreateBackup(input *CreateBackupInput) (*CreateBackupOutput, error) { + req, out := c.CreateBackupRequest(input) + return out, req.Send() +} + +// CreateBackupWithContext is the same as CreateBackup with the addition of +// the ability to pass a context and additional request options. +// +// See CreateBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) CreateBackupWithContext(ctx aws.Context, input *CreateBackupInput, opts ...request.Option) (*CreateBackupOutput, error) { + req, out := c.CreateBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateGlobalTable = "CreateGlobalTable" + +// CreateGlobalTableRequest generates a "aws/request.Request" representing the +// client's request for the CreateGlobalTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateGlobalTable for more information on using the CreateGlobalTable +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateGlobalTableRequest method. +// req, resp := client.CreateGlobalTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateGlobalTable +func (c *DynamoDB) CreateGlobalTableRequest(input *CreateGlobalTableInput) (req *request.Request, output *CreateGlobalTableOutput) { + op := &request.Operation{ + Name: opCreateGlobalTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateGlobalTableInput{} + } + + output = &CreateGlobalTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateGlobalTable API operation for Amazon DynamoDB. +// +// Creates a global table from an existing table. A global table creates a replication +// relationship between two or more DynamoDB tables with the same table name +// in the provided regions. +// +// Tables can only be added as the replicas of a global table group under the +// following conditions: +// +// * The tables must have the same name. +// +// * The tables must contain no items. +// +// * The tables must have the same hash key and sort key (if present). +// +// * The tables must have DynamoDB Streams enabled (NEW_AND_OLD_IMAGES). +// +// +// * The tables must have same provisioned and maximum write capacity units. 
+// +// +// If global secondary indexes are specified, then the following conditions +// must also be met: +// +// * The global secondary indexes must have the same name. +// +// * The global secondary indexes must have the same hash key and sort key +// (if present). +// +// * The global secondary indexes must have the same provisioned and maximum +// write capacity units. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation CreateGlobalTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeGlobalTableAlreadyExistsException "GlobalTableAlreadyExistsException" +// The specified global table already exists. +// +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateGlobalTable +func (c *DynamoDB) CreateGlobalTable(input *CreateGlobalTableInput) (*CreateGlobalTableOutput, error) { + req, out := c.CreateGlobalTableRequest(input) + return out, req.Send() +} + +// CreateGlobalTableWithContext is the same as CreateGlobalTable with the addition of +// the ability to pass a context and additional request options. +// +// See CreateGlobalTable for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) CreateGlobalTableWithContext(ctx aws.Context, input *CreateGlobalTableInput, opts ...request.Option) (*CreateGlobalTableOutput, error) { + req, out := c.CreateGlobalTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateTable = "CreateTable" + +// CreateTableRequest generates a "aws/request.Request" representing the +// client's request for the CreateTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateTable for more information on using the CreateTable +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateTableRequest method. +// req, resp := client.CreateTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateTable +func (c *DynamoDB) CreateTableRequest(input *CreateTableInput) (req *request.Request, output *CreateTableOutput) { + op := &request.Operation{ + Name: opCreateTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateTableInput{} + } + + output = &CreateTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateTable API operation for Amazon DynamoDB. +// +// The CreateTable operation adds a new table to your account. In an AWS account, +// table names must be unique within each region. That is, you can have two +// tables with same name if you create the tables in different regions. +// +// CreateTable is an asynchronous operation. Upon receiving a CreateTable request, +// DynamoDB immediately returns a response with a TableStatus of CREATING. After +// the table is created, DynamoDB sets the TableStatus to ACTIVE. You can perform +// read and write operations only on an ACTIVE table. +// +// You can optionally define secondary indexes on the new table, as part of +// the CreateTable operation. If you want to create multiple tables with secondary +// indexes on them, you must create the tables sequentially. Only one table +// with secondary indexes can be in the CREATING state at any given time. +// +// You can use the DescribeTable action to check the table status. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation CreateTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
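Because CreateTable is asynchronous, a caller typically issues the request and then waits for the table to leave the CREATING state before reading or writing. A hedged end-to-end sketch (table name, key schema, and throughput values are illustrative assumptions) could look like this:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	db := dynamodb.New(session.Must(session.NewSession()))

	_, err := db.CreateTable(&dynamodb.CreateTableInput{
		TableName: aws.String("Music"),
		AttributeDefinitions: []*dynamodb.AttributeDefinition{
			{AttributeName: aws.String("Artist"), AttributeType: aws.String("S")},
			{AttributeName: aws.String("SongTitle"), AttributeType: aws.String("S")},
		},
		KeySchema: []*dynamodb.KeySchemaElement{
			{AttributeName: aws.String("Artist"), KeyType: aws.String("HASH")},
			{AttributeName: aws.String("SongTitle"), KeyType: aws.String("RANGE")},
		},
		ProvisionedThroughput: &dynamodb.ProvisionedThroughput{
			ReadCapacityUnits:  aws.Int64(5),
			WriteCapacityUnits: aws.Int64(5),
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// CreateTable returns while the table is still CREATING; the SDK's
	// WaitUntilTableExists waiter polls DescribeTable until it is ACTIVE.
	if err := db.WaitUntilTableExists(&dynamodb.DescribeTableInput{TableName: aws.String("Music")}); err != nil {
		log.Fatal(err)
	}
}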
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateTable +func (c *DynamoDB) CreateTable(input *CreateTableInput) (*CreateTableOutput, error) { + req, out := c.CreateTableRequest(input) + return out, req.Send() +} + +// CreateTableWithContext is the same as CreateTable with the addition of +// the ability to pass a context and additional request options. +// +// See CreateTable for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) CreateTableWithContext(ctx aws.Context, input *CreateTableInput, opts ...request.Option) (*CreateTableOutput, error) { + req, out := c.CreateTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBackup = "DeleteBackup" + +// DeleteBackupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBackup for more information on using the DeleteBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBackupRequest method. +// req, resp := client.DeleteBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DeleteBackup +func (c *DynamoDB) DeleteBackupRequest(input *DeleteBackupInput) (req *request.Request, output *DeleteBackupOutput) { + op := &request.Operation{ + Name: opDeleteBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteBackupInput{} + } + + output = &DeleteBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteBackup API operation for Amazon DynamoDB. +// +// Deletes an existing backup of a table. +// +// You can call DeleteBackup at a maximum rate of 10 times per second. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DeleteBackup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBackupNotFoundException "BackupNotFoundException" +// Backup not found for the given BackupARN. +// +// * ErrCodeBackupInUseException "BackupInUseException" +// There is another ongoing conflicting backup control plane operation on the +// table. The backups is either being created, deleted or restored to a table. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. 
These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DeleteBackup +func (c *DynamoDB) DeleteBackup(input *DeleteBackupInput) (*DeleteBackupOutput, error) { + req, out := c.DeleteBackupRequest(input) + return out, req.Send() +} + +// DeleteBackupWithContext is the same as DeleteBackup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DeleteBackupWithContext(ctx aws.Context, input *DeleteBackupInput, opts ...request.Option) (*DeleteBackupOutput, error) { + req, out := c.DeleteBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteItem = "DeleteItem" + +// DeleteItemRequest generates a "aws/request.Request" representing the +// client's request for the DeleteItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteItem for more information on using the DeleteItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteItemRequest method. +// req, resp := client.DeleteItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DeleteItem +func (c *DynamoDB) DeleteItemRequest(input *DeleteItemInput) (req *request.Request, output *DeleteItemOutput) { + op := &request.Operation{ + Name: opDeleteItem, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteItemInput{} + } + + output = &DeleteItemOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteItem API operation for Amazon DynamoDB. +// +// Deletes a single item in a table by primary key. You can perform a conditional +// delete operation that deletes the item if it exists, or if it has an expected +// attribute value. +// +// In addition to deleting an item, you can also return the item's attribute +// values in the same operation, using the ReturnValues parameter. +// +// Unless you specify conditions, the DeleteItem is an idempotent operation; +// running it multiple times on the same item or attribute does not result in +// an error response. +// +// Conditional deletes are useful for deleting items only if specific conditions +// are met. 
If those conditions are met, DynamoDB performs the delete. Otherwise, +// the item is not deleted. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DeleteItem for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConditionalCheckFailedException "ConditionalCheckFailedException" +// A condition specified in the operation could not be evaluated. +// +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeItemCollectionSizeLimitExceededException "ItemCollectionSizeLimitExceededException" +// An item collection is too large. This exception is only returned for tables +// that have one or more local secondary indexes. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DeleteItem +func (c *DynamoDB) DeleteItem(input *DeleteItemInput) (*DeleteItemOutput, error) { + req, out := c.DeleteItemRequest(input) + return out, req.Send() +} + +// DeleteItemWithContext is the same as DeleteItem with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteItem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DeleteItemWithContext(ctx aws.Context, input *DeleteItemInput, opts ...request.Option) (*DeleteItemOutput, error) { + req, out := c.DeleteItemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteTable = "DeleteTable" + +// DeleteTableRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteTable for more information on using the DeleteTable +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteTableRequest method. 
+// req, resp := client.DeleteTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DeleteTable +func (c *DynamoDB) DeleteTableRequest(input *DeleteTableInput) (req *request.Request, output *DeleteTableOutput) { + op := &request.Operation{ + Name: opDeleteTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteTableInput{} + } + + output = &DeleteTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteTable API operation for Amazon DynamoDB. +// +// The DeleteTable operation deletes a table and all of its items. After a DeleteTable +// request, the specified table is in the DELETING state until DynamoDB completes +// the deletion. If the table is in the ACTIVE state, you can delete it. If +// a table is in CREATING or UPDATING states, then DynamoDB returns a ResourceInUseException. +// If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. +// If table is already in the DELETING state, no error is returned. +// +// DynamoDB might continue to accept data read and write operations, such as +// GetItem and PutItem, on a table in the DELETING state until the table deletion +// is complete. +// +// When you delete a table, any indexes on that table are also deleted. +// +// If you have DynamoDB Streams enabled on the table, then the corresponding +// stream on that table goes into the DISABLED state, and the stream is automatically +// deleted after 24 hours. +// +// Use the DescribeTable action to check the status of the table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DeleteTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
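A hedged sketch of DeleteTable combined with the awserr runtime type-assertion pattern that the error documentation above keeps referring to. The table name is an assumption, and treating ResourceNotFoundException as "already deleted" is just one possible policy:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	db := dynamodb.New(session.Must(session.NewSession()))

	_, err := db.DeleteTable(&dynamodb.DeleteTableInput{TableName: aws.String("Music")})
	if err != nil {
		// A runtime type assertion to awserr.Error exposes the service error
		// code, e.g. to treat a missing table as nothing left to delete.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == dynamodb.ErrCodeResourceNotFoundException {
			fmt.Println("table does not exist, nothing to delete")
			return
		}
		log.Fatal(err)
	}
	fmt.Println("table deletion started; it remains in DELETING until DynamoDB finishes")
}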
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DeleteTable +func (c *DynamoDB) DeleteTable(input *DeleteTableInput) (*DeleteTableOutput, error) { + req, out := c.DeleteTableRequest(input) + return out, req.Send() +} + +// DeleteTableWithContext is the same as DeleteTable with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteTable for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DeleteTableWithContext(ctx aws.Context, input *DeleteTableInput, opts ...request.Option) (*DeleteTableOutput, error) { + req, out := c.DeleteTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeBackup = "DescribeBackup" + +// DescribeBackupRequest generates a "aws/request.Request" representing the +// client's request for the DescribeBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeBackup for more information on using the DescribeBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeBackupRequest method. +// req, resp := client.DescribeBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeBackup +func (c *DynamoDB) DescribeBackupRequest(input *DescribeBackupInput) (req *request.Request, output *DescribeBackupOutput) { + op := &request.Operation{ + Name: opDescribeBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeBackupInput{} + } + + output = &DescribeBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeBackup API operation for Amazon DynamoDB. +// +// Describes an existing backup of a table. +// +// You can call DescribeBackup at a maximum rate of 10 times per second. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeBackup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBackupNotFoundException "BackupNotFoundException" +// Backup not found for the given BackupARN. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
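For completeness, a rough DescribeBackup call might look like the following. The backup ARN is a placeholder, and the printed fields assume the BackupDetails shape exposed by this API version:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	db := dynamodb.New(session.Must(session.NewSession()))

	// DescribeBackup identifies backups by ARN; this one is a placeholder.
	out, err := db.DescribeBackup(&dynamodb.DescribeBackupInput{
		BackupArn: aws.String("arn:aws:dynamodb:us-east-1:123456789012:table/Music/backup/01536543667000-1a2b3c4d"),
	})
	if err != nil {
		log.Fatal(err)
	}

	details := out.BackupDescription.BackupDetails
	fmt.Println("name:  ", aws.StringValue(details.BackupName))
	fmt.Println("status:", aws.StringValue(details.BackupStatus))
}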
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeBackup +func (c *DynamoDB) DescribeBackup(input *DescribeBackupInput) (*DescribeBackupOutput, error) { + req, out := c.DescribeBackupRequest(input) + return out, req.Send() +} + +// DescribeBackupWithContext is the same as DescribeBackup with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeBackupWithContext(ctx aws.Context, input *DescribeBackupInput, opts ...request.Option) (*DescribeBackupOutput, error) { + req, out := c.DescribeBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeContinuousBackups = "DescribeContinuousBackups" + +// DescribeContinuousBackupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeContinuousBackups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeContinuousBackups for more information on using the DescribeContinuousBackups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeContinuousBackupsRequest method. +// req, resp := client.DescribeContinuousBackupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeContinuousBackups +func (c *DynamoDB) DescribeContinuousBackupsRequest(input *DescribeContinuousBackupsInput) (req *request.Request, output *DescribeContinuousBackupsOutput) { + op := &request.Operation{ + Name: opDescribeContinuousBackups, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeContinuousBackupsInput{} + } + + output = &DescribeContinuousBackupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeContinuousBackups API operation for Amazon DynamoDB. +// +// Checks the status of continuous backups and point in time recovery on the +// specified table. Continuous backups are ENABLED on all tables at table creation. +// If point in time recovery is enabled, PointInTimeRecoveryStatus will be set +// to ENABLED. +// +// Once continuous backups and point in time recovery are enabled, you can restore +// to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime. +// +// LatestRestorableDateTime is typically 5 minutes before the current time. +// You can restore your table to any point in time during the last 35 days. +// +// You can call DescribeContinuousBackups at a maximum rate of 10 times per +// second. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeContinuousBackups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeContinuousBackups +func (c *DynamoDB) DescribeContinuousBackups(input *DescribeContinuousBackupsInput) (*DescribeContinuousBackupsOutput, error) { + req, out := c.DescribeContinuousBackupsRequest(input) + return out, req.Send() +} + +// DescribeContinuousBackupsWithContext is the same as DescribeContinuousBackups with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeContinuousBackups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeContinuousBackupsWithContext(ctx aws.Context, input *DescribeContinuousBackupsInput, opts ...request.Option) (*DescribeContinuousBackupsOutput, error) { + req, out := c.DescribeContinuousBackupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeGlobalTable = "DescribeGlobalTable" + +// DescribeGlobalTableRequest generates a "aws/request.Request" representing the +// client's request for the DescribeGlobalTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeGlobalTable for more information on using the DescribeGlobalTable +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeGlobalTableRequest method. +// req, resp := client.DescribeGlobalTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeGlobalTable +func (c *DynamoDB) DescribeGlobalTableRequest(input *DescribeGlobalTableInput) (req *request.Request, output *DescribeGlobalTableOutput) { + op := &request.Operation{ + Name: opDescribeGlobalTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeGlobalTableInput{} + } + + output = &DescribeGlobalTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeGlobalTable API operation for Amazon DynamoDB. +// +// Returns information about the specified global table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
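+//
+// Illustrative usage sketch (not part of the generated SDK documentation),
+// assuming svc := dynamodb.New(session.Must(session.NewSession())) and a
+// hypothetical global table name:
+//
+//    out, err := svc.DescribeGlobalTable(&dynamodb.DescribeGlobalTableInput{
+//        GlobalTableName: aws.String("Music"),
+//    })
+//    if err != nil {
+//        // e.g. GlobalTableNotFoundException if no global table by that name exists.
+//        return err
+//    }
+//    fmt.Println(out) // replica regions and status of the global table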
+// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeGlobalTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeGlobalTableNotFoundException "GlobalTableNotFoundException" +// The specified global table does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeGlobalTable +func (c *DynamoDB) DescribeGlobalTable(input *DescribeGlobalTableInput) (*DescribeGlobalTableOutput, error) { + req, out := c.DescribeGlobalTableRequest(input) + return out, req.Send() +} + +// DescribeGlobalTableWithContext is the same as DescribeGlobalTable with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeGlobalTable for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeGlobalTableWithContext(ctx aws.Context, input *DescribeGlobalTableInput, opts ...request.Option) (*DescribeGlobalTableOutput, error) { + req, out := c.DescribeGlobalTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeGlobalTableSettings = "DescribeGlobalTableSettings" + +// DescribeGlobalTableSettingsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeGlobalTableSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeGlobalTableSettings for more information on using the DescribeGlobalTableSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeGlobalTableSettingsRequest method. +// req, resp := client.DescribeGlobalTableSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeGlobalTableSettings +func (c *DynamoDB) DescribeGlobalTableSettingsRequest(input *DescribeGlobalTableSettingsInput) (req *request.Request, output *DescribeGlobalTableSettingsOutput) { + op := &request.Operation{ + Name: opDescribeGlobalTableSettings, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeGlobalTableSettingsInput{} + } + + output = &DescribeGlobalTableSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeGlobalTableSettings API operation for Amazon DynamoDB. +// +// Describes region specific settings for a global table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
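+//
+// Illustrative usage sketch (not part of the generated SDK documentation),
+// assuming svc := dynamodb.New(session.Must(session.NewSession())) and a
+// hypothetical global table name:
+//
+//    out, err := svc.DescribeGlobalTableSettings(&dynamodb.DescribeGlobalTableSettingsInput{
+//        GlobalTableName: aws.String("Music"),
+//    })
+//    if err != nil {
+//        return err
+//    }
+//    fmt.Println(out) // per-region replica settings for the global table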
+// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeGlobalTableSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeGlobalTableNotFoundException "GlobalTableNotFoundException" +// The specified global table does not exist. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeGlobalTableSettings +func (c *DynamoDB) DescribeGlobalTableSettings(input *DescribeGlobalTableSettingsInput) (*DescribeGlobalTableSettingsOutput, error) { + req, out := c.DescribeGlobalTableSettingsRequest(input) + return out, req.Send() +} + +// DescribeGlobalTableSettingsWithContext is the same as DescribeGlobalTableSettings with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeGlobalTableSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeGlobalTableSettingsWithContext(ctx aws.Context, input *DescribeGlobalTableSettingsInput, opts ...request.Option) (*DescribeGlobalTableSettingsOutput, error) { + req, out := c.DescribeGlobalTableSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeLimits = "DescribeLimits" + +// DescribeLimitsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeLimits operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeLimits for more information on using the DescribeLimits +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeLimitsRequest method. +// req, resp := client.DescribeLimitsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeLimits +func (c *DynamoDB) DescribeLimitsRequest(input *DescribeLimitsInput) (req *request.Request, output *DescribeLimitsOutput) { + op := &request.Operation{ + Name: opDescribeLimits, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeLimitsInput{} + } + + output = &DescribeLimitsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeLimits API operation for Amazon DynamoDB. +// +// Returns the current provisioned-capacity limits for your AWS account in a +// region, both for the region as a whole and for any one DynamoDB table that +// you create there. +// +// When you establish an AWS account, the account has initial limits on the +// maximum read capacity units and write capacity units that you can provision +// across all of your DynamoDB tables in a given region. Also, there are per-table +// limits that apply when you create a table there. 
For more information, see +// Limits (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) +// page in the Amazon DynamoDB Developer Guide. +// +// Although you can increase these limits by filing a case at AWS Support Center +// (https://console.aws.amazon.com/support/home#/), obtaining the increase is +// not instantaneous. The DescribeLimits action lets you write code to compare +// the capacity you are currently using to those limits imposed by your account +// so that you have enough time to apply for an increase before you hit a limit. +// +// For example, you could use one of the AWS SDKs to do the following: +// +// Call DescribeLimits for a particular region to obtain your current account +// limits on provisioned capacity there. +// +// Create a variable to hold the aggregate read capacity units provisioned for +// all your tables in that region, and one to hold the aggregate write capacity +// units. Zero them both. +// +// Call ListTables to obtain a list of all your DynamoDB tables. +// +// For each table name listed by ListTables, do the following: +// +// Call DescribeTable with the table name. +// +// Use the data returned by DescribeTable to add the read capacity units and +// write capacity units provisioned for the table itself to your variables. +// +// If the table has one or more global secondary indexes (GSIs), loop over these +// GSIs and add their provisioned capacity values to your variables as well. +// +// Report the account limits for that region returned by DescribeLimits, along +// with the total current provisioned capacity levels you have calculated. +// +// This will let you see whether you are getting close to your account-level +// limits. +// +// The per-table limits apply only when you are creating a new table. They restrict +// the sum of the provisioned capacity of the new table itself and all its global +// secondary indexes. +// +// For existing tables and their GSIs, DynamoDB will not let you increase provisioned +// capacity extremely rapidly, but the only upper limit that applies is that +// the aggregate provisioned capacity over all your tables and GSIs cannot exceed +// either of the per-account limits. +// +// DescribeLimits should only be called periodically. You can expect throttling +// errors if you call it more than once in a minute. +// +// The DescribeLimits Request element has no content. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeLimits for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeLimits +func (c *DynamoDB) DescribeLimits(input *DescribeLimitsInput) (*DescribeLimitsOutput, error) { + req, out := c.DescribeLimitsRequest(input) + return out, req.Send() +} + +// DescribeLimitsWithContext is the same as DescribeLimits with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLimits for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
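+//
+// Illustrative sketch (not part of the generated SDK documentation) compressing
+// the comparison procedure outlined above: read the account limits, then sum the
+// provisioned capacity of every table and its GSIs via ListTables and DescribeTable.
+// Assumes svc := dynamodb.New(session.Must(session.NewSession())); error handling
+// is abbreviated:
+//
+//    limits, err := svc.DescribeLimits(&dynamodb.DescribeLimitsInput{})
+//    if err != nil {
+//        return err
+//    }
+//    var readTotal, writeTotal int64
+//    var pageErr error
+//    err = svc.ListTablesPages(&dynamodb.ListTablesInput{},
+//        func(page *dynamodb.ListTablesOutput, lastPage bool) bool {
+//            for _, name := range page.TableNames {
+//                t, derr := svc.DescribeTable(&dynamodb.DescribeTableInput{TableName: name})
+//                if derr != nil {
+//                    pageErr = derr
+//                    return false // stop paginating on the first error
+//                }
+//                readTotal += aws.Int64Value(t.Table.ProvisionedThroughput.ReadCapacityUnits)
+//                writeTotal += aws.Int64Value(t.Table.ProvisionedThroughput.WriteCapacityUnits)
+//                for _, gsi := range t.Table.GlobalSecondaryIndexes {
+//                    readTotal += aws.Int64Value(gsi.ProvisionedThroughput.ReadCapacityUnits)
+//                    writeTotal += aws.Int64Value(gsi.ProvisionedThroughput.WriteCapacityUnits)
+//                }
+//            }
+//            return true
+//        })
+//    if err == nil {
+//        err = pageErr
+//    }
+//    if err != nil {
+//        return err
+//    }
+//    fmt.Printf("account limits %d RCU / %d WCU, currently provisioned %d / %d\n",
+//        aws.Int64Value(limits.AccountMaxReadCapacityUnits),
+//        aws.Int64Value(limits.AccountMaxWriteCapacityUnits),
+//        readTotal, writeTotal)
+//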
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeLimitsWithContext(ctx aws.Context, input *DescribeLimitsInput, opts ...request.Option) (*DescribeLimitsOutput, error) { + req, out := c.DescribeLimitsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeTable = "DescribeTable" + +// DescribeTableRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTable for more information on using the DescribeTable +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTableRequest method. +// req, resp := client.DescribeTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeTable +func (c *DynamoDB) DescribeTableRequest(input *DescribeTableInput) (req *request.Request, output *DescribeTableOutput) { + op := &request.Operation{ + Name: opDescribeTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeTableInput{} + } + + output = &DescribeTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeTable API operation for Amazon DynamoDB. +// +// Returns information about the table, including the current status of the +// table, when it was created, the primary key schema, and any indexes on the +// table. +// +// If you issue a DescribeTable request immediately after a CreateTable request, +// DynamoDB might return a ResourceNotFoundException. This is because DescribeTable +// uses an eventually consistent query, and the metadata for your table might +// not be available at that moment. Wait for a few seconds, and then try the +// DescribeTable request again. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeTable +func (c *DynamoDB) DescribeTable(input *DescribeTableInput) (*DescribeTableOutput, error) { + req, out := c.DescribeTableRequest(input) + return out, req.Send() +} + +// DescribeTableWithContext is the same as DescribeTable with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTable for details on how to use this API operation. 
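+//
+// Illustrative usage sketch (not part of the generated SDK documentation),
+// assuming svc := dynamodb.New(session.Must(session.NewSession())) and a
+// hypothetical table name. Because DescribeTable is eventually consistent (see
+// above), a ResourceNotFoundException right after CreateTable may just mean the
+// metadata is not visible yet; the WaitUntilTableExists waiter in this package
+// can be used to block until the table is ACTIVE.
+//
+//    out, err := svc.DescribeTable(&dynamodb.DescribeTableInput{
+//        TableName: aws.String("Music"),
+//    })
+//    if err != nil {
+//        return err
+//    }
+//    fmt.Println(aws.StringValue(out.Table.TableStatus)) // e.g. CREATING, UPDATING or ACTIVE
+//    fmt.Println(out.Table.KeySchema)                    // primary key schema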
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeTableWithContext(ctx aws.Context, input *DescribeTableInput, opts ...request.Option) (*DescribeTableOutput, error) { + req, out := c.DescribeTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeTimeToLive = "DescribeTimeToLive" + +// DescribeTimeToLiveRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTimeToLive operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTimeToLive for more information on using the DescribeTimeToLive +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTimeToLiveRequest method. +// req, resp := client.DescribeTimeToLiveRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeTimeToLive +func (c *DynamoDB) DescribeTimeToLiveRequest(input *DescribeTimeToLiveInput) (req *request.Request, output *DescribeTimeToLiveOutput) { + op := &request.Operation{ + Name: opDescribeTimeToLive, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeTimeToLiveInput{} + } + + output = &DescribeTimeToLiveOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeTimeToLive API operation for Amazon DynamoDB. +// +// Gives a description of the Time to Live (TTL) status on the specified table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeTimeToLive for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeTimeToLive +func (c *DynamoDB) DescribeTimeToLive(input *DescribeTimeToLiveInput) (*DescribeTimeToLiveOutput, error) { + req, out := c.DescribeTimeToLiveRequest(input) + return out, req.Send() +} + +// DescribeTimeToLiveWithContext is the same as DescribeTimeToLive with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTimeToLive for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeTimeToLiveWithContext(ctx aws.Context, input *DescribeTimeToLiveInput, opts ...request.Option) (*DescribeTimeToLiveOutput, error) { + req, out := c.DescribeTimeToLiveRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetItem = "GetItem" + +// GetItemRequest generates a "aws/request.Request" representing the +// client's request for the GetItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetItem for more information on using the GetItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetItemRequest method. +// req, resp := client.GetItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/GetItem +func (c *DynamoDB) GetItemRequest(input *GetItemInput) (req *request.Request, output *GetItemOutput) { + op := &request.Operation{ + Name: opGetItem, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetItemInput{} + } + + output = &GetItemOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetItem API operation for Amazon DynamoDB. +// +// The GetItem operation returns a set of attributes for the item with the given +// primary key. If there is no matching item, GetItem does not return any data +// and there will be no Item element in the response. +// +// GetItem provides an eventually consistent read by default. If your application +// requires a strongly consistent read, set ConsistentRead to true. Although +// a strongly consistent read might take more time than an eventually consistent +// read, it always returns the last updated value. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation GetItem for usage and error information. +// +// Returned Error Codes: +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. 
+// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/GetItem +func (c *DynamoDB) GetItem(input *GetItemInput) (*GetItemOutput, error) { + req, out := c.GetItemRequest(input) + return out, req.Send() +} + +// GetItemWithContext is the same as GetItem with the addition of +// the ability to pass a context and additional request options. +// +// See GetItem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) GetItemWithContext(ctx aws.Context, input *GetItemInput, opts ...request.Option) (*GetItemOutput, error) { + req, out := c.GetItemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListBackups = "ListBackups" + +// ListBackupsRequest generates a "aws/request.Request" representing the +// client's request for the ListBackups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListBackups for more information on using the ListBackups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListBackupsRequest method. +// req, resp := client.ListBackupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListBackups +func (c *DynamoDB) ListBackupsRequest(input *ListBackupsInput) (req *request.Request, output *ListBackupsOutput) { + op := &request.Operation{ + Name: opListBackups, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListBackupsInput{} + } + + output = &ListBackupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListBackups API operation for Amazon DynamoDB. +// +// List backups associated with an AWS account. To list backups for a given +// table, specify TableName. ListBackups returns a paginated list of results +// with at most 1MB worth of items in a page. You can also specify a limit for +// the maximum number of entries to be returned in a page. +// +// In the request, start time is inclusive but end time is exclusive. Note that +// these limits are for the time at which the original backup was requested. +// +// You can call ListBackups a maximum of 5 times per second. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation ListBackups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
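+//
+// Illustrative usage sketch for the GetItem operation defined above (not part of
+// the generated SDK documentation). Assumes svc := dynamodb.New(session.Must(session.NewSession()));
+// the table, key names and values are hypothetical:
+//
+//    out, err := svc.GetItem(&dynamodb.GetItemInput{
+//        TableName: aws.String("Music"),
+//        Key: map[string]*dynamodb.AttributeValue{
+//            "Artist":    {S: aws.String("No One You Know")},
+//            "SongTitle": {S: aws.String("Call Me Today")},
+//        },
+//        // Request a strongly consistent read instead of the eventually
+//        // consistent default, as described above.
+//        ConsistentRead: aws.Bool(true),
+//    })
+//    if err != nil {
+//        return err
+//    }
+//    if len(out.Item) == 0 {
+//        fmt.Println("no matching item") // no Item element is returned when nothing matches
+//    }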
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListBackups +func (c *DynamoDB) ListBackups(input *ListBackupsInput) (*ListBackupsOutput, error) { + req, out := c.ListBackupsRequest(input) + return out, req.Send() +} + +// ListBackupsWithContext is the same as ListBackups with the addition of +// the ability to pass a context and additional request options. +// +// See ListBackups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ListBackupsWithContext(ctx aws.Context, input *ListBackupsInput, opts ...request.Option) (*ListBackupsOutput, error) { + req, out := c.ListBackupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListGlobalTables = "ListGlobalTables" + +// ListGlobalTablesRequest generates a "aws/request.Request" representing the +// client's request for the ListGlobalTables operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGlobalTables for more information on using the ListGlobalTables +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGlobalTablesRequest method. +// req, resp := client.ListGlobalTablesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListGlobalTables +func (c *DynamoDB) ListGlobalTablesRequest(input *ListGlobalTablesInput) (req *request.Request, output *ListGlobalTablesOutput) { + op := &request.Operation{ + Name: opListGlobalTables, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListGlobalTablesInput{} + } + + output = &ListGlobalTablesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGlobalTables API operation for Amazon DynamoDB. +// +// Lists all global tables that have a replica in the specified region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation ListGlobalTables for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListGlobalTables +func (c *DynamoDB) ListGlobalTables(input *ListGlobalTablesInput) (*ListGlobalTablesOutput, error) { + req, out := c.ListGlobalTablesRequest(input) + return out, req.Send() +} + +// ListGlobalTablesWithContext is the same as ListGlobalTables with the addition of +// the ability to pass a context and additional request options. +// +// See ListGlobalTables for details on how to use this API operation. 
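+//
+// Illustrative usage sketch for the ListBackups operation defined above (not part
+// of the generated SDK documentation). Assumes svc := dynamodb.New(session.Must(session.NewSession()));
+// the table name is hypothetical:
+//
+//    out, err := svc.ListBackups(&dynamodb.ListBackupsInput{
+//        TableName: aws.String("Music"), // optional: restrict the listing to one table
+//        Limit:     aws.Int64(25),       // maximum number of entries per page
+//    })
+//    if err != nil {
+//        return err
+//    }
+//    for _, b := range out.BackupSummaries {
+//        fmt.Println(aws.StringValue(b.BackupArn), aws.StringValue(b.BackupName))
+//    }
+//    // A non-nil out.LastEvaluatedBackupArn is passed back as ExclusiveStartBackupArn
+//    // to request the next page.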
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ListGlobalTablesWithContext(ctx aws.Context, input *ListGlobalTablesInput, opts ...request.Option) (*ListGlobalTablesOutput, error) { + req, out := c.ListGlobalTablesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListTables = "ListTables" + +// ListTablesRequest generates a "aws/request.Request" representing the +// client's request for the ListTables operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTables for more information on using the ListTables +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTablesRequest method. +// req, resp := client.ListTablesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListTables +func (c *DynamoDB) ListTablesRequest(input *ListTablesInput) (req *request.Request, output *ListTablesOutput) { + op := &request.Operation{ + Name: opListTables, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"ExclusiveStartTableName"}, + OutputTokens: []string{"LastEvaluatedTableName"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListTablesInput{} + } + + output = &ListTablesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTables API operation for Amazon DynamoDB. +// +// Returns an array of table names associated with the current account and endpoint. +// The output from ListTables is paginated, with each page returning a maximum +// of 100 table names. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation ListTables for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListTables +func (c *DynamoDB) ListTables(input *ListTablesInput) (*ListTablesOutput, error) { + req, out := c.ListTablesRequest(input) + return out, req.Send() +} + +// ListTablesWithContext is the same as ListTables with the addition of +// the ability to pass a context and additional request options. +// +// See ListTables for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
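+//
+// Illustrative sketch (not part of the generated SDK documentation) of manual
+// pagination with the ExclusiveStartTableName / LastEvaluatedTableName tokens
+// named above; the ListTablesPages helper below performs this loop for you.
+// Assumes svc := dynamodb.New(session.Must(session.NewSession())):
+//
+//    input := &dynamodb.ListTablesInput{Limit: aws.Int64(100)}
+//    for {
+//        page, err := svc.ListTables(input)
+//        if err != nil {
+//            return err
+//        }
+//        for _, name := range page.TableNames {
+//            fmt.Println(aws.StringValue(name))
+//        }
+//        if page.LastEvaluatedTableName == nil {
+//            break // no more pages
+//        }
+//        input.ExclusiveStartTableName = page.LastEvaluatedTableName
+//    }
+//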
+func (c *DynamoDB) ListTablesWithContext(ctx aws.Context, input *ListTablesInput, opts ...request.Option) (*ListTablesOutput, error) { + req, out := c.ListTablesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListTablesPages iterates over the pages of a ListTables operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListTables method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListTables operation. +// pageNum := 0 +// err := client.ListTablesPages(params, +// func(page *ListTablesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DynamoDB) ListTablesPages(input *ListTablesInput, fn func(*ListTablesOutput, bool) bool) error { + return c.ListTablesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListTablesPagesWithContext same as ListTablesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ListTablesPagesWithContext(ctx aws.Context, input *ListTablesInput, fn func(*ListTablesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListTablesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListTablesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListTablesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListTagsOfResource = "ListTagsOfResource" + +// ListTagsOfResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsOfResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagsOfResource for more information on using the ListTagsOfResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsOfResourceRequest method. 
+// req, resp := client.ListTagsOfResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListTagsOfResource +func (c *DynamoDB) ListTagsOfResourceRequest(input *ListTagsOfResourceInput) (req *request.Request, output *ListTagsOfResourceOutput) { + op := &request.Operation{ + Name: opListTagsOfResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsOfResourceInput{} + } + + output = &ListTagsOfResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagsOfResource API operation for Amazon DynamoDB. +// +// List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource +// up to 10 times per second, per account. +// +// For an overview on tagging DynamoDB resources, see Tagging for DynamoDB (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tagging.html) +// in the Amazon DynamoDB Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation ListTagsOfResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/ListTagsOfResource +func (c *DynamoDB) ListTagsOfResource(input *ListTagsOfResourceInput) (*ListTagsOfResourceOutput, error) { + req, out := c.ListTagsOfResourceRequest(input) + return out, req.Send() +} + +// ListTagsOfResourceWithContext is the same as ListTagsOfResource with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsOfResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ListTagsOfResourceWithContext(ctx aws.Context, input *ListTagsOfResourceInput, opts ...request.Option) (*ListTagsOfResourceOutput, error) { + req, out := c.ListTagsOfResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutItem = "PutItem" + +// PutItemRequest generates a "aws/request.Request" representing the +// client's request for the PutItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutItem for more information on using the PutItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutItemRequest method. 
+// req, resp := client.PutItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/PutItem +func (c *DynamoDB) PutItemRequest(input *PutItemInput) (req *request.Request, output *PutItemOutput) { + op := &request.Operation{ + Name: opPutItem, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutItemInput{} + } + + output = &PutItemOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutItem API operation for Amazon DynamoDB. +// +// Creates a new item, or replaces an old item with a new item. If an item that +// has the same primary key as the new item already exists in the specified +// table, the new item completely replaces the existing item. You can perform +// a conditional put operation (add a new item if one with the specified primary +// key doesn't exist), or replace an existing item if it has certain attribute +// values. You can return the item's attribute values in the same operation, +// using the ReturnValues parameter. +// +// This topic provides general information about the PutItem API. +// +// For information on how to call the PutItem API using the AWS SDK in specific +// languages, see the following: +// +// PutItem in the AWS Command Line Interface (http://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for .NET (http://docs.aws.amazon.com/goto/DotNetSDKV3/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for C++ (http://docs.aws.amazon.com/goto/SdkForCpp/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for Go (http://docs.aws.amazon.com/goto/SdkForGoV1/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for Java (http://docs.aws.amazon.com/goto/SdkForJava/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for JavaScript (http://docs.aws.amazon.com/goto/AWSJavaScriptSDK/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for PHP V3 (http://docs.aws.amazon.com/goto/SdkForPHPV3/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for Python (http://docs.aws.amazon.com/goto/boto3/dynamodb-2012-08-10/PutItem) +// +// PutItem in the AWS SDK for Ruby V2 (http://docs.aws.amazon.com/goto/SdkForRubyV2/dynamodb-2012-08-10/PutItem) +// +// When you add an item, the primary key attribute(s) are the only required +// attributes. Attribute values cannot be null. String and Binary type attributes +// must have lengths greater than zero. Set type attributes cannot be empty. +// Requests with empty values will be rejected with a ValidationException exception. +// +// To prevent a new item from replacing an existing item, use a conditional +// expression that contains the attribute_not_exists function with the name +// of the attribute being used as the partition key for the table. Since every +// record must contain that attribute, the attribute_not_exists function will +// only succeed if no matching item exists. +// +// For more information about PutItem, see Working with Items (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html) +// in the Amazon DynamoDB Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation PutItem for usage and error information. 
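+//
+// Illustrative sketch (not part of the generated SDK documentation) of the
+// conditional put described above, using attribute_not_exists on the partition
+// key attribute. Assumes svc := dynamodb.New(session.Must(session.NewSession())),
+// imports of the aws, awserr and dynamodb packages, and hypothetical table,
+// attribute and item values:
+//
+//    _, err := svc.PutItem(&dynamodb.PutItemInput{
+//        TableName: aws.String("Music"),
+//        Item: map[string]*dynamodb.AttributeValue{
+//            "Artist":    {S: aws.String("No One You Know")},
+//            "SongTitle": {S: aws.String("Call Me Today")},
+//            "Year":      {N: aws.String("2015")}, // numbers are sent as strings
+//        },
+//        // Only write if no item with this primary key already exists.
+//        ConditionExpression: aws.String("attribute_not_exists(Artist)"),
+//    })
+//    if err != nil {
+//        if aerr, ok := err.(awserr.Error); ok && aerr.Code() == dynamodb.ErrCodeConditionalCheckFailedException {
+//            fmt.Println("item already exists; the existing item was left untouched")
+//        }
+//        return err
+//    }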
+// +// Returned Error Codes: +// * ErrCodeConditionalCheckFailedException "ConditionalCheckFailedException" +// A condition specified in the operation could not be evaluated. +// +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeItemCollectionSizeLimitExceededException "ItemCollectionSizeLimitExceededException" +// An item collection is too large. This exception is only returned for tables +// that have one or more local secondary indexes. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/PutItem +func (c *DynamoDB) PutItem(input *PutItemInput) (*PutItemOutput, error) { + req, out := c.PutItemRequest(input) + return out, req.Send() +} + +// PutItemWithContext is the same as PutItem with the addition of +// the ability to pass a context and additional request options. +// +// See PutItem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) PutItemWithContext(ctx aws.Context, input *PutItemInput, opts ...request.Option) (*PutItemOutput, error) { + req, out := c.PutItemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opQuery = "Query" + +// QueryRequest generates a "aws/request.Request" representing the +// client's request for the Query operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See Query for more information on using the Query +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the QueryRequest method. 
+// req, resp := client.QueryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Query +func (c *DynamoDB) QueryRequest(input *QueryInput) (req *request.Request, output *QueryOutput) { + op := &request.Operation{ + Name: opQuery, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"ExclusiveStartKey"}, + OutputTokens: []string{"LastEvaluatedKey"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &QueryInput{} + } + + output = &QueryOutput{} + req = c.newRequest(op, input, output) + return +} + +// Query API operation for Amazon DynamoDB. +// +// The Query operation finds items based on primary key values. You can query +// any table or secondary index that has a composite primary key (a partition +// key and a sort key). +// +// Use the KeyConditionExpression parameter to provide a specific value for +// the partition key. The Query operation will return all of the items from +// the table or index with that partition key value. You can optionally narrow +// the scope of the Query operation by specifying a sort key value and a comparison +// operator in KeyConditionExpression. To further refine the Query results, +// you can optionally provide a FilterExpression. A FilterExpression determines +// which items within the results should be returned to you. All of the other +// results are discarded. +// +// A Query operation always returns a result set. If no matching items are found, +// the result set will be empty. Queries that do not return results consume +// the minimum number of read capacity units for that type of read operation. +// +// DynamoDB calculates the number of read capacity units consumed based on item +// size, not on the amount of data that is returned to an application. The number +// of capacity units consumed will be the same whether you request all of the +// attributes (the default behavior) or just some of them (using a projection +// expression). The number will also be the same whether or not you use a FilterExpression. +// +// Query results are always sorted by the sort key value. If the data type of +// the sort key is Number, the results are returned in numeric order; otherwise, +// the results are returned in order of UTF-8 bytes. By default, the sort order +// is ascending. To reverse the order, set the ScanIndexForward parameter to +// false. +// +// A single Query operation will read up to the maximum number of items set +// (if using the Limit parameter) or a maximum of 1 MB of data and then apply +// any filtering to the results using FilterExpression. If LastEvaluatedKey +// is present in the response, you will need to paginate the result set. For +// more information, see Paginating the Results (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html#Query.Pagination) +// in the Amazon DynamoDB Developer Guide. +// +// FilterExpression is applied after a Query finishes, but before the results +// are returned. A FilterExpression cannot contain partition key or sort key +// attributes. You need to specify those attributes in the KeyConditionExpression. +// +// A Query operation can return an empty result set and a LastEvaluatedKey if +// all the items read for the page of results are filtered out. +// +// You can query a table, a local secondary index, or a global secondary index. 
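+//
+// Illustrative sketch for the Query operation described above (not part of the
+// generated SDK documentation). It combines a key condition, a filter expression
+// (applied after the key condition, as noted above), descending sort order, and
+// pagination via the QueryPages helper defined below. Assumes
+// svc := dynamodb.New(session.Must(session.NewSession())); the table and
+// attribute names and values are hypothetical:
+//
+//    input := &dynamodb.QueryInput{
+//        TableName:              aws.String("Music"),
+//        KeyConditionExpression: aws.String("Artist = :a AND begins_with(SongTitle, :t)"),
+//        FilterExpression:       aws.String("#y >= :y"),
+//        ExpressionAttributeNames: map[string]*string{
+//            "#y": aws.String("Year"), // placeholder aliasing the Year attribute
+//        },
+//        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
+//            ":a": {S: aws.String("No One You Know")},
+//            ":t": {S: aws.String("Call")},
+//            ":y": {N: aws.String("2010")},
+//        },
+//        ScanIndexForward: aws.Bool(false), // descending order by sort key
+//    }
+//    err := svc.QueryPages(input, func(page *dynamodb.QueryOutput, lastPage bool) bool {
+//        for _, item := range page.Items {
+//            fmt.Println(item)
+//        }
+//        return true // keep going while LastEvaluatedKey is present
+//    })
+//    if err != nil {
+//        return err
+//    }
+//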
+// For a query on a table or on a local secondary index, you can set the ConsistentRead +// parameter to true and obtain a strongly consistent result. Global secondary +// indexes support eventually consistent reads only, so do not specify ConsistentRead +// when querying a global secondary index. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation Query for usage and error information. +// +// Returned Error Codes: +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Query +func (c *DynamoDB) Query(input *QueryInput) (*QueryOutput, error) { + req, out := c.QueryRequest(input) + return out, req.Send() +} + +// QueryWithContext is the same as Query with the addition of +// the ability to pass a context and additional request options. +// +// See Query for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) QueryWithContext(ctx aws.Context, input *QueryInput, opts ...request.Option) (*QueryOutput, error) { + req, out := c.QueryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// QueryPages iterates over the pages of a Query operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See Query method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a Query operation. +// pageNum := 0 +// err := client.QueryPages(params, +// func(page *QueryOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DynamoDB) QueryPages(input *QueryInput, fn func(*QueryOutput, bool) bool) error { + return c.QueryPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// QueryPagesWithContext same as QueryPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) QueryPagesWithContext(ctx aws.Context, input *QueryInput, fn func(*QueryOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *QueryInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.QueryRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*QueryOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opRestoreTableFromBackup = "RestoreTableFromBackup" + +// RestoreTableFromBackupRequest generates a "aws/request.Request" representing the +// client's request for the RestoreTableFromBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreTableFromBackup for more information on using the RestoreTableFromBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreTableFromBackupRequest method. +// req, resp := client.RestoreTableFromBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/RestoreTableFromBackup +func (c *DynamoDB) RestoreTableFromBackupRequest(input *RestoreTableFromBackupInput) (req *request.Request, output *RestoreTableFromBackupOutput) { + op := &request.Operation{ + Name: opRestoreTableFromBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreTableFromBackupInput{} + } + + output = &RestoreTableFromBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreTableFromBackup API operation for Amazon DynamoDB. +// +// Creates a new table from an existing backup. Any number of users can execute +// up to 4 concurrent restores (any type of restore) in a given account. +// +// You can call RestoreTableFromBackup at a maximum rate of 10 times per second. +// +// You must manually set up the following on the restored table: +// +// * Auto scaling policies +// +// * IAM policies +// +// * Cloudwatch metrics and alarms +// +// * Tags +// +// * Stream settings +// +// * Time to Live (TTL) settings +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation RestoreTableFromBackup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTableAlreadyExistsException "TableAlreadyExistsException" +// A target table with the specified name already exists. +// +// * ErrCodeTableInUseException "TableInUseException" +// A target table with the specified name is either being created or deleted. +// +// * ErrCodeBackupNotFoundException "BackupNotFoundException" +// Backup not found for the given BackupARN. 
+// +// * ErrCodeBackupInUseException "BackupInUseException" +// There is another ongoing conflicting backup control plane operation on the +// table. The backups is either being created, deleted or restored to a table. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/RestoreTableFromBackup +func (c *DynamoDB) RestoreTableFromBackup(input *RestoreTableFromBackupInput) (*RestoreTableFromBackupOutput, error) { + req, out := c.RestoreTableFromBackupRequest(input) + return out, req.Send() +} + +// RestoreTableFromBackupWithContext is the same as RestoreTableFromBackup with the addition of +// the ability to pass a context and additional request options. +// +// See RestoreTableFromBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) RestoreTableFromBackupWithContext(ctx aws.Context, input *RestoreTableFromBackupInput, opts ...request.Option) (*RestoreTableFromBackupOutput, error) { + req, out := c.RestoreTableFromBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRestoreTableToPointInTime = "RestoreTableToPointInTime" + +// RestoreTableToPointInTimeRequest generates a "aws/request.Request" representing the +// client's request for the RestoreTableToPointInTime operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreTableToPointInTime for more information on using the RestoreTableToPointInTime +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreTableToPointInTimeRequest method. 
+// req, resp := client.RestoreTableToPointInTimeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/RestoreTableToPointInTime +func (c *DynamoDB) RestoreTableToPointInTimeRequest(input *RestoreTableToPointInTimeInput) (req *request.Request, output *RestoreTableToPointInTimeOutput) { + op := &request.Operation{ + Name: opRestoreTableToPointInTime, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreTableToPointInTimeInput{} + } + + output = &RestoreTableToPointInTimeOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreTableToPointInTime API operation for Amazon DynamoDB. +// +// Restores the specified table to the specified point in time within EarliestRestorableDateTime +// and LatestRestorableDateTime. You can restore your table to any point in +// time during the last 35 days. Any number of users can execute up to 4 concurrent +// restores (any type of restore) in a given account. +// +// When you restore using point in time recovery, DynamoDB restores your table +// data to the state based on the selected date and time (day:hour:minute:second) +// to a new table. +// +// Along with data, the following are also included on the new restored table +// using point in time recovery: +// +// * Global secondary indexes (GSIs) +// +// * Local secondary indexes (LSIs) +// +// * Provisioned read and write capacity +// +// * Encryption settings +// +// All these settings come from the current settings of the source table at +// the time of restore. +// +// You must manually set up the following on the restored table: +// +// * Auto scaling policies +// +// * IAM policies +// +// * Cloudwatch metrics and alarms +// +// * Tags +// +// * Stream settings +// +// * Time to Live (TTL) settings +// +// * Point in time recovery settings +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation RestoreTableToPointInTime for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTableAlreadyExistsException "TableAlreadyExistsException" +// A target table with the specified name already exists. +// +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// * ErrCodeTableInUseException "TableInUseException" +// A target table with the specified name is either being created or deleted. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInvalidRestoreTimeException "InvalidRestoreTimeException" +// An invalid restore time was specified. 
RestoreDateTime must be between EarliestRestorableDateTime +// and LatestRestorableDateTime. +// +// * ErrCodePointInTimeRecoveryUnavailableException "PointInTimeRecoveryUnavailableException" +// Point in time recovery has not yet been enabled for this source table. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/RestoreTableToPointInTime +func (c *DynamoDB) RestoreTableToPointInTime(input *RestoreTableToPointInTimeInput) (*RestoreTableToPointInTimeOutput, error) { + req, out := c.RestoreTableToPointInTimeRequest(input) + return out, req.Send() +} + +// RestoreTableToPointInTimeWithContext is the same as RestoreTableToPointInTime with the addition of +// the ability to pass a context and additional request options. +// +// See RestoreTableToPointInTime for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) RestoreTableToPointInTimeWithContext(ctx aws.Context, input *RestoreTableToPointInTimeInput, opts ...request.Option) (*RestoreTableToPointInTimeOutput, error) { + req, out := c.RestoreTableToPointInTimeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opScan = "Scan" + +// ScanRequest generates a "aws/request.Request" representing the +// client's request for the Scan operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See Scan for more information on using the Scan +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ScanRequest method. +// req, resp := client.ScanRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Scan +func (c *DynamoDB) ScanRequest(input *ScanInput) (req *request.Request, output *ScanOutput) { + op := &request.Operation{ + Name: opScan, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"ExclusiveStartKey"}, + OutputTokens: []string{"LastEvaluatedKey"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &ScanInput{} + } + + output = &ScanOutput{} + req = c.newRequest(op, input, output) + return +} + +// Scan API operation for Amazon DynamoDB. +// +// The Scan operation returns one or more items and item attributes by accessing +// every item in a table or a secondary index. To have DynamoDB return fewer +// items, you can provide a FilterExpression operation. +// +// If the total number of scanned items exceeds the maximum data set size limit +// of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey +// value to continue the scan in a subsequent operation. 
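+//
+// An informal sketch of continuing a Scan by hand with ExclusiveStartKey (the
+// table name is a placeholder; the ScanPages helper further below wraps this loop):
+//
+//    input := &ScanInput{TableName: aws.String("Music")}
+//    for {
+//        out, err := client.Scan(input)
+//        if err != nil {
+//            return err
+//        }
+//        fmt.Println(len(out.Items))
+//        if len(out.LastEvaluatedKey) == 0 {
+//            break // no more pages
+//        }
+//        input.ExclusiveStartKey = out.LastEvaluatedKey
+//    }
+//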
The results also include +// the number of items exceeding the limit. A scan can result in no table data +// meeting the filter criteria. +// +// A single Scan operation will read up to the maximum number of items set (if +// using the Limit parameter) or a maximum of 1 MB of data and then apply any +// filtering to the results using FilterExpression. If LastEvaluatedKey is present +// in the response, you will need to paginate the result set. For more information, +// see Paginating the Results (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.Pagination) +// in the Amazon DynamoDB Developer Guide. +// +// Scan operations proceed sequentially; however, for faster performance on +// a large table or secondary index, applications can request a parallel Scan +// operation by providing the Segment and TotalSegments parameters. For more +// information, see Parallel Scan (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan) +// in the Amazon DynamoDB Developer Guide. +// +// Scan uses eventually consistent reads when accessing the data in a table; +// therefore, the result set might not include the changes to data in the table +// immediately before the operation began. If you need a consistent copy of +// the data, as of the time that the Scan begins, you can set the ConsistentRead +// parameter to true. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation Scan for usage and error information. +// +// Returned Error Codes: +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Scan +func (c *DynamoDB) Scan(input *ScanInput) (*ScanOutput, error) { + req, out := c.ScanRequest(input) + return out, req.Send() +} + +// ScanWithContext is the same as Scan with the addition of +// the ability to pass a context and additional request options. +// +// See Scan for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ScanWithContext(ctx aws.Context, input *ScanInput, opts ...request.Option) (*ScanOutput, error) { + req, out := c.ScanRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// ScanPages iterates over the pages of a Scan operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See Scan method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a Scan operation. +// pageNum := 0 +// err := client.ScanPages(params, +// func(page *ScanOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DynamoDB) ScanPages(input *ScanInput, fn func(*ScanOutput, bool) bool) error { + return c.ScanPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ScanPagesWithContext same as ScanPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ScanPagesWithContext(ctx aws.Context, input *ScanInput, fn func(*ScanOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ScanInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ScanRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ScanOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opTagResource = "TagResource" + +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagResource for more information on using the TagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagResourceRequest method. +// req, resp := client.TagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/TagResource +func (c *DynamoDB) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { + op := &request.Operation{ + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TagResourceInput{} + } + + output = &TagResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// TagResource API operation for Amazon DynamoDB. +// +// Associate a set of tags with an Amazon DynamoDB resource. You can then activate +// these user-defined tags so that they appear on the Billing and Cost Management +// console for cost allocation tracking. You can call TagResource up to 5 times +// per second, per account. 
+// +// For an overview on tagging DynamoDB resources, see Tagging for DynamoDB (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tagging.html) +// in the Amazon DynamoDB Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation TagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/TagResource +func (c *DynamoDB) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + return out, req.Send() +} + +// TagResourceWithContext is the same as TagResource with the addition of +// the ability to pass a context and additional request options. +// +// See TagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagResource = "UntagResource" + +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
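+//
+// A minimal sketch of the TagResource operation documented above and its
+// UntagResource counterpart documented below (the ARN and tag values are
+// placeholders):
+//
+//    arn := aws.String("arn:aws:dynamodb:us-west-2:123456789012:table/Music")
+//    _, err := client.TagResource(&TagResourceInput{
+//        ResourceArn: arn,
+//        Tags: []*Tag{
+//            {Key: aws.String("Environment"), Value: aws.String("test")},
+//        },
+//    })
+//    _, err = client.UntagResource(&UntagResourceInput{
+//        ResourceArn: arn,
+//        TagKeys:     []*string{aws.String("Environment")},
+//    })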
+// +// +// // Example sending a request using the UntagResourceRequest method. +// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UntagResource +func (c *DynamoDB) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UntagResourceInput{} + } + + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UntagResource API operation for Amazon DynamoDB. +// +// Removes the association of tags from an Amazon DynamoDB resource. You can +// call UntagResource up to 5 times per second, per account. +// +// For an overview on tagging DynamoDB resources, see Tagging for DynamoDB (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tagging.html) +// in the Amazon DynamoDB Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UntagResource +func (c *DynamoDB) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() +} + +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateContinuousBackups = "UpdateContinuousBackups" + +// UpdateContinuousBackupsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateContinuousBackups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateContinuousBackups for more information on using the UpdateContinuousBackups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateContinuousBackupsRequest method. +// req, resp := client.UpdateContinuousBackupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateContinuousBackups +func (c *DynamoDB) UpdateContinuousBackupsRequest(input *UpdateContinuousBackupsInput) (req *request.Request, output *UpdateContinuousBackupsOutput) { + op := &request.Operation{ + Name: opUpdateContinuousBackups, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateContinuousBackupsInput{} + } + + output = &UpdateContinuousBackupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateContinuousBackups API operation for Amazon DynamoDB. +// +// UpdateContinuousBackups enables or disables point in time recovery for the +// specified table. A successful UpdateContinuousBackups call returns the current +// ContinuousBackupsDescription. Continuous backups are ENABLED on all tables +// at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus +// will be set to ENABLED. +// +// Once continuous backups and point in time recovery are enabled, you can restore +// to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime. +// +// LatestRestorableDateTime is typically 5 minutes before the current time. +// You can restore your table to any point in time during the last 35 days.. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateContinuousBackups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// * ErrCodeContinuousBackupsUnavailableException "ContinuousBackupsUnavailableException" +// Backups have not yet been enabled for this table. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
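+//
+// A minimal sketch of enabling point in time recovery with this operation (the
+// table name is a placeholder):
+//
+//    out, err := client.UpdateContinuousBackups(&UpdateContinuousBackupsInput{
+//        TableName: aws.String("Music"),
+//        PointInTimeRecoverySpecification: &PointInTimeRecoverySpecification{
+//            PointInTimeRecoveryEnabled: aws.Bool(true),
+//        },
+//    })
+//    if err == nil {
+//        fmt.Println(out.ContinuousBackupsDescription)
+//    }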
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateContinuousBackups +func (c *DynamoDB) UpdateContinuousBackups(input *UpdateContinuousBackupsInput) (*UpdateContinuousBackupsOutput, error) { + req, out := c.UpdateContinuousBackupsRequest(input) + return out, req.Send() +} + +// UpdateContinuousBackupsWithContext is the same as UpdateContinuousBackups with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateContinuousBackups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateContinuousBackupsWithContext(ctx aws.Context, input *UpdateContinuousBackupsInput, opts ...request.Option) (*UpdateContinuousBackupsOutput, error) { + req, out := c.UpdateContinuousBackupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGlobalTable = "UpdateGlobalTable" + +// UpdateGlobalTableRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGlobalTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGlobalTable for more information on using the UpdateGlobalTable +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGlobalTableRequest method. +// req, resp := client.UpdateGlobalTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTable +func (c *DynamoDB) UpdateGlobalTableRequest(input *UpdateGlobalTableInput) (req *request.Request, output *UpdateGlobalTableOutput) { + op := &request.Operation{ + Name: opUpdateGlobalTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateGlobalTableInput{} + } + + output = &UpdateGlobalTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGlobalTable API operation for Amazon DynamoDB. +// +// Adds or removes replicas in the specified global table. The global table +// must already exist to be able to use this operation. Any replica to be added +// must be empty, must have the same name as the global table, must have the +// same key schema, and must have DynamoDB Streams enabled and must have same +// provisioned and maximum write capacity units. +// +// Although you can use UpdateGlobalTable to add replicas and remove replicas +// in a single request, for simplicity we recommend that you issue separate +// requests for adding or removing replicas. +// +// If global secondary indexes are specified, then the following conditions +// must also be met: +// +// * The global secondary indexes must have the same name. +// +// * The global secondary indexes must have the same hash key and sort key +// (if present). 
+// +// * The global secondary indexes must have the same provisioned and maximum +// write capacity units. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateGlobalTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeGlobalTableNotFoundException "GlobalTableNotFoundException" +// The specified global table does not exist. +// +// * ErrCodeReplicaAlreadyExistsException "ReplicaAlreadyExistsException" +// The specified replica is already part of the global table. +// +// * ErrCodeReplicaNotFoundException "ReplicaNotFoundException" +// The specified replica is no longer part of the global table. +// +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTable +func (c *DynamoDB) UpdateGlobalTable(input *UpdateGlobalTableInput) (*UpdateGlobalTableOutput, error) { + req, out := c.UpdateGlobalTableRequest(input) + return out, req.Send() +} + +// UpdateGlobalTableWithContext is the same as UpdateGlobalTable with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGlobalTable for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateGlobalTableWithContext(ctx aws.Context, input *UpdateGlobalTableInput, opts ...request.Option) (*UpdateGlobalTableOutput, error) { + req, out := c.UpdateGlobalTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGlobalTableSettings = "UpdateGlobalTableSettings" + +// UpdateGlobalTableSettingsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGlobalTableSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGlobalTableSettings for more information on using the UpdateGlobalTableSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGlobalTableSettingsRequest method. 
+// req, resp := client.UpdateGlobalTableSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTableSettings +func (c *DynamoDB) UpdateGlobalTableSettingsRequest(input *UpdateGlobalTableSettingsInput) (req *request.Request, output *UpdateGlobalTableSettingsOutput) { + op := &request.Operation{ + Name: opUpdateGlobalTableSettings, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateGlobalTableSettingsInput{} + } + + output = &UpdateGlobalTableSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGlobalTableSettings API operation for Amazon DynamoDB. +// +// Updates settings for a global table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateGlobalTableSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeGlobalTableNotFoundException "GlobalTableNotFoundException" +// The specified global table does not exist. +// +// * ErrCodeReplicaNotFoundException "ReplicaNotFoundException" +// The specified replica is no longer part of the global table. +// +// * ErrCodeIndexNotFoundException "IndexNotFoundException" +// The operation tried to access a nonexistent index. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTableSettings +func (c *DynamoDB) UpdateGlobalTableSettings(input *UpdateGlobalTableSettingsInput) (*UpdateGlobalTableSettingsOutput, error) { + req, out := c.UpdateGlobalTableSettingsRequest(input) + return out, req.Send() +} + +// UpdateGlobalTableSettingsWithContext is the same as UpdateGlobalTableSettings with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGlobalTableSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
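+//
+// An illustrative sketch of the UpdateGlobalTable operation documented earlier,
+// adding one replica Region to an existing global table (the table name and
+// Region are placeholders):
+//
+//    _, err := client.UpdateGlobalTable(&UpdateGlobalTableInput{
+//        GlobalTableName: aws.String("Music"),
+//        ReplicaUpdates: []*ReplicaUpdate{
+//            {Create: &CreateReplicaAction{RegionName: aws.String("eu-west-1")}},
+//        },
+//    })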
+func (c *DynamoDB) UpdateGlobalTableSettingsWithContext(ctx aws.Context, input *UpdateGlobalTableSettingsInput, opts ...request.Option) (*UpdateGlobalTableSettingsOutput, error) { + req, out := c.UpdateGlobalTableSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateItem = "UpdateItem" + +// UpdateItemRequest generates a "aws/request.Request" representing the +// client's request for the UpdateItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateItem for more information on using the UpdateItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateItemRequest method. +// req, resp := client.UpdateItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateItem +func (c *DynamoDB) UpdateItemRequest(input *UpdateItemInput) (req *request.Request, output *UpdateItemOutput) { + op := &request.Operation{ + Name: opUpdateItem, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateItemInput{} + } + + output = &UpdateItemOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateItem API operation for Amazon DynamoDB. +// +// Edits an existing item's attributes, or adds a new item to the table if it +// does not already exist. You can put, delete, or add attribute values. You +// can also perform a conditional update on an existing item (insert a new attribute +// name-value pair if it doesn't exist, or replace an existing name-value pair +// if it has certain expected attribute values). +// +// You can also return the item's attribute values in the same UpdateItem operation +// using the ReturnValues parameter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateItem for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConditionalCheckFailedException "ConditionalCheckFailedException" +// A condition specified in the operation could not be evaluated. +// +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. 
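+//
+// A minimal sketch of the UpdateItem operation documented above, incrementing a
+// counter with an update expression (the table, key, and attribute names are
+// placeholders):
+//
+//    out, err := client.UpdateItem(&UpdateItemInput{
+//        TableName: aws.String("Music"),
+//        Key: map[string]*AttributeValue{
+//            "Artist":    {S: aws.String("Acme Band")},
+//            "SongTitle": {S: aws.String("Happy Day")},
+//        },
+//        UpdateExpression: aws.String("SET Plays = if_not_exists(Plays, :zero) + :one"),
+//        ExpressionAttributeValues: map[string]*AttributeValue{
+//            ":zero": {N: aws.String("0")},
+//            ":one":  {N: aws.String("1")},
+//        },
+//        ReturnValues: aws.String("UPDATED_NEW"),
+//    })
+//    if err == nil {
+//        fmt.Println(out.Attributes) // the updated attribute values
+//    }
+//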
The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeItemCollectionSizeLimitExceededException "ItemCollectionSizeLimitExceededException" +// An item collection is too large. This exception is only returned for tables +// that have one or more local secondary indexes. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateItem +func (c *DynamoDB) UpdateItem(input *UpdateItemInput) (*UpdateItemOutput, error) { + req, out := c.UpdateItemRequest(input) + return out, req.Send() +} + +// UpdateItemWithContext is the same as UpdateItem with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateItem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateItemWithContext(ctx aws.Context, input *UpdateItemInput, opts ...request.Option) (*UpdateItemOutput, error) { + req, out := c.UpdateItemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateTable = "UpdateTable" + +// UpdateTableRequest generates a "aws/request.Request" representing the +// client's request for the UpdateTable operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateTable for more information on using the UpdateTable +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateTableRequest method. +// req, resp := client.UpdateTableRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateTable +func (c *DynamoDB) UpdateTableRequest(input *UpdateTableInput) (req *request.Request, output *UpdateTableOutput) { + op := &request.Operation{ + Name: opUpdateTable, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateTableInput{} + } + + output = &UpdateTableOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateTable API operation for Amazon DynamoDB. +// +// Modifies the provisioned throughput settings, global secondary indexes, or +// DynamoDB Streams settings for a given table. +// +// You can only perform one of the following operations at once: +// +// * Modify the provisioned throughput settings of the table. +// +// * Enable or disable Streams on the table. +// +// * Remove a global secondary index from the table. +// +// * Create a new global secondary index on the table. Once the index begins +// backfilling, you can use UpdateTable to perform other operations. +// +// UpdateTable is an asynchronous operation; while it is executing, the table +// status changes from ACTIVE to UPDATING. 
While it is UPDATING, you cannot +// issue another UpdateTable request. When the table returns to the ACTIVE state, +// the UpdateTable operation is complete. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateTable for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateTable +func (c *DynamoDB) UpdateTable(input *UpdateTableInput) (*UpdateTableOutput, error) { + req, out := c.UpdateTableRequest(input) + return out, req.Send() +} + +// UpdateTableWithContext is the same as UpdateTable with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateTable for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateTableWithContext(ctx aws.Context, input *UpdateTableInput, opts ...request.Option) (*UpdateTableOutput, error) { + req, out := c.UpdateTableRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateTimeToLive = "UpdateTimeToLive" + +// UpdateTimeToLiveRequest generates a "aws/request.Request" representing the +// client's request for the UpdateTimeToLive operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateTimeToLive for more information on using the UpdateTimeToLive +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
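+//
+// An illustrative sketch of the UpdateTable operation documented above, changing
+// a table's provisioned throughput (the table name and capacity values are
+// placeholders):
+//
+//    _, err := client.UpdateTable(&UpdateTableInput{
+//        TableName: aws.String("Music"),
+//        ProvisionedThroughput: &ProvisionedThroughput{
+//            ReadCapacityUnits:  aws.Int64(10),
+//            WriteCapacityUnits: aws.Int64(5),
+//        },
+//    })
+//    // The table transitions to UPDATING; wait for it to return to ACTIVE
+//    // before issuing another UpdateTable request.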
+// +// +// // Example sending a request using the UpdateTimeToLiveRequest method. +// req, resp := client.UpdateTimeToLiveRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateTimeToLive +func (c *DynamoDB) UpdateTimeToLiveRequest(input *UpdateTimeToLiveInput) (req *request.Request, output *UpdateTimeToLiveOutput) { + op := &request.Operation{ + Name: opUpdateTimeToLive, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateTimeToLiveInput{} + } + + output = &UpdateTimeToLiveOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateTimeToLive API operation for Amazon DynamoDB. +// +// The UpdateTimeToLive method will enable or disable TTL for the specified +// table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification; +// it may take up to one hour for the change to fully process. Any additional +// UpdateTimeToLive calls for the same table during this one hour duration result +// in a ValidationException. +// +// TTL compares the current time in epoch time format to the time stored in +// the TTL attribute of an item. If the epoch time value stored in the attribute +// is less than the current time, the item is marked as expired and subsequently +// deleted. +// +// The epoch time format is the number of seconds elapsed since 12:00:00 AM +// January 1st, 1970 UTC. +// +// DynamoDB deletes expired items on a best-effort basis to ensure availability +// of throughput for other data operations. +// +// DynamoDB typically deletes expired items within two days of expiration. The +// exact duration within which an item gets deleted after expiration is specific +// to the nature of the workload. Items that have expired and not been deleted +// will still show up in reads, queries, and scans. +// +// As items are deleted, they are removed from any Local Secondary Index and +// Global Secondary Index immediately in the same eventually consistent way +// as a standard delete operation. +// +// For more information, see Time To Live (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) +// in the Amazon DynamoDB Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateTimeToLive for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Up to 50 CreateBackup operations are allowed per second, per account. There +// is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. 
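+//
+// A minimal sketch of the UpdateTimeToLive operation documented above (the table
+// and attribute names are placeholders; the TTL attribute holds an epoch-seconds
+// Number value):
+//
+//    _, err := client.UpdateTimeToLive(&UpdateTimeToLiveInput{
+//        TableName: aws.String("Music"),
+//        TimeToLiveSpecification: &TimeToLiveSpecification{
+//            AttributeName: aws.String("ExpiresAt"),
+//            Enabled:       aws.Bool(true),
+//        },
+//    })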
+// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateTimeToLive +func (c *DynamoDB) UpdateTimeToLive(input *UpdateTimeToLiveInput) (*UpdateTimeToLiveOutput, error) { + req, out := c.UpdateTimeToLiveRequest(input) + return out, req.Send() +} + +// UpdateTimeToLiveWithContext is the same as UpdateTimeToLive with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateTimeToLive for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateTimeToLiveWithContext(ctx aws.Context, input *UpdateTimeToLiveInput, opts ...request.Option) (*UpdateTimeToLiveOutput, error) { + req, out := c.UpdateTimeToLiveRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Represents an attribute for describing the key schema for the table and indexes. +type AttributeDefinition struct { + _ struct{} `type:"structure"` + + // A name for the attribute. + // + // AttributeName is a required field + AttributeName *string `min:"1" type:"string" required:"true"` + + // The data type for the attribute, where: + // + // * S - the attribute is of type String + // + // * N - the attribute is of type Number + // + // * B - the attribute is of type Binary + // + // AttributeType is a required field + AttributeType *string `type:"string" required:"true" enum:"ScalarAttributeType"` +} + +// String returns the string representation +func (s AttributeDefinition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributeDefinition) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttributeDefinition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttributeDefinition"} + if s.AttributeName == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeName")) + } + if s.AttributeName != nil && len(*s.AttributeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttributeName", 1)) + } + if s.AttributeType == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeName sets the AttributeName field's value. +func (s *AttributeDefinition) SetAttributeName(v string) *AttributeDefinition { + s.AttributeName = &v + return s +} + +// SetAttributeType sets the AttributeType field's value. +func (s *AttributeDefinition) SetAttributeType(v string) *AttributeDefinition { + s.AttributeType = &v + return s +} + +// Represents the data for an attribute. +// +// Each attribute value is described as a name-value pair. The name is the data +// type, and the value is the data itself. 
+// +// For more information, see Data Types (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes) +// in the Amazon DynamoDB Developer Guide. +type AttributeValue struct { + _ struct{} `type:"structure"` + + // An attribute of type Binary. For example: + // + // "B": "dGhpcyB0ZXh0IGlzIGJhc2U2NC1lbmNvZGVk" + // + // B is automatically base64 encoded/decoded by the SDK. + B []byte `type:"blob"` + + // An attribute of type Boolean. For example: + // + // "BOOL": true + BOOL *bool `type:"boolean"` + + // An attribute of type Binary Set. For example: + // + // "BS": ["U3Vubnk=", "UmFpbnk=", "U25vd3k="] + BS [][]byte `type:"list"` + + // An attribute of type List. For example: + // + // "L": ["Cookies", "Coffee", 3.14159] + L []*AttributeValue `type:"list"` + + // An attribute of type Map. For example: + // + // "M": {"Name": {"S": "Joe"}, "Age": {"N": "35"}} + M map[string]*AttributeValue `type:"map"` + + // An attribute of type Number. For example: + // + // "N": "123.45" + // + // Numbers are sent across the network to DynamoDB as strings, to maximize compatibility + // across languages and libraries. However, DynamoDB treats them as number type + // attributes for mathematical operations. + N *string `type:"string"` + + // An attribute of type Number Set. For example: + // + // "NS": ["42.2", "-19", "7.5", "3.14"] + // + // Numbers are sent across the network to DynamoDB as strings, to maximize compatibility + // across languages and libraries. However, DynamoDB treats them as number type + // attributes for mathematical operations. + NS []*string `type:"list"` + + // An attribute of type Null. For example: + // + // "NULL": true + NULL *bool `type:"boolean"` + + // An attribute of type String. For example: + // + // "S": "Hello" + S *string `type:"string"` + + // An attribute of type String Set. For example: + // + // "SS": ["Giraffe", "Hippo" ,"Zebra"] + SS []*string `type:"list"` +} + +// String returns the string representation +func (s AttributeValue) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributeValue) GoString() string { + return s.String() +} + +// SetB sets the B field's value. +func (s *AttributeValue) SetB(v []byte) *AttributeValue { + s.B = v + return s +} + +// SetBOOL sets the BOOL field's value. +func (s *AttributeValue) SetBOOL(v bool) *AttributeValue { + s.BOOL = &v + return s +} + +// SetBS sets the BS field's value. +func (s *AttributeValue) SetBS(v [][]byte) *AttributeValue { + s.BS = v + return s +} + +// SetL sets the L field's value. +func (s *AttributeValue) SetL(v []*AttributeValue) *AttributeValue { + s.L = v + return s +} + +// SetM sets the M field's value. +func (s *AttributeValue) SetM(v map[string]*AttributeValue) *AttributeValue { + s.M = v + return s +} + +// SetN sets the N field's value. +func (s *AttributeValue) SetN(v string) *AttributeValue { + s.N = &v + return s +} + +// SetNS sets the NS field's value. +func (s *AttributeValue) SetNS(v []*string) *AttributeValue { + s.NS = v + return s +} + +// SetNULL sets the NULL field's value. +func (s *AttributeValue) SetNULL(v bool) *AttributeValue { + s.NULL = &v + return s +} + +// SetS sets the S field's value. +func (s *AttributeValue) SetS(v string) *AttributeValue { + s.S = &v + return s +} + +// SetSS sets the SS field's value. 
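+//
+// An informal sketch composing the AttributeValue data types documented above
+// (the attribute names and values are placeholders):
+//
+//    item := map[string]*AttributeValue{
+//        "Artist":  {S: aws.String("Acme Band")},
+//        "Plays":   {N: aws.String("1000")},
+//        "OnTour":  {BOOL: aws.Bool(false)},
+//        "Genres":  {SS: []*string{aws.String("Rock"), aws.String("Pop")}},
+//        "Details": {M: map[string]*AttributeValue{"Label": {S: aws.String("Indie")}}},
+//    }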
+func (s *AttributeValue) SetSS(v []*string) *AttributeValue { + s.SS = v + return s +} + +// For the UpdateItem operation, represents the attributes to be modified, the +// action to perform on each, and the new value for each. +// +// You cannot use UpdateItem to update any primary key attributes. Instead, +// you will need to delete the item, and then use PutItem to create a new item +// with new attributes. +// +// Attribute values cannot be null; string and binary type attributes must have +// lengths greater than zero; and set type attributes must not be empty. Requests +// with empty values will be rejected with a ValidationException exception. +type AttributeValueUpdate struct { + _ struct{} `type:"structure"` + + // Specifies how to perform the update. Valid values are PUT (default), DELETE, + // and ADD. The behavior depends on whether the specified primary key already + // exists in the table. + // + // If an item with the specified Key is found in the table: + // + // * PUT - Adds the specified attribute to the item. If the attribute already + // exists, it is replaced by the new value. + // + // * DELETE - If no value is specified, the attribute and its value are removed + // from the item. The data type of the specified value must match the existing + // value's data type. + // + // If a set of values is specified, then those values are subtracted from the + // old set. For example, if the attribute value was the set [a,b,c] and the + // DELETE action specified [a,c], then the final attribute value would be + // [b]. Specifying an empty set is an error. + // + // * ADD - If the attribute does not already exist, then the attribute and + // its values are added to the item. If the attribute does exist, then the + // behavior of ADD depends on the data type of the attribute: + // + // If the existing attribute is a number, and if Value is also a number, then + // the Value is mathematically added to the existing attribute. If Value + // is a negative number, then it is subtracted from the existing attribute. + // + // If you use ADD to increment or decrement a number value for an item that + // doesn't exist before the update, DynamoDB uses 0 as the initial value. + // + // In addition, if you use ADD to update an existing item, and intend to increment + // or decrement an attribute value which does not yet exist, DynamoDB uses + // 0 as the initial value. For example, suppose that the item you want to + // update does not yet have an attribute named itemcount, but you decide + // to ADD the number 3 to this attribute anyway, even though it currently + // does not exist. DynamoDB will create the itemcount attribute, set its + // initial value to 0, and finally add 3 to it. The result will be a new + // itemcount attribute in the item, with a value of 3. + // + // If the existing data type is a set, and if the Value is also a set, then + // the Value is added to the existing set. (This is a set operation, not + // mathematical addition.) For example, if the attribute value was the set + // [1,2], and the ADD action specified [3], then the final attribute value + // would be [1,2,3]. An error occurs if an Add action is specified for a + // set attribute and the attribute type specified does not match the existing + // set type. + // + // Both sets must have the same primitive data type. For example, if the existing + // data type is a set of strings, the Value must also be a set of strings. + // The same holds true for number sets and binary sets. 
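+ //
+ // Editor's illustration (not generated SDK text): the itemcount example above
+ // could be expressed with this type through the legacy AttributeUpdates
+ // parameter of UpdateItem, using hypothetical table and key values:
+ //
+ //    update := &dynamodb.AttributeValueUpdate{
+ //        Action: aws.String(dynamodb.AttributeActionAdd),
+ //        Value:  &dynamodb.AttributeValue{N: aws.String("3")},
+ //    }
+ //    input := &dynamodb.UpdateItemInput{
+ //        TableName:        aws.String("Orders"),
+ //        Key:              map[string]*dynamodb.AttributeValue{"Id": {S: aws.String("123")}},
+ //        AttributeUpdates: map[string]*dynamodb.AttributeValueUpdate{"itemcount": update},
+ //    }
+ //
+ // The input is then passed to the UpdateItem operation.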
+ // + // This action is only valid for an existing attribute whose data type is number + // or is a set. Do not use ADD for any other data types. + // + // If no item with the specified Key is found: + // + // * PUT - DynamoDB creates a new item with the specified primary key, and + // then adds the attribute. + // + // * DELETE - Nothing happens; there is no attribute to delete. + // + // * ADD - DynamoDB creates an item with the supplied primary key and number + // (or set of numbers) for the attribute value. The only data types allowed + // are number and number set; no other data types can be specified. + Action *string `type:"string" enum:"AttributeAction"` + + // Represents the data for an attribute. + // + // Each attribute value is described as a name-value pair. The name is the data + // type, and the value is the data itself. + // + // For more information, see Data Types (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes) + // in the Amazon DynamoDB Developer Guide. + Value *AttributeValue `type:"structure"` +} + +// String returns the string representation +func (s AttributeValueUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributeValueUpdate) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *AttributeValueUpdate) SetAction(v string) *AttributeValueUpdate { + s.Action = &v + return s +} + +// SetValue sets the Value field's value. +func (s *AttributeValueUpdate) SetValue(v *AttributeValue) *AttributeValueUpdate { + s.Value = v + return s +} + +// Contains the description of the backup created for the table. +type BackupDescription struct { + _ struct{} `type:"structure"` + + // Contains the details of the backup created for the table. + BackupDetails *BackupDetails `type:"structure"` + + // Contains the details of the table when the backup was created. + SourceTableDetails *SourceTableDetails `type:"structure"` + + // Contains the details of the features enabled on the table when the backup + // was created. For example, LSIs, GSIs, streams, TTL. + SourceTableFeatureDetails *SourceTableFeatureDetails `type:"structure"` +} + +// String returns the string representation +func (s BackupDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BackupDescription) GoString() string { + return s.String() +} + +// SetBackupDetails sets the BackupDetails field's value. +func (s *BackupDescription) SetBackupDetails(v *BackupDetails) *BackupDescription { + s.BackupDetails = v + return s +} + +// SetSourceTableDetails sets the SourceTableDetails field's value. +func (s *BackupDescription) SetSourceTableDetails(v *SourceTableDetails) *BackupDescription { + s.SourceTableDetails = v + return s +} + +// SetSourceTableFeatureDetails sets the SourceTableFeatureDetails field's value. +func (s *BackupDescription) SetSourceTableFeatureDetails(v *SourceTableFeatureDetails) *BackupDescription { + s.SourceTableFeatureDetails = v + return s +} + +// Contains the details of the backup created for the table. +type BackupDetails struct { + _ struct{} `type:"structure"` + + // ARN associated with the backup. + // + // BackupArn is a required field + BackupArn *string `min:"37" type:"string" required:"true"` + + // Time at which the backup was created. This is the request time of the backup. 
+ // + // BackupCreationDateTime is a required field + BackupCreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // Name of the requested backup. + // + // BackupName is a required field + BackupName *string `min:"3" type:"string" required:"true"` + + // Size of the backup in bytes. + BackupSizeBytes *int64 `type:"long"` + + // Backup can be in one of the following states: CREATING, ACTIVE, DELETED. + // + // BackupStatus is a required field + BackupStatus *string `type:"string" required:"true" enum:"BackupStatus"` +} + +// String returns the string representation +func (s BackupDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BackupDetails) GoString() string { + return s.String() +} + +// SetBackupArn sets the BackupArn field's value. +func (s *BackupDetails) SetBackupArn(v string) *BackupDetails { + s.BackupArn = &v + return s +} + +// SetBackupCreationDateTime sets the BackupCreationDateTime field's value. +func (s *BackupDetails) SetBackupCreationDateTime(v time.Time) *BackupDetails { + s.BackupCreationDateTime = &v + return s +} + +// SetBackupName sets the BackupName field's value. +func (s *BackupDetails) SetBackupName(v string) *BackupDetails { + s.BackupName = &v + return s +} + +// SetBackupSizeBytes sets the BackupSizeBytes field's value. +func (s *BackupDetails) SetBackupSizeBytes(v int64) *BackupDetails { + s.BackupSizeBytes = &v + return s +} + +// SetBackupStatus sets the BackupStatus field's value. +func (s *BackupDetails) SetBackupStatus(v string) *BackupDetails { + s.BackupStatus = &v + return s +} + +// Contains details for the backup. +type BackupSummary struct { + _ struct{} `type:"structure"` + + // ARN associated with the backup. + BackupArn *string `min:"37" type:"string"` + + // Time at which the backup was created. + BackupCreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Name of the specified backup. + BackupName *string `min:"3" type:"string"` + + // Size of the backup in bytes. + BackupSizeBytes *int64 `type:"long"` + + // Backup can be in one of the following states: CREATING, ACTIVE, DELETED. + BackupStatus *string `type:"string" enum:"BackupStatus"` + + // ARN associated with the table. + TableArn *string `type:"string"` + + // Unique identifier for the table. + TableId *string `type:"string"` + + // Name of the table. + TableName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s BackupSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BackupSummary) GoString() string { + return s.String() +} + +// SetBackupArn sets the BackupArn field's value. +func (s *BackupSummary) SetBackupArn(v string) *BackupSummary { + s.BackupArn = &v + return s +} + +// SetBackupCreationDateTime sets the BackupCreationDateTime field's value. +func (s *BackupSummary) SetBackupCreationDateTime(v time.Time) *BackupSummary { + s.BackupCreationDateTime = &v + return s +} + +// SetBackupName sets the BackupName field's value. +func (s *BackupSummary) SetBackupName(v string) *BackupSummary { + s.BackupName = &v + return s +} + +// SetBackupSizeBytes sets the BackupSizeBytes field's value. +func (s *BackupSummary) SetBackupSizeBytes(v int64) *BackupSummary { + s.BackupSizeBytes = &v + return s +} + +// SetBackupStatus sets the BackupStatus field's value. 
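+//
+// Editor's note (illustrative sketch, not part of the generated code):
+// BackupSummary values are typically obtained from the ListBackups operation.
+// Assuming the aws, session, dynamodb, and fmt packages are imported, and using
+// the hypothetical table name "Music":
+//
+//    svc := dynamodb.New(session.Must(session.NewSession()))
+//    out, err := svc.ListBackups(&dynamodb.ListBackupsInput{
+//        TableName: aws.String("Music"),
+//        Limit:     aws.Int64(10),
+//    })
+//    if err == nil {
+//        for _, b := range out.BackupSummaries {
+//            fmt.Println(aws.StringValue(b.BackupName), aws.StringValue(b.BackupStatus))
+//        }
+//    }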
+func (s *BackupSummary) SetBackupStatus(v string) *BackupSummary { + s.BackupStatus = &v + return s +} + +// SetTableArn sets the TableArn field's value. +func (s *BackupSummary) SetTableArn(v string) *BackupSummary { + s.TableArn = &v + return s +} + +// SetTableId sets the TableId field's value. +func (s *BackupSummary) SetTableId(v string) *BackupSummary { + s.TableId = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *BackupSummary) SetTableName(v string) *BackupSummary { + s.TableName = &v + return s +} + +// Represents the input of a BatchGetItem operation. +type BatchGetItemInput struct { + _ struct{} `type:"structure"` + + // A map of one or more table names and, for each table, a map that describes + // one or more items to retrieve from that table. Each table name can be used + // only once per BatchGetItem request. + // + // Each element in the map of items to retrieve consists of the following: + // + // * ConsistentRead - If true, a strongly consistent read is used; if false + // (the default), an eventually consistent read is used. + // + // * ExpressionAttributeNames - One or more substitution tokens for attribute + // names in the ProjectionExpression parameter. The following are some use + // cases for using ExpressionAttributeNames: + // + // To access an attribute whose name conflicts with a DynamoDB reserved word. + // + // To create a placeholder for repeating occurrences of an attribute name in + // an expression. + // + // To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could + // specify the following for ExpressionAttributeNames: + // + // {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. + // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + // + // * Keys - An array of primary key attribute values that define specific + // items in the table. For each primary key, you must provide all of the + // key attributes. For example, with a simple primary key, you only need + // to provide the partition key value. For a composite key, you must provide + // both the partition key value and the sort key value. + // + // * ProjectionExpression - A string that identifies one or more attributes + // to retrieve from the table. These attributes can include scalars, sets, + // or elements of a JSON document. The attributes in the expression must + // be separated by commas. + // + // If no attribute names are specified, then all attributes will be returned. + // If any of the requested attributes are not found, they will not appear + // in the result. 
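+ //
+ // Editor's illustration (not generated SDK text): a single-table request that
+ // combines Keys, ProjectionExpression, and the #P placeholder described above
+ // could be built as follows, with hypothetical table and key names:
+ //
+ //    input := &dynamodb.BatchGetItemInput{
+ //        RequestItems: map[string]*dynamodb.KeysAndAttributes{
+ //            "Stats": {
+ //                Keys: []map[string]*dynamodb.AttributeValue{
+ //                    {"Id": {S: aws.String("123")}},
+ //                },
+ //                ProjectionExpression:     aws.String("#P"),
+ //                ExpressionAttributeNames: map[string]*string{"#P": aws.String("Percentile")},
+ //            },
+ //        },
+ //    }
+ //
+ // The input is then passed to the BatchGetItem operation.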
+ // + // For more information, see Accessing Item Attributes (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + // + // * AttributesToGet - This is a legacy parameter. Use ProjectionExpression + // instead. For more information, see AttributesToGet (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.AttributesToGet.html) + // in the Amazon DynamoDB Developer Guide. + // + // RequestItems is a required field + RequestItems map[string]*KeysAndAttributes `min:"1" type:"map" required:"true"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` +} + +// String returns the string representation +func (s BatchGetItemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchGetItemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchGetItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchGetItemInput"} + if s.RequestItems == nil { + invalidParams.Add(request.NewErrParamRequired("RequestItems")) + } + if s.RequestItems != nil && len(s.RequestItems) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RequestItems", 1)) + } + if s.RequestItems != nil { + for i, v := range s.RequestItems { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RequestItems", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRequestItems sets the RequestItems field's value. +func (s *BatchGetItemInput) SetRequestItems(v map[string]*KeysAndAttributes) *BatchGetItemInput { + s.RequestItems = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *BatchGetItemInput) SetReturnConsumedCapacity(v string) *BatchGetItemInput { + s.ReturnConsumedCapacity = &v + return s +} + +// Represents the output of a BatchGetItem operation. +type BatchGetItemOutput struct { + _ struct{} `type:"structure"` + + // The read capacity units consumed by the entire BatchGetItem operation. + // + // Each element consists of: + // + // * TableName - The table that consumed the provisioned throughput. + // + // * CapacityUnits - The total number of capacity units consumed. + ConsumedCapacity []*ConsumedCapacity `type:"list"` + + // A map of table name to a list of items. Each object in Responses consists + // of a table name, along with a map of attribute data consisting of the data + // type and attribute value. 
+ Responses map[string][]map[string]*AttributeValue `type:"map"` + + // A map of tables and their respective keys that were not processed with the + // current response. The UnprocessedKeys value is in the same form as RequestItems, + // so the value can be provided directly to a subsequent BatchGetItem operation. + // For more information, see RequestItems in the Request Parameters section. + // + // Each element consists of: + // + // * Keys - An array of primary key attribute values that define specific + // items in the table. + // + // * ProjectionExpression - One or more attributes to be retrieved from the + // table or index. By default, all attributes are returned. If a requested + // attribute is not found, it does not appear in the result. + // + // * ConsistentRead - The consistency of a read operation. If set to true, + // then a strongly consistent read is used; otherwise, an eventually consistent + // read is used. + // + // If there are no unprocessed keys remaining, the response contains an empty + // UnprocessedKeys map. + UnprocessedKeys map[string]*KeysAndAttributes `min:"1" type:"map"` +} + +// String returns the string representation +func (s BatchGetItemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchGetItemOutput) GoString() string { + return s.String() +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *BatchGetItemOutput) SetConsumedCapacity(v []*ConsumedCapacity) *BatchGetItemOutput { + s.ConsumedCapacity = v + return s +} + +// SetResponses sets the Responses field's value. +func (s *BatchGetItemOutput) SetResponses(v map[string][]map[string]*AttributeValue) *BatchGetItemOutput { + s.Responses = v + return s +} + +// SetUnprocessedKeys sets the UnprocessedKeys field's value. +func (s *BatchGetItemOutput) SetUnprocessedKeys(v map[string]*KeysAndAttributes) *BatchGetItemOutput { + s.UnprocessedKeys = v + return s +} + +// Represents the input of a BatchWriteItem operation. +type BatchWriteItemInput struct { + _ struct{} `type:"structure"` + + // A map of one or more table names and, for each table, a list of operations + // to be performed (DeleteRequest or PutRequest). Each element in the map consists + // of the following: + // + // * DeleteRequest - Perform a DeleteItem operation on the specified item. + // The item to be deleted is identified by a Key subelement: + // + // Key - A map of primary key attribute values that uniquely identify the item. + // Each entry in this map consists of an attribute name and an attribute + // value. For each primary key, you must provide all of the key attributes. + // For example, with a simple primary key, you only need to provide a value + // for the partition key. For a composite primary key, you must provide values + // for both the partition key and the sort key. + // + // * PutRequest - Perform a PutItem operation on the specified item. The + // item to be put is identified by an Item subelement: + // + // Item - A map of attributes and their values. Each entry in this map consists + // of an attribute name and an attribute value. Attribute values must not + // be null; string and binary type attributes must have lengths greater than + // zero; and set type attributes must not be empty. Requests that contain + // empty values will be rejected with a ValidationException exception. 
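+ //
+ // Editor's illustration (not generated SDK text): a request that mixes one
+ // PutRequest and one DeleteRequest for a hypothetical "Music" table could be
+ // built as shown below; any UnprocessedItems returned by the call can be fed
+ // back into a follow-up BatchWriteItem request, as described later for
+ // BatchWriteItemOutput.
+ //
+ //    input := &dynamodb.BatchWriteItemInput{
+ //        RequestItems: map[string][]*dynamodb.WriteRequest{
+ //            "Music": {
+ //                {PutRequest: &dynamodb.PutRequest{Item: map[string]*dynamodb.AttributeValue{
+ //                    "Artist": {S: aws.String("No One You Know")},
+ //                    "Song":   {S: aws.String("Call Me Today")},
+ //                }}},
+ //                {DeleteRequest: &dynamodb.DeleteRequest{Key: map[string]*dynamodb.AttributeValue{
+ //                    "Artist": {S: aws.String("Old Artist")},
+ //                    "Song":   {S: aws.String("Old Song")},
+ //                }}},
+ //            },
+ //        },
+ //    }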
+ // + // If you specify any attributes that are part of an index key, then the data + // types for those attributes must match those of the schema in the table's + // attribute definition. + // + // RequestItems is a required field + RequestItems map[string][]*WriteRequest `min:"1" type:"map" required:"true"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` + + // Determines whether item collection metrics are returned. If set to SIZE, + // the response includes statistics about item collections, if any, that were + // modified during the operation are returned in the response. If set to NONE + // (the default), no statistics are returned. + ReturnItemCollectionMetrics *string `type:"string" enum:"ReturnItemCollectionMetrics"` +} + +// String returns the string representation +func (s BatchWriteItemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchWriteItemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchWriteItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchWriteItemInput"} + if s.RequestItems == nil { + invalidParams.Add(request.NewErrParamRequired("RequestItems")) + } + if s.RequestItems != nil && len(s.RequestItems) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RequestItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRequestItems sets the RequestItems field's value. +func (s *BatchWriteItemInput) SetRequestItems(v map[string][]*WriteRequest) *BatchWriteItemInput { + s.RequestItems = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *BatchWriteItemInput) SetReturnConsumedCapacity(v string) *BatchWriteItemInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetReturnItemCollectionMetrics sets the ReturnItemCollectionMetrics field's value. +func (s *BatchWriteItemInput) SetReturnItemCollectionMetrics(v string) *BatchWriteItemInput { + s.ReturnItemCollectionMetrics = &v + return s +} + +// Represents the output of a BatchWriteItem operation. +type BatchWriteItemOutput struct { + _ struct{} `type:"structure"` + + // The capacity units consumed by the entire BatchWriteItem operation. + // + // Each element consists of: + // + // * TableName - The table that consumed the provisioned throughput. + // + // * CapacityUnits - The total number of capacity units consumed. + ConsumedCapacity []*ConsumedCapacity `type:"list"` + + // A list of tables that were processed by BatchWriteItem and, for each table, + // information about any item collections that were affected by individual DeleteItem + // or PutItem operations. 
+ // + // Each entry consists of the following subelements: + // + // * ItemCollectionKey - The partition key value of the item collection. + // This is the same as the partition key value of the item. + // + // * SizeEstimateRangeGB - An estimate of item collection size, expressed + // in GB. This is a two-element array containing a lower bound and an upper + // bound for the estimate. The estimate includes the size of all the items + // in the table, plus the size of all attributes projected into all of the + // local secondary indexes on the table. Use this estimate to measure whether + // a local secondary index is approaching its size limit. + // + // The estimate is subject to change over time; therefore, do not rely on the + // precision or accuracy of the estimate. + ItemCollectionMetrics map[string][]*ItemCollectionMetrics `type:"map"` + + // A map of tables and requests against those tables that were not processed. + // The UnprocessedItems value is in the same form as RequestItems, so you can + // provide this value directly to a subsequent BatchGetItem operation. For more + // information, see RequestItems in the Request Parameters section. + // + // Each UnprocessedItems entry consists of a table name and, for that table, + // a list of operations to perform (DeleteRequest or PutRequest). + // + // * DeleteRequest - Perform a DeleteItem operation on the specified item. + // The item to be deleted is identified by a Key subelement: + // + // Key - A map of primary key attribute values that uniquely identify the item. + // Each entry in this map consists of an attribute name and an attribute + // value. + // + // * PutRequest - Perform a PutItem operation on the specified item. The + // item to be put is identified by an Item subelement: + // + // Item - A map of attributes and their values. Each entry in this map consists + // of an attribute name and an attribute value. Attribute values must not + // be null; string and binary type attributes must have lengths greater than + // zero; and set type attributes must not be empty. Requests that contain + // empty values will be rejected with a ValidationException exception. + // + // If you specify any attributes that are part of an index key, then the data + // types for those attributes must match those of the schema in the table's + // attribute definition. + // + // If there are no unprocessed items remaining, the response contains an empty + // UnprocessedItems map. + UnprocessedItems map[string][]*WriteRequest `min:"1" type:"map"` +} + +// String returns the string representation +func (s BatchWriteItemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchWriteItemOutput) GoString() string { + return s.String() +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *BatchWriteItemOutput) SetConsumedCapacity(v []*ConsumedCapacity) *BatchWriteItemOutput { + s.ConsumedCapacity = v + return s +} + +// SetItemCollectionMetrics sets the ItemCollectionMetrics field's value. +func (s *BatchWriteItemOutput) SetItemCollectionMetrics(v map[string][]*ItemCollectionMetrics) *BatchWriteItemOutput { + s.ItemCollectionMetrics = v + return s +} + +// SetUnprocessedItems sets the UnprocessedItems field's value. +func (s *BatchWriteItemOutput) SetUnprocessedItems(v map[string][]*WriteRequest) *BatchWriteItemOutput { + s.UnprocessedItems = v + return s +} + +// Represents the amount of provisioned throughput capacity consumed on a table +// or an index. 
+type Capacity struct { + _ struct{} `type:"structure"` + + // The total number of capacity units consumed on a table or an index. + CapacityUnits *float64 `type:"double"` +} + +// String returns the string representation +func (s Capacity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Capacity) GoString() string { + return s.String() +} + +// SetCapacityUnits sets the CapacityUnits field's value. +func (s *Capacity) SetCapacityUnits(v float64) *Capacity { + s.CapacityUnits = &v + return s +} + +// Represents the selection criteria for a Query or Scan operation: +// +// * For a Query operation, Condition is used for specifying the KeyConditions +// to use when querying a table or an index. For KeyConditions, only the +// following comparison operators are supported: +// +// EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN +// +// Condition is also used in a QueryFilter, which evaluates the query results +// and returns only the desired values. +// +// * For a Scan operation, Condition is used in a ScanFilter, which evaluates +// the scan results and returns only the desired values. +type Condition struct { + _ struct{} `type:"structure"` + + // One or more values to evaluate against the supplied attribute. The number + // of values in the list depends on the ComparisonOperator being used. + // + // For type Number, value comparisons are numeric. + // + // String value comparisons for greater than, equals, or less than are based + // on ASCII character code values. For example, a is greater than A, and a is + // greater than B. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters + // (http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters). + // + // For Binary, DynamoDB treats each byte of the binary data as unsigned when + // it compares binary values. + AttributeValueList []*AttributeValue `type:"list"` + + // A comparator for evaluating attributes. For example, equals, greater than, + // less than, etc. + // + // The following comparison operators are available: + // + // EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | + // BEGINS_WITH | IN | BETWEEN + // + // The following are descriptions of each comparison operator. + // + // * EQ : Equal. EQ is supported for all data types, including lists and + // maps. + // + // AttributeValueList can contain only one AttributeValue element of type String, + // Number, Binary, String Set, Number Set, or Binary Set. If an item contains + // an AttributeValue element of a different type than the one provided in + // the request, the value does not match. For example, {"S":"6"} does not + // equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}. + // + // * NE : Not equal. NE is supported for all data types, including lists + // and maps. + // + // * AttributeValueList can contain only one AttributeValue of type String, + // Number, Binary, String Set, Number Set, or Binary Set. If an item contains + // an AttributeValue of a different type than the one provided in the request, + // the value does not match. For example, {"S":"6"} does not equal {"N":"6"}. + // Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}. + // + // * LE : Less than or equal. + // + // AttributeValueList can contain only one AttributeValue element of type String, + // Number, or Binary (not a set type). 
If an item contains an AttributeValue
+ // element of a different type than the one provided in the request, the value
+ // does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"}
+ // does not compare to {"NS":["6", "2", "1"]}.
+ //
+ // * LT : Less than.
+ //
+ // AttributeValueList can contain only one AttributeValue of type String, Number,
+ // or Binary (not a set type). If an item contains an AttributeValue element
+ // of a different type than the one provided in the request, the value does
+ // not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"}
+ // does not compare to {"NS":["6", "2", "1"]}.
+ //
+ // ComparisonOperator is a required field
+ ComparisonOperator *string `type:"string" required:"true" enum:"ComparisonOperator"`
+}
+
+// String returns the string representation
+func (s Condition) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Condition) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *Condition) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "Condition"}
+ if s.ComparisonOperator == nil {
+ invalidParams.Add(request.NewErrParamRequired("ComparisonOperator"))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetAttributeValueList sets the AttributeValueList field's value.
+func (s *Condition) SetAttributeValueList(v []*AttributeValue) *Condition {
+ s.AttributeValueList = v
+ return s
+}
+
+// SetComparisonOperator sets the ComparisonOperator field's value.
+func (s *Condition) SetComparisonOperator(v string) *Condition {
+ s.ComparisonOperator = &v
+ return s
+}
+
+// The capacity units consumed by an operation. The data returned includes the
+// total provisioned throughput consumed, along with statistics for the table
+// and any indexes involved in the operation. ConsumedCapacity is only returned
+// if the request asked for it. For more information, see Provisioned Throughput
+// (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html)
+// in the Amazon DynamoDB Developer Guide.
+type ConsumedCapacity struct {
+ _ struct{} `type:"structure"`
+
+ // The total number of capacity units consumed by the operation.
+ CapacityUnits *float64 `type:"double"`
+
+ // The amount of throughput consumed on each global index affected by the operation.
+ GlobalSecondaryIndexes map[string]*Capacity `type:"map"`
+
+ // The amount of throughput consumed on each local index affected by the operation.
+ LocalSecondaryIndexes map[string]*Capacity `type:"map"`
+
+ // The amount of throughput consumed on the table affected by the operation.
+ Table *Capacity `type:"structure"`
+
+ // The name of the table that was affected by the operation.
+ TableName *string `min:"3" type:"string"`
+}
+
+// String returns the string representation
+func (s ConsumedCapacity) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ConsumedCapacity) GoString() string {
+ return s.String()
+}
+
+// SetCapacityUnits sets the CapacityUnits field's value.
+func (s *ConsumedCapacity) SetCapacityUnits(v float64) *ConsumedCapacity {
+ s.CapacityUnits = &v
+ return s
+}
+
+// SetGlobalSecondaryIndexes sets the GlobalSecondaryIndexes field's value.
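+//
+// Editor's note (illustrative sketch, not part of the generated code):
+// ConsumedCapacity is only populated when a request opts in through its
+// ReturnConsumedCapacity parameter. For example, with a hypothetical GetItem
+// call (assuming the aws, session, dynamodb, and fmt packages are imported):
+//
+//    svc := dynamodb.New(session.Must(session.NewSession()))
+//    out, err := svc.GetItem(&dynamodb.GetItemInput{
+//        TableName:              aws.String("Users"),
+//        Key:                    map[string]*dynamodb.AttributeValue{"Id": {S: aws.String("123")}},
+//        ReturnConsumedCapacity: aws.String(dynamodb.ReturnConsumedCapacityTotal),
+//    })
+//    if err == nil && out.ConsumedCapacity != nil {
+//        fmt.Println(aws.Float64Value(out.ConsumedCapacity.CapacityUnits))
+//    }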
+func (s *ConsumedCapacity) SetGlobalSecondaryIndexes(v map[string]*Capacity) *ConsumedCapacity { + s.GlobalSecondaryIndexes = v + return s +} + +// SetLocalSecondaryIndexes sets the LocalSecondaryIndexes field's value. +func (s *ConsumedCapacity) SetLocalSecondaryIndexes(v map[string]*Capacity) *ConsumedCapacity { + s.LocalSecondaryIndexes = v + return s +} + +// SetTable sets the Table field's value. +func (s *ConsumedCapacity) SetTable(v *Capacity) *ConsumedCapacity { + s.Table = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *ConsumedCapacity) SetTableName(v string) *ConsumedCapacity { + s.TableName = &v + return s +} + +// Represents the continuous backups and point in time recovery settings on +// the table. +type ContinuousBackupsDescription struct { + _ struct{} `type:"structure"` + + // ContinuousBackupsStatus can be one of the following states : ENABLED, DISABLED + // + // ContinuousBackupsStatus is a required field + ContinuousBackupsStatus *string `type:"string" required:"true" enum:"ContinuousBackupsStatus"` + + // The description of the point in time recovery settings applied to the table. + PointInTimeRecoveryDescription *PointInTimeRecoveryDescription `type:"structure"` +} + +// String returns the string representation +func (s ContinuousBackupsDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContinuousBackupsDescription) GoString() string { + return s.String() +} + +// SetContinuousBackupsStatus sets the ContinuousBackupsStatus field's value. +func (s *ContinuousBackupsDescription) SetContinuousBackupsStatus(v string) *ContinuousBackupsDescription { + s.ContinuousBackupsStatus = &v + return s +} + +// SetPointInTimeRecoveryDescription sets the PointInTimeRecoveryDescription field's value. +func (s *ContinuousBackupsDescription) SetPointInTimeRecoveryDescription(v *PointInTimeRecoveryDescription) *ContinuousBackupsDescription { + s.PointInTimeRecoveryDescription = v + return s +} + +type CreateBackupInput struct { + _ struct{} `type:"structure"` + + // Specified name for the backup. + // + // BackupName is a required field + BackupName *string `min:"3" type:"string" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateBackupInput"} + if s.BackupName == nil { + invalidParams.Add(request.NewErrParamRequired("BackupName")) + } + if s.BackupName != nil && len(*s.BackupName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("BackupName", 3)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupName sets the BackupName field's value. +func (s *CreateBackupInput) SetBackupName(v string) *CreateBackupInput { + s.BackupName = &v + return s +} + +// SetTableName sets the TableName field's value. 
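+//
+// Editor's note (illustrative sketch, not part of the generated code): an
+// on-demand backup for a hypothetical table could be requested as follows,
+// assuming the aws, session, dynamodb, and fmt packages are imported:
+//
+//    svc := dynamodb.New(session.Must(session.NewSession()))
+//    out, err := svc.CreateBackup(&dynamodb.CreateBackupInput{
+//        TableName:  aws.String("Music"),
+//        BackupName: aws.String("Music-pre-migration"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.BackupDetails.BackupArn))
+//    }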
+func (s *CreateBackupInput) SetTableName(v string) *CreateBackupInput { + s.TableName = &v + return s +} + +type CreateBackupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of the backup created for the table. + BackupDetails *BackupDetails `type:"structure"` +} + +// String returns the string representation +func (s CreateBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBackupOutput) GoString() string { + return s.String() +} + +// SetBackupDetails sets the BackupDetails field's value. +func (s *CreateBackupOutput) SetBackupDetails(v *BackupDetails) *CreateBackupOutput { + s.BackupDetails = v + return s +} + +// Represents a new global secondary index to be added to an existing table. +type CreateGlobalSecondaryIndexAction struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index to be created. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The key schema for the global secondary index. + // + // KeySchema is a required field + KeySchema []*KeySchemaElement `min:"1" type:"list" required:"true"` + + // Represents attributes that are copied (projected) from the table into an + // index. These are in addition to the primary key attributes and index key + // attributes, which are automatically projected. + // + // Projection is a required field + Projection *Projection `type:"structure" required:"true"` + + // Represents the provisioned throughput settings for the specified global secondary + // index. + // + // For current minimum and maximum provisioned throughput values, see Limits + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) + // in the Amazon DynamoDB Developer Guide. + // + // ProvisionedThroughput is a required field + ProvisionedThroughput *ProvisionedThroughput `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateGlobalSecondaryIndexAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGlobalSecondaryIndexAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
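+//
+// Editor's note (illustrative sketch, not part of the generated code): this
+// action is normally supplied to UpdateTable inside a GlobalSecondaryIndexUpdate.
+// Index, attribute, and capacity values below are hypothetical:
+//
+//    update := &dynamodb.GlobalSecondaryIndexUpdate{
+//        Create: &dynamodb.CreateGlobalSecondaryIndexAction{
+//            IndexName: aws.String("AlbumIndex"),
+//            KeySchema: []*dynamodb.KeySchemaElement{
+//                {AttributeName: aws.String("Album"), KeyType: aws.String(dynamodb.KeyTypeHash)},
+//            },
+//            Projection: &dynamodb.Projection{ProjectionType: aws.String(dynamodb.ProjectionTypeAll)},
+//            ProvisionedThroughput: &dynamodb.ProvisionedThroughput{
+//                ReadCapacityUnits:  aws.Int64(5),
+//                WriteCapacityUnits: aws.Int64(5),
+//            },
+//        },
+//    }
+//
+// The update goes into UpdateTableInput.GlobalSecondaryIndexUpdates, and the
+// enclosing UpdateTableInput must also define the "Album" attribute in its
+// AttributeDefinitions.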
+func (s *CreateGlobalSecondaryIndexAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateGlobalSecondaryIndexAction"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.KeySchema == nil { + invalidParams.Add(request.NewErrParamRequired("KeySchema")) + } + if s.KeySchema != nil && len(s.KeySchema) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KeySchema", 1)) + } + if s.Projection == nil { + invalidParams.Add(request.NewErrParamRequired("Projection")) + } + if s.ProvisionedThroughput == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisionedThroughput")) + } + if s.KeySchema != nil { + for i, v := range s.KeySchema { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "KeySchema", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Projection != nil { + if err := s.Projection.Validate(); err != nil { + invalidParams.AddNested("Projection", err.(request.ErrInvalidParams)) + } + } + if s.ProvisionedThroughput != nil { + if err := s.ProvisionedThroughput.Validate(); err != nil { + invalidParams.AddNested("ProvisionedThroughput", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *CreateGlobalSecondaryIndexAction) SetIndexName(v string) *CreateGlobalSecondaryIndexAction { + s.IndexName = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *CreateGlobalSecondaryIndexAction) SetKeySchema(v []*KeySchemaElement) *CreateGlobalSecondaryIndexAction { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *CreateGlobalSecondaryIndexAction) SetProjection(v *Projection) *CreateGlobalSecondaryIndexAction { + s.Projection = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *CreateGlobalSecondaryIndexAction) SetProvisionedThroughput(v *ProvisionedThroughput) *CreateGlobalSecondaryIndexAction { + s.ProvisionedThroughput = v + return s +} + +type CreateGlobalTableInput struct { + _ struct{} `type:"structure"` + + // The global table name. + // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` + + // The regions where the global table needs to be created. + // + // ReplicationGroup is a required field + ReplicationGroup []*Replica `type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateGlobalTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGlobalTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
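+//
+// Editor's note (illustrative sketch, not part of the generated code): a
+// two-region replication group for a hypothetical global table could be
+// requested as shown below; each listed region must already contain a table
+// with the same name and key schema.
+//
+//    input := &dynamodb.CreateGlobalTableInput{
+//        GlobalTableName: aws.String("Music"),
+//        ReplicationGroup: []*dynamodb.Replica{
+//            {RegionName: aws.String("us-east-1")},
+//            {RegionName: aws.String("us-west-2")},
+//        },
+//    }
+//
+// The input is then passed to the CreateGlobalTable operation.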
+func (s *CreateGlobalTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateGlobalTableInput"} + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + if s.ReplicationGroup == nil { + invalidParams.Add(request.NewErrParamRequired("ReplicationGroup")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *CreateGlobalTableInput) SetGlobalTableName(v string) *CreateGlobalTableInput { + s.GlobalTableName = &v + return s +} + +// SetReplicationGroup sets the ReplicationGroup field's value. +func (s *CreateGlobalTableInput) SetReplicationGroup(v []*Replica) *CreateGlobalTableInput { + s.ReplicationGroup = v + return s +} + +type CreateGlobalTableOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of the global table. + GlobalTableDescription *GlobalTableDescription `type:"structure"` +} + +// String returns the string representation +func (s CreateGlobalTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGlobalTableOutput) GoString() string { + return s.String() +} + +// SetGlobalTableDescription sets the GlobalTableDescription field's value. +func (s *CreateGlobalTableOutput) SetGlobalTableDescription(v *GlobalTableDescription) *CreateGlobalTableOutput { + s.GlobalTableDescription = v + return s +} + +// Represents a replica to be added. +type CreateReplicaAction struct { + _ struct{} `type:"structure"` + + // The region of the replica to be added. + // + // RegionName is a required field + RegionName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateReplicaAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateReplicaAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateReplicaAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateReplicaAction"} + if s.RegionName == nil { + invalidParams.Add(request.NewErrParamRequired("RegionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRegionName sets the RegionName field's value. +func (s *CreateReplicaAction) SetRegionName(v string) *CreateReplicaAction { + s.RegionName = &v + return s +} + +// Represents the input of a CreateTable operation. +type CreateTableInput struct { + _ struct{} `type:"structure"` + + // An array of attributes that describe the key schema for the table and indexes. + // + // AttributeDefinitions is a required field + AttributeDefinitions []*AttributeDefinition `type:"list" required:"true"` + + // One or more global secondary indexes (the maximum is five) to be created + // on the table. Each global secondary index in the array includes the following: + // + // * IndexName - The name of the global secondary index. Must be unique only + // for this table. + // + // * KeySchema - Specifies the key schema for the global secondary index. + // + // * Projection - Specifies attributes that are copied (projected) from the + // table into the index. 
These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. Each attribute + // specification is composed of: + // + // * ProjectionType - One of the following: + // + // KEYS_ONLY - Only the index and primary keys are projected into the index. + // + // INCLUDE - Only the specified table attributes are projected into the index. + // The list of projected attributes are in NonKeyAttributes. + // + // ALL - All of the table attributes are projected into the index. + // + // NonKeyAttributes - A list of one or more non-key attribute names that are + // projected into the secondary index. The total count of attributes provided + // in NonKeyAttributes, summed across all of the secondary indexes, must + // not exceed 20. If you project the same attribute into two different indexes, + // this counts as two distinct attributes when determining the total. + // + // * ProvisionedThroughput - The provisioned throughput settings for the + // global secondary index, consisting of read and write capacity units. + GlobalSecondaryIndexes []*GlobalSecondaryIndex `type:"list"` + + // Specifies the attributes that make up the primary key for a table or an index. + // The attributes in KeySchema must also be defined in the AttributeDefinitions + // array. For more information, see Data Model (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html) + // in the Amazon DynamoDB Developer Guide. + // + // Each KeySchemaElement in the array is composed of: + // + // * AttributeName - The name of this key attribute. + // + // * KeyType - The role that the key attribute will assume: + // + // HASH - partition key + // + // RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + // + // For a simple primary key (partition key), you must provide exactly one element + // with a KeyType of HASH. + // + // For a composite primary key (partition key and sort key), you must provide + // exactly two elements, in this order: The first element must have a KeyType + // of HASH, and the second element must have a KeyType of RANGE. + // + // For more information, see Specifying the Primary Key (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#WorkingWithTables.primary.key) + // in the Amazon DynamoDB Developer Guide. + // + // KeySchema is a required field + KeySchema []*KeySchemaElement `min:"1" type:"list" required:"true"` + + // One or more local secondary indexes (the maximum is five) to be created on + // the table. Each index is scoped to a given partition key value. There is + // a 10 GB size limit per partition key value; otherwise, the size of a local + // secondary index is unconstrained. + // + // Each local secondary index in the array includes the following: + // + // * IndexName - The name of the local secondary index. Must be unique only + // for this table. + // + // * KeySchema - Specifies the key schema for the local secondary index. + // The key schema must begin with the same partition key as the table. 
+ // + // * Projection - Specifies attributes that are copied (projected) from the + // table into the index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. Each attribute + // specification is composed of: + // + // * ProjectionType - One of the following: + // + // KEYS_ONLY - Only the index and primary keys are projected into the index. + // + // INCLUDE - Only the specified table attributes are projected into the index. + // The list of projected attributes are in NonKeyAttributes. + // + // ALL - All of the table attributes are projected into the index. + // + // NonKeyAttributes - A list of one or more non-key attribute names that are + // projected into the secondary index. The total count of attributes provided + // in NonKeyAttributes, summed across all of the secondary indexes, must + // not exceed 20. If you project the same attribute into two different indexes, + // this counts as two distinct attributes when determining the total. + LocalSecondaryIndexes []*LocalSecondaryIndex `type:"list"` + + // Represents the provisioned throughput settings for a specified table or index. + // The settings can be modified using the UpdateTable operation. + // + // For current minimum and maximum provisioned throughput values, see Limits + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) + // in the Amazon DynamoDB Developer Guide. + // + // ProvisionedThroughput is a required field + ProvisionedThroughput *ProvisionedThroughput `type:"structure" required:"true"` + + // Represents the settings used to enable server-side encryption. + SSESpecification *SSESpecification `type:"structure"` + + // The settings for DynamoDB Streams on the table. These settings consist of: + // + // * StreamEnabled - Indicates whether Streams is to be enabled (true) or + // disabled (false). + // + // * StreamViewType - When an item in the table is modified, StreamViewType + // determines what information is written to the table's stream. Valid values + // for StreamViewType are: + // + // KEYS_ONLY - Only the key attributes of the modified item are written to the + // stream. + // + // NEW_IMAGE - The entire item, as it appears after it was modified, is written + // to the stream. + // + // OLD_IMAGE - The entire item, as it appeared before it was modified, is written + // to the stream. + // + // NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are + // written to the stream. + StreamSpecification *StreamSpecification `type:"structure"` + + // The name of the table to create. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
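+//
+// Editor's note (illustrative sketch, not part of the generated code): a
+// minimal request for a table with a composite primary key and a stream, using
+// hypothetical names and capacity values, could look like this (assuming the
+// aws, session, and dynamodb packages are imported):
+//
+//    svc := dynamodb.New(session.Must(session.NewSession()))
+//    input := &dynamodb.CreateTableInput{
+//        TableName: aws.String("Music"),
+//        AttributeDefinitions: []*dynamodb.AttributeDefinition{
+//            {AttributeName: aws.String("Artist"), AttributeType: aws.String(dynamodb.ScalarAttributeTypeS)},
+//            {AttributeName: aws.String("Song"), AttributeType: aws.String(dynamodb.ScalarAttributeTypeS)},
+//        },
+//        KeySchema: []*dynamodb.KeySchemaElement{
+//            {AttributeName: aws.String("Artist"), KeyType: aws.String(dynamodb.KeyTypeHash)},
+//            {AttributeName: aws.String("Song"), KeyType: aws.String(dynamodb.KeyTypeRange)},
+//        },
+//        ProvisionedThroughput: &dynamodb.ProvisionedThroughput{
+//            ReadCapacityUnits:  aws.Int64(5),
+//            WriteCapacityUnits: aws.Int64(5),
+//        },
+//        StreamSpecification: &dynamodb.StreamSpecification{
+//            StreamEnabled:  aws.Bool(true),
+//            StreamViewType: aws.String(dynamodb.StreamViewTypeNewAndOldImages),
+//        },
+//    }
+//    if _, err := svc.CreateTable(input); err != nil {
+//        // handle the error, e.g. ErrCodeResourceInUseException for an existing table
+//    }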
+func (s *CreateTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTableInput"} + if s.AttributeDefinitions == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeDefinitions")) + } + if s.KeySchema == nil { + invalidParams.Add(request.NewErrParamRequired("KeySchema")) + } + if s.KeySchema != nil && len(s.KeySchema) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KeySchema", 1)) + } + if s.ProvisionedThroughput == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisionedThroughput")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.AttributeDefinitions != nil { + for i, v := range s.AttributeDefinitions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AttributeDefinitions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.GlobalSecondaryIndexes != nil { + for i, v := range s.GlobalSecondaryIndexes { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "GlobalSecondaryIndexes", i), err.(request.ErrInvalidParams)) + } + } + } + if s.KeySchema != nil { + for i, v := range s.KeySchema { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "KeySchema", i), err.(request.ErrInvalidParams)) + } + } + } + if s.LocalSecondaryIndexes != nil { + for i, v := range s.LocalSecondaryIndexes { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LocalSecondaryIndexes", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ProvisionedThroughput != nil { + if err := s.ProvisionedThroughput.Validate(); err != nil { + invalidParams.AddNested("ProvisionedThroughput", err.(request.ErrInvalidParams)) + } + } + if s.SSESpecification != nil { + if err := s.SSESpecification.Validate(); err != nil { + invalidParams.AddNested("SSESpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeDefinitions sets the AttributeDefinitions field's value. +func (s *CreateTableInput) SetAttributeDefinitions(v []*AttributeDefinition) *CreateTableInput { + s.AttributeDefinitions = v + return s +} + +// SetGlobalSecondaryIndexes sets the GlobalSecondaryIndexes field's value. +func (s *CreateTableInput) SetGlobalSecondaryIndexes(v []*GlobalSecondaryIndex) *CreateTableInput { + s.GlobalSecondaryIndexes = v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *CreateTableInput) SetKeySchema(v []*KeySchemaElement) *CreateTableInput { + s.KeySchema = v + return s +} + +// SetLocalSecondaryIndexes sets the LocalSecondaryIndexes field's value. +func (s *CreateTableInput) SetLocalSecondaryIndexes(v []*LocalSecondaryIndex) *CreateTableInput { + s.LocalSecondaryIndexes = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *CreateTableInput) SetProvisionedThroughput(v *ProvisionedThroughput) *CreateTableInput { + s.ProvisionedThroughput = v + return s +} + +// SetSSESpecification sets the SSESpecification field's value. 
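+//
+// Editor's note (illustrative, not generated code): server-side encryption can
+// be requested at table creation either by setting the SSESpecification field
+// directly or through this fluent setter, e.g. on a previously built
+// *dynamodb.CreateTableInput value:
+//
+//    input.SetSSESpecification(&dynamodb.SSESpecification{Enabled: aws.Bool(true)})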
+func (s *CreateTableInput) SetSSESpecification(v *SSESpecification) *CreateTableInput { + s.SSESpecification = v + return s +} + +// SetStreamSpecification sets the StreamSpecification field's value. +func (s *CreateTableInput) SetStreamSpecification(v *StreamSpecification) *CreateTableInput { + s.StreamSpecification = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *CreateTableInput) SetTableName(v string) *CreateTableInput { + s.TableName = &v + return s +} + +// Represents the output of a CreateTable operation. +type CreateTableOutput struct { + _ struct{} `type:"structure"` + + // Represents the properties of the table. + TableDescription *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s CreateTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTableOutput) GoString() string { + return s.String() +} + +// SetTableDescription sets the TableDescription field's value. +func (s *CreateTableOutput) SetTableDescription(v *TableDescription) *CreateTableOutput { + s.TableDescription = v + return s +} + +type DeleteBackupInput struct { + _ struct{} `type:"structure"` + + // The ARN associated with the backup. + // + // BackupArn is a required field + BackupArn *string `min:"37" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBackupInput"} + if s.BackupArn == nil { + invalidParams.Add(request.NewErrParamRequired("BackupArn")) + } + if s.BackupArn != nil && len(*s.BackupArn) < 37 { + invalidParams.Add(request.NewErrParamMinLen("BackupArn", 37)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupArn sets the BackupArn field's value. +func (s *DeleteBackupInput) SetBackupArn(v string) *DeleteBackupInput { + s.BackupArn = &v + return s +} + +type DeleteBackupOutput struct { + _ struct{} `type:"structure"` + + // Contains the description of the backup created for the table. + BackupDescription *BackupDescription `type:"structure"` +} + +// String returns the string representation +func (s DeleteBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBackupOutput) GoString() string { + return s.String() +} + +// SetBackupDescription sets the BackupDescription field's value. +func (s *DeleteBackupOutput) SetBackupDescription(v *BackupDescription) *DeleteBackupOutput { + s.BackupDescription = v + return s +} + +// Represents a global secondary index to be deleted from an existing table. +type DeleteGlobalSecondaryIndexAction struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index to be deleted. 
+ // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteGlobalSecondaryIndexAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGlobalSecondaryIndexAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteGlobalSecondaryIndexAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteGlobalSecondaryIndexAction"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *DeleteGlobalSecondaryIndexAction) SetIndexName(v string) *DeleteGlobalSecondaryIndexAction { + s.IndexName = &v + return s +} + +// Represents the input of a DeleteItem operation. +type DeleteItemInput struct { + _ struct{} `type:"structure"` + + // A condition that must be satisfied in order for a conditional DeleteItem + // to succeed. + // + // An expression can contain any of the following: + // + // * Functions: attribute_exists | attribute_not_exists | attribute_type + // | contains | begins_with | size + // + // These function names are case-sensitive. + // + // * Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN + // + // * Logical operators: AND | OR | NOT + // + // For more information on condition expressions, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ConditionExpression *string `type:"string"` + + // This is a legacy parameter. Use ConditionExpression instead. For more information, + // see ConditionalOperator (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ConditionalOperator.html) + // in the Amazon DynamoDB Developer Guide. + ConditionalOperator *string `type:"string" enum:"ConditionalOperator"` + + // This is a legacy parameter. Use ConditionExpression instead. For more information, + // see Expected (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.Expected.html) + // in the Amazon DynamoDB Developer Guide. + Expected map[string]*ExpectedAttributeValue `type:"map"` + + // One or more substitution tokens for attribute names in an expression. The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). 
To work around this, you could specify
+ // the following for ExpressionAttributeNames:
+ //
+ // * {"#P":"Percentile"}
+ //
+ // You could then use this substitution in an expression, as in this example:
+ //
+ // * #P = :val
+ //
+ // Tokens that begin with the : character are expression attribute values, which
+ // are placeholders for the actual value at runtime.
+ //
+ // For more information on expression attribute names, see Accessing Item Attributes
+ // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html)
+ // in the Amazon DynamoDB Developer Guide.
+ ExpressionAttributeNames map[string]*string `type:"map"`
+
+ // One or more values that can be substituted in an expression.
+ //
+ // Use the : (colon) character in an expression to dereference an attribute
+ // value. For example, suppose that you wanted to check whether the value of
+ // the ProductStatus attribute was one of the following:
+ //
+ // Available | Backordered | Discontinued
+ //
+ // You would first need to specify ExpressionAttributeValues as follows:
+ //
+ // { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"}
+ // }
+ //
+ // You could then use these values in an expression, such as this:
+ //
+ // ProductStatus IN (:avail, :back, :disc)
+ //
+ // For more information on expression attribute values, see Specifying Conditions
+ // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html)
+ // in the Amazon DynamoDB Developer Guide.
+ ExpressionAttributeValues map[string]*AttributeValue `type:"map"`
+
+ // A map of attribute names to AttributeValue objects, representing the primary
+ // key of the item to delete.
+ //
+ // For the primary key, you must provide all of the attributes. For example,
+ // with a simple primary key, you only need to provide a value for the partition
+ // key. For a composite primary key, you must provide values for both the partition
+ // key and the sort key.
+ //
+ // Key is a required field
+ Key map[string]*AttributeValue `type:"map" required:"true"`
+
+ // Determines the level of detail about provisioned throughput consumption that
+ // is returned in the response:
+ //
+ // * INDEXES - The response includes the aggregate ConsumedCapacity for the
+ // operation, together with ConsumedCapacity for each table and secondary
+ // index that was accessed.
+ //
+ // Note that some operations, such as GetItem and BatchGetItem, do not access
+ // any indexes at all. In these cases, specifying INDEXES will only return
+ // ConsumedCapacity information for table(s).
+ //
+ // * TOTAL - The response includes only the aggregate ConsumedCapacity for
+ // the operation.
+ //
+ // * NONE - No ConsumedCapacity details are included in the response.
+ ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"`
+
+ // Determines whether item collection metrics are returned. If set to SIZE,
+ // the response includes statistics about item collections, if any, that were
+ // modified during the operation. If set to NONE (the default), no statistics
+ // are returned.
+ ReturnItemCollectionMetrics *string `type:"string" enum:"ReturnItemCollectionMetrics"`
+
+ // Use ReturnValues if you want to get the item attributes as they appeared
+ // before they were deleted. For DeleteItem, the valid values are:
+ //
+ // * NONE - If ReturnValues is not specified, or if its value is NONE, then
+ // nothing is returned.
(This setting is the default for ReturnValues.) + // + // * ALL_OLD - The content of the old item is returned. + // + // The ReturnValues parameter is used by several DynamoDB operations; however, + // DeleteItem does not recognize any values other than NONE or ALL_OLD. + ReturnValues *string `type:"string" enum:"ReturnValue"` + + // The name of the table from which to delete the item. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteItemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteItemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteItemInput"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConditionExpression sets the ConditionExpression field's value. +func (s *DeleteItemInput) SetConditionExpression(v string) *DeleteItemInput { + s.ConditionExpression = &v + return s +} + +// SetConditionalOperator sets the ConditionalOperator field's value. +func (s *DeleteItemInput) SetConditionalOperator(v string) *DeleteItemInput { + s.ConditionalOperator = &v + return s +} + +// SetExpected sets the Expected field's value. +func (s *DeleteItemInput) SetExpected(v map[string]*ExpectedAttributeValue) *DeleteItemInput { + s.Expected = v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. +func (s *DeleteItemInput) SetExpressionAttributeNames(v map[string]*string) *DeleteItemInput { + s.ExpressionAttributeNames = v + return s +} + +// SetExpressionAttributeValues sets the ExpressionAttributeValues field's value. +func (s *DeleteItemInput) SetExpressionAttributeValues(v map[string]*AttributeValue) *DeleteItemInput { + s.ExpressionAttributeValues = v + return s +} + +// SetKey sets the Key field's value. +func (s *DeleteItemInput) SetKey(v map[string]*AttributeValue) *DeleteItemInput { + s.Key = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *DeleteItemInput) SetReturnConsumedCapacity(v string) *DeleteItemInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetReturnItemCollectionMetrics sets the ReturnItemCollectionMetrics field's value. +func (s *DeleteItemInput) SetReturnItemCollectionMetrics(v string) *DeleteItemInput { + s.ReturnItemCollectionMetrics = &v + return s +} + +// SetReturnValues sets the ReturnValues field's value. +func (s *DeleteItemInput) SetReturnValues(v string) *DeleteItemInput { + s.ReturnValues = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *DeleteItemInput) SetTableName(v string) *DeleteItemInput { + s.TableName = &v + return s +} + +// Represents the output of a DeleteItem operation. +type DeleteItemOutput struct { + _ struct{} `type:"structure"` + + // A map of attribute names to AttributeValue objects, representing the item + // as it appeared before the DeleteItem operation. 
This map appears in the response + // only if ReturnValues was specified as ALL_OLD in the request. + Attributes map[string]*AttributeValue `type:"map"` + + // The capacity units consumed by the DeleteItem operation. The data returned + // includes the total provisioned throughput consumed, along with statistics + // for the table and any indexes involved in the operation. ConsumedCapacity + // is only returned if the ReturnConsumedCapacity parameter was specified. For + // more information, see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // Information about item collections, if any, that were affected by the DeleteItem + // operation. ItemCollectionMetrics is only returned if the ReturnItemCollectionMetrics + // parameter was specified. If the table does not have any local secondary indexes, + // this information is not returned in the response. + // + // Each ItemCollectionMetrics element consists of: + // + // * ItemCollectionKey - The partition key value of the item collection. + // This is the same as the partition key value of the item itself. + // + // * SizeEstimateRangeGB - An estimate of item collection size, in gigabytes. + // This value is a two-element array containing a lower bound and an upper + // bound for the estimate. The estimate includes the size of all the items + // in the table, plus the size of all attributes projected into all of the + // local secondary indexes on that table. Use this estimate to measure whether + // a local secondary index is approaching its size limit. + // + // The estimate is subject to change over time; therefore, do not rely on the + // precision or accuracy of the estimate. + ItemCollectionMetrics *ItemCollectionMetrics `type:"structure"` +} + +// String returns the string representation +func (s DeleteItemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteItemOutput) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *DeleteItemOutput) SetAttributes(v map[string]*AttributeValue) *DeleteItemOutput { + s.Attributes = v + return s +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *DeleteItemOutput) SetConsumedCapacity(v *ConsumedCapacity) *DeleteItemOutput { + s.ConsumedCapacity = v + return s +} + +// SetItemCollectionMetrics sets the ItemCollectionMetrics field's value. +func (s *DeleteItemOutput) SetItemCollectionMetrics(v *ItemCollectionMetrics) *DeleteItemOutput { + s.ItemCollectionMetrics = v + return s +} + +// Represents a replica to be removed. +type DeleteReplicaAction struct { + _ struct{} `type:"structure"` + + // The region of the replica to be removed. + // + // RegionName is a required field + RegionName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteReplicaAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteReplicaAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteReplicaAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteReplicaAction"} + if s.RegionName == nil { + invalidParams.Add(request.NewErrParamRequired("RegionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRegionName sets the RegionName field's value. +func (s *DeleteReplicaAction) SetRegionName(v string) *DeleteReplicaAction { + s.RegionName = &v + return s +} + +// Represents a request to perform a DeleteItem operation on an item. +type DeleteRequest struct { + _ struct{} `type:"structure"` + + // A map of attribute name to attribute values, representing the primary key + // of the item to delete. All of the table's primary key attributes must be + // specified, and their data types must match those of the table's key schema. + // + // Key is a required field + Key map[string]*AttributeValue `type:"map" required:"true"` +} + +// String returns the string representation +func (s DeleteRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRequest) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *DeleteRequest) SetKey(v map[string]*AttributeValue) *DeleteRequest { + s.Key = v + return s +} + +// Represents the input of a DeleteTable operation. +type DeleteTableInput struct { + _ struct{} `type:"structure"` + + // The name of the table to delete. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTableInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTableName sets the TableName field's value. +func (s *DeleteTableInput) SetTableName(v string) *DeleteTableInput { + s.TableName = &v + return s +} + +// Represents the output of a DeleteTable operation. +type DeleteTableOutput struct { + _ struct{} `type:"structure"` + + // Represents the properties of a table. + TableDescription *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s DeleteTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTableOutput) GoString() string { + return s.String() +} + +// SetTableDescription sets the TableDescription field's value. +func (s *DeleteTableOutput) SetTableDescription(v *TableDescription) *DeleteTableOutput { + s.TableDescription = v + return s +} + +type DescribeBackupInput struct { + _ struct{} `type:"structure"` + + // The ARN associated with the backup. 
+ // + // BackupArn is a required field + BackupArn *string `min:"37" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeBackupInput"} + if s.BackupArn == nil { + invalidParams.Add(request.NewErrParamRequired("BackupArn")) + } + if s.BackupArn != nil && len(*s.BackupArn) < 37 { + invalidParams.Add(request.NewErrParamMinLen("BackupArn", 37)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupArn sets the BackupArn field's value. +func (s *DescribeBackupInput) SetBackupArn(v string) *DescribeBackupInput { + s.BackupArn = &v + return s +} + +type DescribeBackupOutput struct { + _ struct{} `type:"structure"` + + // Contains the description of the backup created for the table. + BackupDescription *BackupDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBackupOutput) GoString() string { + return s.String() +} + +// SetBackupDescription sets the BackupDescription field's value. +func (s *DescribeBackupOutput) SetBackupDescription(v *BackupDescription) *DescribeBackupOutput { + s.BackupDescription = v + return s +} + +type DescribeContinuousBackupsInput struct { + _ struct{} `type:"structure"` + + // Name of the table for which the customer wants to check the continuous backups + // and point in time recovery settings. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeContinuousBackupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeContinuousBackupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeContinuousBackupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeContinuousBackupsInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTableName sets the TableName field's value. +func (s *DescribeContinuousBackupsInput) SetTableName(v string) *DescribeContinuousBackupsInput { + s.TableName = &v + return s +} + +type DescribeContinuousBackupsOutput struct { + _ struct{} `type:"structure"` + + // ContinuousBackupsDescription can be one of the following : ENABLED, DISABLED. 
+ ContinuousBackupsDescription *ContinuousBackupsDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeContinuousBackupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeContinuousBackupsOutput) GoString() string { + return s.String() +} + +// SetContinuousBackupsDescription sets the ContinuousBackupsDescription field's value. +func (s *DescribeContinuousBackupsOutput) SetContinuousBackupsDescription(v *ContinuousBackupsDescription) *DescribeContinuousBackupsOutput { + s.ContinuousBackupsDescription = v + return s +} + +type DescribeGlobalTableInput struct { + _ struct{} `type:"structure"` + + // The name of the global table. + // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeGlobalTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGlobalTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeGlobalTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeGlobalTableInput"} + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *DescribeGlobalTableInput) SetGlobalTableName(v string) *DescribeGlobalTableInput { + s.GlobalTableName = &v + return s +} + +type DescribeGlobalTableOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of the global table. + GlobalTableDescription *GlobalTableDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeGlobalTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGlobalTableOutput) GoString() string { + return s.String() +} + +// SetGlobalTableDescription sets the GlobalTableDescription field's value. +func (s *DescribeGlobalTableOutput) SetGlobalTableDescription(v *GlobalTableDescription) *DescribeGlobalTableOutput { + s.GlobalTableDescription = v + return s +} + +type DescribeGlobalTableSettingsInput struct { + _ struct{} `type:"structure"` + + // The name of the global table to describe. + // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeGlobalTableSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGlobalTableSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeGlobalTableSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeGlobalTableSettingsInput"} + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *DescribeGlobalTableSettingsInput) SetGlobalTableName(v string) *DescribeGlobalTableSettingsInput { + s.GlobalTableName = &v + return s +} + +type DescribeGlobalTableSettingsOutput struct { + _ struct{} `type:"structure"` + + // The name of the global table. + GlobalTableName *string `min:"3" type:"string"` + + // The region specific settings for the global table. + ReplicaSettings []*ReplicaSettingsDescription `type:"list"` +} + +// String returns the string representation +func (s DescribeGlobalTableSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGlobalTableSettingsOutput) GoString() string { + return s.String() +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *DescribeGlobalTableSettingsOutput) SetGlobalTableName(v string) *DescribeGlobalTableSettingsOutput { + s.GlobalTableName = &v + return s +} + +// SetReplicaSettings sets the ReplicaSettings field's value. +func (s *DescribeGlobalTableSettingsOutput) SetReplicaSettings(v []*ReplicaSettingsDescription) *DescribeGlobalTableSettingsOutput { + s.ReplicaSettings = v + return s +} + +// Represents the input of a DescribeLimits operation. Has no content. +type DescribeLimitsInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DescribeLimitsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLimitsInput) GoString() string { + return s.String() +} + +// Represents the output of a DescribeLimits operation. +type DescribeLimitsOutput struct { + _ struct{} `type:"structure"` + + // The maximum total read capacity units that your account allows you to provision + // across all of your tables in this region. + AccountMaxReadCapacityUnits *int64 `min:"1" type:"long"` + + // The maximum total write capacity units that your account allows you to provision + // across all of your tables in this region. + AccountMaxWriteCapacityUnits *int64 `min:"1" type:"long"` + + // The maximum read capacity units that your account allows you to provision + // for a new table that you are creating in this region, including the read + // capacity units provisioned for its global secondary indexes (GSIs). + TableMaxReadCapacityUnits *int64 `min:"1" type:"long"` + + // The maximum write capacity units that your account allows you to provision + // for a new table that you are creating in this region, including the write + // capacity units provisioned for its global secondary indexes (GSIs). + TableMaxWriteCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s DescribeLimitsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLimitsOutput) GoString() string { + return s.String() +} + +// SetAccountMaxReadCapacityUnits sets the AccountMaxReadCapacityUnits field's value. 
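+//
+// Illustrative sketch (editor's addition, not part of the generated API): reading the
+// account- and table-level limits, assuming an already-initialized *dynamodb.DynamoDB
+// client named svc.
+//
+//    out, err := svc.DescribeLimits(&dynamodb.DescribeLimitsInput{})
+//    if err == nil {
+//        fmt.Println("account max read capacity:", aws.Int64Value(out.AccountMaxReadCapacityUnits))
+//        fmt.Println("table max write capacity:", aws.Int64Value(out.TableMaxWriteCapacityUnits))
+//    }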
+func (s *DescribeLimitsOutput) SetAccountMaxReadCapacityUnits(v int64) *DescribeLimitsOutput { + s.AccountMaxReadCapacityUnits = &v + return s +} + +// SetAccountMaxWriteCapacityUnits sets the AccountMaxWriteCapacityUnits field's value. +func (s *DescribeLimitsOutput) SetAccountMaxWriteCapacityUnits(v int64) *DescribeLimitsOutput { + s.AccountMaxWriteCapacityUnits = &v + return s +} + +// SetTableMaxReadCapacityUnits sets the TableMaxReadCapacityUnits field's value. +func (s *DescribeLimitsOutput) SetTableMaxReadCapacityUnits(v int64) *DescribeLimitsOutput { + s.TableMaxReadCapacityUnits = &v + return s +} + +// SetTableMaxWriteCapacityUnits sets the TableMaxWriteCapacityUnits field's value. +func (s *DescribeLimitsOutput) SetTableMaxWriteCapacityUnits(v int64) *DescribeLimitsOutput { + s.TableMaxWriteCapacityUnits = &v + return s +} + +// Represents the input of a DescribeTable operation. +type DescribeTableInput struct { + _ struct{} `type:"structure"` + + // The name of the table to describe. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTableInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTableName sets the TableName field's value. +func (s *DescribeTableInput) SetTableName(v string) *DescribeTableInput { + s.TableName = &v + return s +} + +// Represents the output of a DescribeTable operation. +type DescribeTableOutput struct { + _ struct{} `type:"structure"` + + // The properties of the table. + Table *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTableOutput) GoString() string { + return s.String() +} + +// SetTable sets the Table field's value. +func (s *DescribeTableOutput) SetTable(v *TableDescription) *DescribeTableOutput { + s.Table = v + return s +} + +type DescribeTimeToLiveInput struct { + _ struct{} `type:"structure"` + + // The name of the table to be described. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeTimeToLiveInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTimeToLiveInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeTimeToLiveInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTimeToLiveInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTableName sets the TableName field's value. +func (s *DescribeTimeToLiveInput) SetTableName(v string) *DescribeTimeToLiveInput { + s.TableName = &v + return s +} + +type DescribeTimeToLiveOutput struct { + _ struct{} `type:"structure"` + + // The description of the Time to Live (TTL) status on the specified table. + TimeToLiveDescription *TimeToLiveDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeTimeToLiveOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTimeToLiveOutput) GoString() string { + return s.String() +} + +// SetTimeToLiveDescription sets the TimeToLiveDescription field's value. +func (s *DescribeTimeToLiveOutput) SetTimeToLiveDescription(v *TimeToLiveDescription) *DescribeTimeToLiveOutput { + s.TimeToLiveDescription = v + return s +} + +// Represents a condition to be compared with an attribute value. This condition +// can be used with DeleteItem, PutItem or UpdateItem operations; if the comparison +// evaluates to true, the operation succeeds; if not, the operation fails. You +// can use ExpectedAttributeValue in one of two different ways: +// +// * Use AttributeValueList to specify one or more values to compare against +// an attribute. Use ComparisonOperator to specify how you want to perform +// the comparison. If the comparison evaluates to true, then the conditional +// operation succeeds. +// +// * Use Value to specify a value that DynamoDB will compare against an attribute. +// If the values match, then ExpectedAttributeValue evaluates to true and +// the conditional operation succeeds. Optionally, you can also set Exists +// to false, indicating that you do not expect to find the attribute value +// in the table. In this case, the conditional operation succeeds only if +// the comparison evaluates to false. +// +// Value and Exists are incompatible with AttributeValueList and ComparisonOperator. +// Note that if you use both sets of parameters at once, DynamoDB will return +// a ValidationException exception. +type ExpectedAttributeValue struct { + _ struct{} `type:"structure"` + + // One or more values to evaluate against the supplied attribute. The number + // of values in the list depends on the ComparisonOperator being used. + // + // For type Number, value comparisons are numeric. + // + // String value comparisons for greater than, equals, or less than are based + // on ASCII character code values. For example, a is greater than A, and a is + // greater than B. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters + // (http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters). + // + // For Binary, DynamoDB treats each byte of the binary data as unsigned when + // it compares binary values. + // + // For information on specifying data types in JSON, see JSON Data Format (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataFormat.html) + // in the Amazon DynamoDB Developer Guide. 
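+ //
+ // Illustrative sketch (editor's addition, not part of the generated API): the two
+ // usage modes described above, for a hypothetical Price attribute. Use either Value
+ // (optionally with Exists), or AttributeValueList with ComparisonOperator, but not
+ // both in the same ExpectedAttributeValue:
+ //
+ //    // Value form: succeed only if Price currently equals 100.
+ //    expected := map[string]*dynamodb.ExpectedAttributeValue{
+ //        "Price": {Value: &dynamodb.AttributeValue{N: aws.String("100")}},
+ //    }
+ //
+ //    // Comparison form: succeed only if Price is less than 500.
+ //    expected = map[string]*dynamodb.ExpectedAttributeValue{
+ //        "Price": {
+ //            ComparisonOperator: aws.String("LT"),
+ //            AttributeValueList: []*dynamodb.AttributeValue{{N: aws.String("500")}},
+ //        },
+ //    }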
+ AttributeValueList []*AttributeValue `type:"list"`
+
+ // A comparator for evaluating attributes in the AttributeValueList. For example,
+ // equals, greater than, less than, etc.
+ //
+ // The following comparison operators are available:
+ //
+ // EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS |
+ // BEGINS_WITH | IN | BETWEEN
+ //
+ // The following are descriptions of each comparison operator.
+ //
+ // * EQ : Equal. EQ is supported for all data types, including lists and
+ // maps.
+ //
+ // AttributeValueList can contain only one AttributeValue element of type String,
+ // Number, Binary, String Set, Number Set, or Binary Set. If an item contains
+ // an AttributeValue element of a different type than the one provided in
+ // the request, the value does not match. For example, {"S":"6"} does not
+ // equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.
+ //
+ // * NE : Not equal. NE is supported for all data types, including lists
+ // and maps.
+ //
+ // * AttributeValueList can contain only one AttributeValue of type String,
+ // Number, Binary, String Set, Number Set, or Binary Set. If an item contains
+ // an AttributeValue of a different type than the one provided in the request,
+ // the value does not match. For example, {"S":"6"} does not equal {"N":"6"}.
+ // Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.
+ //
+ // * LE : Less than or equal.
+ //
+ // AttributeValueList can contain only one AttributeValue element of type String,
+ // Number, or Binary (not a set type). If an item contains an AttributeValue
+ // element of a different type than the one provided in the request, the value
+ // does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"}
+ // does not compare to {"NS":["6", "2", "1"]}.
+ //
+ // * LT : Less than.
+ //
+ // AttributeValueList can contain only one AttributeValue element of type String,
+ // Number, or Binary (not a set type). If an item contains an AttributeValue
+ // element of a different type than the one provided in the request, the value
+ // does not match. For example, {"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"}
+ // does not compare to {"NS":["6", "2", "1"]}.
+ ComparisonOperator *string `type:"string" enum:"ComparisonOperator"`
+
+ // Causes DynamoDB to evaluate the value before attempting a conditional operation:
+ //
+ // * If Exists is true, DynamoDB will check to see if that attribute value
+ // already exists in the table. If it is found, then the operation succeeds.
+ // If it is not found, the operation fails with a ConditionalCheckFailedException.
+ //
+ // * If Exists is false, DynamoDB assumes that the attribute value does not
+ // exist in the table. If in fact the value does not exist, then the assumption
+ // is valid and the operation succeeds. If the value is found, despite the
+ // assumption that it does not exist, the operation fails with a ConditionalCheckFailedException.
+ //
+ // The default setting for Exists is true. If you supply a Value all by itself,
+ // DynamoDB assumes the attribute exists: You don't have to set Exists to true,
+ // because it is implied.
+ //
+ // DynamoDB returns a ValidationException if:
+ //
+ // * Exists is true but there is no Value to check. (You expect a value to
+ // exist, but don't specify what that value is.)
+ //
+ // * Exists is false but you also provide a Value. (You cannot expect an
+ // attribute to have a value, while also expecting it not to exist.)
+ Exists *bool `type:"boolean"` + + // Represents the data for the expected attribute. + // + // Each attribute value is described as a name-value pair. The name is the data + // type, and the value is the data itself. + // + // For more information, see Data Types (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes) + // in the Amazon DynamoDB Developer Guide. + Value *AttributeValue `type:"structure"` +} + +// String returns the string representation +func (s ExpectedAttributeValue) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExpectedAttributeValue) GoString() string { + return s.String() +} + +// SetAttributeValueList sets the AttributeValueList field's value. +func (s *ExpectedAttributeValue) SetAttributeValueList(v []*AttributeValue) *ExpectedAttributeValue { + s.AttributeValueList = v + return s +} + +// SetComparisonOperator sets the ComparisonOperator field's value. +func (s *ExpectedAttributeValue) SetComparisonOperator(v string) *ExpectedAttributeValue { + s.ComparisonOperator = &v + return s +} + +// SetExists sets the Exists field's value. +func (s *ExpectedAttributeValue) SetExists(v bool) *ExpectedAttributeValue { + s.Exists = &v + return s +} + +// SetValue sets the Value field's value. +func (s *ExpectedAttributeValue) SetValue(v *AttributeValue) *ExpectedAttributeValue { + s.Value = v + return s +} + +// Represents the input of a GetItem operation. +type GetItemInput struct { + _ struct{} `type:"structure"` + + // This is a legacy parameter. Use ProjectionExpression instead. For more information, + // see AttributesToGet (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.AttributesToGet.html) + // in the Amazon DynamoDB Developer Guide. + AttributesToGet []*string `min:"1" type:"list"` + + // Determines the read consistency model: If set to true, then the operation + // uses strongly consistent reads; otherwise, the operation uses eventually + // consistent reads. + ConsistentRead *bool `type:"boolean"` + + // One or more substitution tokens for attribute names in an expression. The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could specify + // the following for ExpressionAttributeNames: + // + // * {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // * #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. 
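+ //
+ // Illustrative sketch (editor's addition, not part of the generated API): dereferencing
+ // the reserved word Percentile through a placeholder in a projection expression; the
+ // table and key here are hypothetical.
+ //
+ //    input := &dynamodb.GetItemInput{
+ //        TableName:                aws.String("Stats"),
+ //        Key:                      map[string]*dynamodb.AttributeValue{"Id": {S: aws.String("42")}},
+ //        ExpressionAttributeNames: map[string]*string{"#P": aws.String("Percentile")},
+ //        ProjectionExpression:     aws.String("#P"),
+ //    }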
+ // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeNames map[string]*string `type:"map"` + + // A map of attribute names to AttributeValue objects, representing the primary + // key of the item to retrieve. + // + // For the primary key, you must provide all of the attributes. For example, + // with a simple primary key, you only need to provide a value for the partition + // key. For a composite primary key, you must provide values for both the partition + // key and the sort key. + // + // Key is a required field + Key map[string]*AttributeValue `type:"map" required:"true"` + + // A string that identifies one or more attributes to retrieve from the table. + // These attributes can include scalars, sets, or elements of a JSON document. + // The attributes in the expression must be separated by commas. + // + // If no attribute names are specified, then all attributes will be returned. + // If any of the requested attributes are not found, they will not appear in + // the result. + // + // For more information, see Accessing Item Attributes (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ProjectionExpression *string `type:"string"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` + + // The name of the table containing the requested item. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetItemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetItemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetItemInput"} + if s.AttributesToGet != nil && len(s.AttributesToGet) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttributesToGet", 1)) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributesToGet sets the AttributesToGet field's value. 
+func (s *GetItemInput) SetAttributesToGet(v []*string) *GetItemInput { + s.AttributesToGet = v + return s +} + +// SetConsistentRead sets the ConsistentRead field's value. +func (s *GetItemInput) SetConsistentRead(v bool) *GetItemInput { + s.ConsistentRead = &v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. +func (s *GetItemInput) SetExpressionAttributeNames(v map[string]*string) *GetItemInput { + s.ExpressionAttributeNames = v + return s +} + +// SetKey sets the Key field's value. +func (s *GetItemInput) SetKey(v map[string]*AttributeValue) *GetItemInput { + s.Key = v + return s +} + +// SetProjectionExpression sets the ProjectionExpression field's value. +func (s *GetItemInput) SetProjectionExpression(v string) *GetItemInput { + s.ProjectionExpression = &v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *GetItemInput) SetReturnConsumedCapacity(v string) *GetItemInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *GetItemInput) SetTableName(v string) *GetItemInput { + s.TableName = &v + return s +} + +// Represents the output of a GetItem operation. +type GetItemOutput struct { + _ struct{} `type:"structure"` + + // The capacity units consumed by the GetItem operation. The data returned includes + // the total provisioned throughput consumed, along with statistics for the + // table and any indexes involved in the operation. ConsumedCapacity is only + // returned if the ReturnConsumedCapacity parameter was specified. For more + // information, see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // A map of attribute names to AttributeValue objects, as specified by ProjectionExpression. + Item map[string]*AttributeValue `type:"map"` +} + +// String returns the string representation +func (s GetItemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetItemOutput) GoString() string { + return s.String() +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *GetItemOutput) SetConsumedCapacity(v *ConsumedCapacity) *GetItemOutput { + s.ConsumedCapacity = v + return s +} + +// SetItem sets the Item field's value. +func (s *GetItemOutput) SetItem(v map[string]*AttributeValue) *GetItemOutput { + s.Item = v + return s +} + +// Represents the properties of a global secondary index. +type GlobalSecondaryIndex struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The complete key schema for a global secondary index, which consists of one + // or more pairs of attribute names and key types: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. 
The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + // + // KeySchema is a required field + KeySchema []*KeySchemaElement `min:"1" type:"list" required:"true"` + + // Represents attributes that are copied (projected) from the table into the + // global secondary index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. + // + // Projection is a required field + Projection *Projection `type:"structure" required:"true"` + + // Represents the provisioned throughput settings for the specified global secondary + // index. + // + // For current minimum and maximum provisioned throughput values, see Limits + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) + // in the Amazon DynamoDB Developer Guide. + // + // ProvisionedThroughput is a required field + ProvisionedThroughput *ProvisionedThroughput `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GlobalSecondaryIndex) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalSecondaryIndex) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GlobalSecondaryIndex) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlobalSecondaryIndex"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.KeySchema == nil { + invalidParams.Add(request.NewErrParamRequired("KeySchema")) + } + if s.KeySchema != nil && len(s.KeySchema) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KeySchema", 1)) + } + if s.Projection == nil { + invalidParams.Add(request.NewErrParamRequired("Projection")) + } + if s.ProvisionedThroughput == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisionedThroughput")) + } + if s.KeySchema != nil { + for i, v := range s.KeySchema { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "KeySchema", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Projection != nil { + if err := s.Projection.Validate(); err != nil { + invalidParams.AddNested("Projection", err.(request.ErrInvalidParams)) + } + } + if s.ProvisionedThroughput != nil { + if err := s.ProvisionedThroughput.Validate(); err != nil { + invalidParams.AddNested("ProvisionedThroughput", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *GlobalSecondaryIndex) SetIndexName(v string) *GlobalSecondaryIndex { + s.IndexName = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *GlobalSecondaryIndex) SetKeySchema(v []*KeySchemaElement) *GlobalSecondaryIndex { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *GlobalSecondaryIndex) SetProjection(v *Projection) *GlobalSecondaryIndex { + s.Projection = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. 
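+//
+// Illustrative sketch (editor's addition, not part of the generated API): a global
+// secondary index definition as a caller might pass to CreateTable; the index and
+// attribute names are hypothetical.
+//
+//    gsi := &dynamodb.GlobalSecondaryIndex{
+//        IndexName: aws.String("GenreIndex"),
+//        KeySchema: []*dynamodb.KeySchemaElement{
+//            {AttributeName: aws.String("Genre"), KeyType: aws.String("HASH")},
+//            {AttributeName: aws.String("Title"), KeyType: aws.String("RANGE")},
+//        },
+//        Projection: &dynamodb.Projection{ProjectionType: aws.String("ALL")},
+//        ProvisionedThroughput: &dynamodb.ProvisionedThroughput{
+//            ReadCapacityUnits:  aws.Int64(5),
+//            WriteCapacityUnits: aws.Int64(5),
+//        },
+//    }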
+func (s *GlobalSecondaryIndex) SetProvisionedThroughput(v *ProvisionedThroughput) *GlobalSecondaryIndex { + s.ProvisionedThroughput = v + return s +} + +// Represents the properties of a global secondary index. +type GlobalSecondaryIndexDescription struct { + _ struct{} `type:"structure"` + + // Indicates whether the index is currently backfilling. Backfilling is the + // process of reading items from the table and determining whether they can + // be added to the index. (Not all items will qualify: For example, a partition + // key cannot have any duplicate values.) If an item can be added to the index, + // DynamoDB will do so. After all items have been processed, the backfilling + // operation is complete and Backfilling is false. + // + // For indexes that were created during a CreateTable operation, the Backfilling + // attribute does not appear in the DescribeTable output. + Backfilling *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) that uniquely identifies the index. + IndexArn *string `type:"string"` + + // The name of the global secondary index. + IndexName *string `min:"3" type:"string"` + + // The total size of the specified index, in bytes. DynamoDB updates this value + // approximately every six hours. Recent changes might not be reflected in this + // value. + IndexSizeBytes *int64 `type:"long"` + + // The current state of the global secondary index: + // + // * CREATING - The index is being created. + // + // * UPDATING - The index is being updated. + // + // * DELETING - The index is being deleted. + // + // * ACTIVE - The index is ready for use. + IndexStatus *string `type:"string" enum:"IndexStatus"` + + // The number of items in the specified index. DynamoDB updates this value approximately + // every six hours. Recent changes might not be reflected in this value. + ItemCount *int64 `type:"long"` + + // The complete key schema for a global secondary index, which consists of one + // or more pairs of attribute names and key types: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + KeySchema []*KeySchemaElement `min:"1" type:"list"` + + // Represents attributes that are copied (projected) from the table into the + // global secondary index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. + Projection *Projection `type:"structure"` + + // Represents the provisioned throughput settings for the specified global secondary + // index. + // + // For current minimum and maximum provisioned throughput values, see Limits + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) + // in the Amazon DynamoDB Developer Guide. 
+ ProvisionedThroughput *ProvisionedThroughputDescription `type:"structure"` +} + +// String returns the string representation +func (s GlobalSecondaryIndexDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalSecondaryIndexDescription) GoString() string { + return s.String() +} + +// SetBackfilling sets the Backfilling field's value. +func (s *GlobalSecondaryIndexDescription) SetBackfilling(v bool) *GlobalSecondaryIndexDescription { + s.Backfilling = &v + return s +} + +// SetIndexArn sets the IndexArn field's value. +func (s *GlobalSecondaryIndexDescription) SetIndexArn(v string) *GlobalSecondaryIndexDescription { + s.IndexArn = &v + return s +} + +// SetIndexName sets the IndexName field's value. +func (s *GlobalSecondaryIndexDescription) SetIndexName(v string) *GlobalSecondaryIndexDescription { + s.IndexName = &v + return s +} + +// SetIndexSizeBytes sets the IndexSizeBytes field's value. +func (s *GlobalSecondaryIndexDescription) SetIndexSizeBytes(v int64) *GlobalSecondaryIndexDescription { + s.IndexSizeBytes = &v + return s +} + +// SetIndexStatus sets the IndexStatus field's value. +func (s *GlobalSecondaryIndexDescription) SetIndexStatus(v string) *GlobalSecondaryIndexDescription { + s.IndexStatus = &v + return s +} + +// SetItemCount sets the ItemCount field's value. +func (s *GlobalSecondaryIndexDescription) SetItemCount(v int64) *GlobalSecondaryIndexDescription { + s.ItemCount = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *GlobalSecondaryIndexDescription) SetKeySchema(v []*KeySchemaElement) *GlobalSecondaryIndexDescription { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *GlobalSecondaryIndexDescription) SetProjection(v *Projection) *GlobalSecondaryIndexDescription { + s.Projection = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *GlobalSecondaryIndexDescription) SetProvisionedThroughput(v *ProvisionedThroughputDescription) *GlobalSecondaryIndexDescription { + s.ProvisionedThroughput = v + return s +} + +// Represents the properties of a global secondary index for the table when +// the backup was created. +type GlobalSecondaryIndexInfo struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. + IndexName *string `min:"3" type:"string"` + + // The complete key schema for a global secondary index, which consists of one + // or more pairs of attribute names and key types: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + KeySchema []*KeySchemaElement `min:"1" type:"list"` + + // Represents attributes that are copied (projected) from the table into the + // global secondary index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. 
+ Projection *Projection `type:"structure"` + + // Represents the provisioned throughput settings for the specified global secondary + // index. + ProvisionedThroughput *ProvisionedThroughput `type:"structure"` +} + +// String returns the string representation +func (s GlobalSecondaryIndexInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalSecondaryIndexInfo) GoString() string { + return s.String() +} + +// SetIndexName sets the IndexName field's value. +func (s *GlobalSecondaryIndexInfo) SetIndexName(v string) *GlobalSecondaryIndexInfo { + s.IndexName = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *GlobalSecondaryIndexInfo) SetKeySchema(v []*KeySchemaElement) *GlobalSecondaryIndexInfo { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *GlobalSecondaryIndexInfo) SetProjection(v *Projection) *GlobalSecondaryIndexInfo { + s.Projection = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *GlobalSecondaryIndexInfo) SetProvisionedThroughput(v *ProvisionedThroughput) *GlobalSecondaryIndexInfo { + s.ProvisionedThroughput = v + return s +} + +// Represents one of the following: +// +// * A new global secondary index to be added to an existing table. +// +// * New provisioned throughput parameters for an existing global secondary +// index. +// +// * An existing global secondary index to be removed from an existing table. +type GlobalSecondaryIndexUpdate struct { + _ struct{} `type:"structure"` + + // The parameters required for creating a global secondary index on an existing + // table: + // + // * IndexName + // + // * KeySchema + // + // * AttributeDefinitions + // + // * Projection + // + // * ProvisionedThroughput + Create *CreateGlobalSecondaryIndexAction `type:"structure"` + + // The name of an existing global secondary index to be removed. + Delete *DeleteGlobalSecondaryIndexAction `type:"structure"` + + // The name of an existing global secondary index, along with new provisioned + // throughput settings to be applied to that index. + Update *UpdateGlobalSecondaryIndexAction `type:"structure"` +} + +// String returns the string representation +func (s GlobalSecondaryIndexUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalSecondaryIndexUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GlobalSecondaryIndexUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlobalSecondaryIndexUpdate"} + if s.Create != nil { + if err := s.Create.Validate(); err != nil { + invalidParams.AddNested("Create", err.(request.ErrInvalidParams)) + } + } + if s.Delete != nil { + if err := s.Delete.Validate(); err != nil { + invalidParams.AddNested("Delete", err.(request.ErrInvalidParams)) + } + } + if s.Update != nil { + if err := s.Update.Validate(); err != nil { + invalidParams.AddNested("Update", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCreate sets the Create field's value. +func (s *GlobalSecondaryIndexUpdate) SetCreate(v *CreateGlobalSecondaryIndexAction) *GlobalSecondaryIndexUpdate { + s.Create = v + return s +} + +// SetDelete sets the Delete field's value. 
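+
+// Illustrative sketch (not part of the generated API): using a
+// GlobalSecondaryIndexUpdate to raise the provisioned throughput of an
+// existing index through UpdateTable. The table name, index name, and
+// capacity values are hypothetical placeholders.
+func exampleUpdateIndexThroughput(svc *DynamoDB) error {
+	_, err := svc.UpdateTable(&UpdateTableInput{
+		TableName: aws.String("example-table"),
+		GlobalSecondaryIndexUpdates: []*GlobalSecondaryIndexUpdate{
+			{
+				Update: &UpdateGlobalSecondaryIndexAction{
+					IndexName: aws.String("GSI1"),
+					ProvisionedThroughput: &ProvisionedThroughput{
+						ReadCapacityUnits:  aws.Int64(10),
+						WriteCapacityUnits: aws.Int64(10),
+					},
+				},
+			},
+		},
+	})
+	return err
+}
+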
+func (s *GlobalSecondaryIndexUpdate) SetDelete(v *DeleteGlobalSecondaryIndexAction) *GlobalSecondaryIndexUpdate { + s.Delete = v + return s +} + +// SetUpdate sets the Update field's value. +func (s *GlobalSecondaryIndexUpdate) SetUpdate(v *UpdateGlobalSecondaryIndexAction) *GlobalSecondaryIndexUpdate { + s.Update = v + return s +} + +// Represents the properties of a global table. +type GlobalTable struct { + _ struct{} `type:"structure"` + + // The global table name. + GlobalTableName *string `min:"3" type:"string"` + + // The regions where the global table has replicas. + ReplicationGroup []*Replica `type:"list"` +} + +// String returns the string representation +func (s GlobalTable) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalTable) GoString() string { + return s.String() +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *GlobalTable) SetGlobalTableName(v string) *GlobalTable { + s.GlobalTableName = &v + return s +} + +// SetReplicationGroup sets the ReplicationGroup field's value. +func (s *GlobalTable) SetReplicationGroup(v []*Replica) *GlobalTable { + s.ReplicationGroup = v + return s +} + +// Contains details about the global table. +type GlobalTableDescription struct { + _ struct{} `type:"structure"` + + // The creation time of the global table. + CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The unique identifier of the global table. + GlobalTableArn *string `type:"string"` + + // The global table name. + GlobalTableName *string `min:"3" type:"string"` + + // The current state of the global table: + // + // * CREATING - The global table is being created. + // + // * UPDATING - The global table is being updated. + // + // * DELETING - The global table is being deleted. + // + // * ACTIVE - The global table is ready for use. + GlobalTableStatus *string `type:"string" enum:"GlobalTableStatus"` + + // The regions where the global table has replicas. + ReplicationGroup []*ReplicaDescription `type:"list"` +} + +// String returns the string representation +func (s GlobalTableDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalTableDescription) GoString() string { + return s.String() +} + +// SetCreationDateTime sets the CreationDateTime field's value. +func (s *GlobalTableDescription) SetCreationDateTime(v time.Time) *GlobalTableDescription { + s.CreationDateTime = &v + return s +} + +// SetGlobalTableArn sets the GlobalTableArn field's value. +func (s *GlobalTableDescription) SetGlobalTableArn(v string) *GlobalTableDescription { + s.GlobalTableArn = &v + return s +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *GlobalTableDescription) SetGlobalTableName(v string) *GlobalTableDescription { + s.GlobalTableName = &v + return s +} + +// SetGlobalTableStatus sets the GlobalTableStatus field's value. +func (s *GlobalTableDescription) SetGlobalTableStatus(v string) *GlobalTableDescription { + s.GlobalTableStatus = &v + return s +} + +// SetReplicationGroup sets the ReplicationGroup field's value. +func (s *GlobalTableDescription) SetReplicationGroup(v []*ReplicaDescription) *GlobalTableDescription { + s.ReplicationGroup = v + return s +} + +// Represents the settings of a global secondary index for a global table that +// will be modified. 
+type GlobalTableGlobalSecondaryIndexSettingsUpdate struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + ProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s GlobalTableGlobalSecondaryIndexSettingsUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalTableGlobalSecondaryIndexSettingsUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlobalTableGlobalSecondaryIndexSettingsUpdate"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.ProvisionedWriteCapacityUnits != nil && *s.ProvisionedWriteCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ProvisionedWriteCapacityUnits", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) SetIndexName(v string) *GlobalTableGlobalSecondaryIndexSettingsUpdate { + s.IndexName = &v + return s +} + +// SetProvisionedWriteCapacityUnits sets the ProvisionedWriteCapacityUnits field's value. +func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) SetProvisionedWriteCapacityUnits(v int64) *GlobalTableGlobalSecondaryIndexSettingsUpdate { + s.ProvisionedWriteCapacityUnits = &v + return s +} + +// Information about item collections, if any, that were affected by the operation. +// ItemCollectionMetrics is only returned if the request asked for it. If the +// table does not have any local secondary indexes, this information is not +// returned in the response. +type ItemCollectionMetrics struct { + _ struct{} `type:"structure"` + + // The partition key value of the item collection. This value is the same as + // the partition key value of the item. + ItemCollectionKey map[string]*AttributeValue `type:"map"` + + // An estimate of item collection size, in gigabytes. This value is a two-element + // array containing a lower bound and an upper bound for the estimate. The estimate + // includes the size of all the items in the table, plus the size of all attributes + // projected into all of the local secondary indexes on that table. Use this + // estimate to measure whether a local secondary index is approaching its size + // limit. + // + // The estimate is subject to change over time; therefore, do not rely on the + // precision or accuracy of the estimate. + SizeEstimateRangeGB []*float64 `type:"list"` +} + +// String returns the string representation +func (s ItemCollectionMetrics) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ItemCollectionMetrics) GoString() string { + return s.String() +} + +// SetItemCollectionKey sets the ItemCollectionKey field's value. 
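+
+// Illustrative sketch (not part of the generated API): inspecting the
+// ItemCollectionMetrics returned by a write operation to decide whether an
+// item collection is getting close to the 10 GB size limit that applies to
+// tables with local secondary indexes. The 8 GB warning threshold is an
+// arbitrary example value.
+func exampleItemCollectionNearLimit(m *ItemCollectionMetrics) bool {
+	if m == nil || len(m.SizeEstimateRangeGB) < 2 {
+		return false
+	}
+	// SizeEstimateRangeGB is a two-element [lower, upper] estimate in gigabytes.
+	return aws.Float64Value(m.SizeEstimateRangeGB[1]) > 8.0
+}
+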
+func (s *ItemCollectionMetrics) SetItemCollectionKey(v map[string]*AttributeValue) *ItemCollectionMetrics { + s.ItemCollectionKey = v + return s +} + +// SetSizeEstimateRangeGB sets the SizeEstimateRangeGB field's value. +func (s *ItemCollectionMetrics) SetSizeEstimateRangeGB(v []*float64) *ItemCollectionMetrics { + s.SizeEstimateRangeGB = v + return s +} + +// Represents a single element of a key schema. A key schema specifies the attributes +// that make up the primary key of a table, or the key attributes of an index. +// +// A KeySchemaElement represents exactly one attribute of the primary key. For +// example, a simple primary key would be represented by one KeySchemaElement +// (for the partition key). A composite primary key would require one KeySchemaElement +// for the partition key, and another KeySchemaElement for the sort key. +// +// A KeySchemaElement must be a scalar, top-level attribute (not a nested attribute). +// The data type must be one of String, Number, or Binary. The attribute cannot +// be nested within a List or a Map. +type KeySchemaElement struct { + _ struct{} `type:"structure"` + + // The name of a key attribute. + // + // AttributeName is a required field + AttributeName *string `min:"1" type:"string" required:"true"` + + // The role that this key attribute will assume: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + // + // KeyType is a required field + KeyType *string `type:"string" required:"true" enum:"KeyType"` +} + +// String returns the string representation +func (s KeySchemaElement) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KeySchemaElement) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KeySchemaElement) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KeySchemaElement"} + if s.AttributeName == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeName")) + } + if s.AttributeName != nil && len(*s.AttributeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttributeName", 1)) + } + if s.KeyType == nil { + invalidParams.Add(request.NewErrParamRequired("KeyType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeName sets the AttributeName field's value. +func (s *KeySchemaElement) SetAttributeName(v string) *KeySchemaElement { + s.AttributeName = &v + return s +} + +// SetKeyType sets the KeyType field's value. +func (s *KeySchemaElement) SetKeyType(v string) *KeySchemaElement { + s.KeyType = &v + return s +} + +// Represents a set of primary keys and, for each key, the attributes to retrieve +// from the table. +// +// For each primary key, you must provide all of the key attributes. For example, +// with a simple primary key, you only need to provide the partition key. For +// a composite primary key, you must provide both the partition key and the +// sort key. 
+type KeysAndAttributes struct { + _ struct{} `type:"structure"` + + // This is a legacy parameter. Use ProjectionExpression instead. For more information, + // see Legacy Conditional Parameters (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.html) + // in the Amazon DynamoDB Developer Guide. + AttributesToGet []*string `min:"1" type:"list"` + + // The consistency of a read operation. If set to true, then a strongly consistent + // read is used; otherwise, an eventually consistent read is used. + ConsistentRead *bool `type:"boolean"` + + // One or more substitution tokens for attribute names in an expression. The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could specify + // the following for ExpressionAttributeNames: + // + // * {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // * #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. + // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeNames map[string]*string `type:"map"` + + // The primary key attribute values that define the items and the attributes + // associated with the items. + // + // Keys is a required field + Keys []map[string]*AttributeValue `min:"1" type:"list" required:"true"` + + // A string that identifies one or more attributes to retrieve from the table. + // These attributes can include scalars, sets, or elements of a JSON document. + // The attributes in the ProjectionExpression must be separated by commas. + // + // If no attribute names are specified, then all attributes will be returned. + // If any of the requested attributes are not found, they will not appear in + // the result. + // + // For more information, see Accessing Item Attributes (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ProjectionExpression *string `type:"string"` +} + +// String returns the string representation +func (s KeysAndAttributes) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KeysAndAttributes) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
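+
+// Illustrative sketch (not part of the generated API): using KeysAndAttributes
+// in a BatchGetItem request, with an ExpressionAttributeNames placeholder (#P)
+// for the reserved word "Percentile" as described above. The table name and
+// key attribute ("id") are hypothetical placeholders.
+func exampleBatchGet(svc *DynamoDB) (*BatchGetItemOutput, error) {
+	return svc.BatchGetItem(&BatchGetItemInput{
+		RequestItems: map[string]*KeysAndAttributes{
+			"example-table": {
+				Keys: []map[string]*AttributeValue{
+					{"id": {S: aws.String("item-1")}},
+					{"id": {S: aws.String("item-2")}},
+				},
+				ProjectionExpression:     aws.String("id, #P"),
+				ExpressionAttributeNames: map[string]*string{"#P": aws.String("Percentile")},
+				ConsistentRead:           aws.Bool(true),
+			},
+		},
+	})
+}
+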
+func (s *KeysAndAttributes) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KeysAndAttributes"} + if s.AttributesToGet != nil && len(s.AttributesToGet) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttributesToGet", 1)) + } + if s.Keys == nil { + invalidParams.Add(request.NewErrParamRequired("Keys")) + } + if s.Keys != nil && len(s.Keys) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Keys", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributesToGet sets the AttributesToGet field's value. +func (s *KeysAndAttributes) SetAttributesToGet(v []*string) *KeysAndAttributes { + s.AttributesToGet = v + return s +} + +// SetConsistentRead sets the ConsistentRead field's value. +func (s *KeysAndAttributes) SetConsistentRead(v bool) *KeysAndAttributes { + s.ConsistentRead = &v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. +func (s *KeysAndAttributes) SetExpressionAttributeNames(v map[string]*string) *KeysAndAttributes { + s.ExpressionAttributeNames = v + return s +} + +// SetKeys sets the Keys field's value. +func (s *KeysAndAttributes) SetKeys(v []map[string]*AttributeValue) *KeysAndAttributes { + s.Keys = v + return s +} + +// SetProjectionExpression sets the ProjectionExpression field's value. +func (s *KeysAndAttributes) SetProjectionExpression(v string) *KeysAndAttributes { + s.ProjectionExpression = &v + return s +} + +type ListBackupsInput struct { + _ struct{} `type:"structure"` + + // LastEvaluatedBackupARN returned by the previous ListBackups call. + ExclusiveStartBackupArn *string `min:"37" type:"string"` + + // Maximum number of backups to return at once. + Limit *int64 `min:"1" type:"integer"` + + // The backups from the table specified by TableName are listed. + TableName *string `min:"3" type:"string"` + + // Only backups created after this time are listed. TimeRangeLowerBound is inclusive. + TimeRangeLowerBound *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Only backups created before this time are listed. TimeRangeUpperBound is + // exclusive. + TimeRangeUpperBound *time.Time `type:"timestamp" timestampFormat:"unix"` +} + +// String returns the string representation +func (s ListBackupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBackupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListBackupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListBackupsInput"} + if s.ExclusiveStartBackupArn != nil && len(*s.ExclusiveStartBackupArn) < 37 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartBackupArn", 37)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExclusiveStartBackupArn sets the ExclusiveStartBackupArn field's value. +func (s *ListBackupsInput) SetExclusiveStartBackupArn(v string) *ListBackupsInput { + s.ExclusiveStartBackupArn = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListBackupsInput) SetLimit(v int64) *ListBackupsInput { + s.Limit = &v + return s +} + +// SetTableName sets the TableName field's value. 
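+
+// Illustrative sketch (not part of the generated API): listing every backup of
+// a table by following LastEvaluatedBackupArn until no further pages remain.
+// The table name is a hypothetical placeholder.
+func exampleListAllBackups(svc *DynamoDB) ([]*BackupSummary, error) {
+	var summaries []*BackupSummary
+	input := &ListBackupsInput{TableName: aws.String("example-table")}
+	for {
+		out, err := svc.ListBackups(input)
+		if err != nil {
+			return nil, err
+		}
+		summaries = append(summaries, out.BackupSummaries...)
+		if out.LastEvaluatedBackupArn == nil {
+			return summaries, nil
+		}
+		// Resume the next page where the previous one left off.
+		input.ExclusiveStartBackupArn = out.LastEvaluatedBackupArn
+	}
+}
+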
+func (s *ListBackupsInput) SetTableName(v string) *ListBackupsInput { + s.TableName = &v + return s +} + +// SetTimeRangeLowerBound sets the TimeRangeLowerBound field's value. +func (s *ListBackupsInput) SetTimeRangeLowerBound(v time.Time) *ListBackupsInput { + s.TimeRangeLowerBound = &v + return s +} + +// SetTimeRangeUpperBound sets the TimeRangeUpperBound field's value. +func (s *ListBackupsInput) SetTimeRangeUpperBound(v time.Time) *ListBackupsInput { + s.TimeRangeUpperBound = &v + return s +} + +type ListBackupsOutput struct { + _ struct{} `type:"structure"` + + // List of BackupSummary objects. + BackupSummaries []*BackupSummary `type:"list"` + + // Last evaluated BackupARN. + LastEvaluatedBackupArn *string `min:"37" type:"string"` +} + +// String returns the string representation +func (s ListBackupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBackupsOutput) GoString() string { + return s.String() +} + +// SetBackupSummaries sets the BackupSummaries field's value. +func (s *ListBackupsOutput) SetBackupSummaries(v []*BackupSummary) *ListBackupsOutput { + s.BackupSummaries = v + return s +} + +// SetLastEvaluatedBackupArn sets the LastEvaluatedBackupArn field's value. +func (s *ListBackupsOutput) SetLastEvaluatedBackupArn(v string) *ListBackupsOutput { + s.LastEvaluatedBackupArn = &v + return s +} + +type ListGlobalTablesInput struct { + _ struct{} `type:"structure"` + + // The first global table name that this operation will evaluate. + ExclusiveStartGlobalTableName *string `min:"3" type:"string"` + + // The maximum number of table names to return. + Limit *int64 `min:"1" type:"integer"` + + // Lists the global tables in a specific region. + RegionName *string `type:"string"` +} + +// String returns the string representation +func (s ListGlobalTablesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGlobalTablesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListGlobalTablesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGlobalTablesInput"} + if s.ExclusiveStartGlobalTableName != nil && len(*s.ExclusiveStartGlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartGlobalTableName", 3)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExclusiveStartGlobalTableName sets the ExclusiveStartGlobalTableName field's value. +func (s *ListGlobalTablesInput) SetExclusiveStartGlobalTableName(v string) *ListGlobalTablesInput { + s.ExclusiveStartGlobalTableName = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListGlobalTablesInput) SetLimit(v int64) *ListGlobalTablesInput { + s.Limit = &v + return s +} + +// SetRegionName sets the RegionName field's value. +func (s *ListGlobalTablesInput) SetRegionName(v string) *ListGlobalTablesInput { + s.RegionName = &v + return s +} + +type ListGlobalTablesOutput struct { + _ struct{} `type:"structure"` + + // List of global table names. + GlobalTables []*GlobalTable `type:"list"` + + // Last evaluated global table name. 
+ LastEvaluatedGlobalTableName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s ListGlobalTablesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGlobalTablesOutput) GoString() string { + return s.String() +} + +// SetGlobalTables sets the GlobalTables field's value. +func (s *ListGlobalTablesOutput) SetGlobalTables(v []*GlobalTable) *ListGlobalTablesOutput { + s.GlobalTables = v + return s +} + +// SetLastEvaluatedGlobalTableName sets the LastEvaluatedGlobalTableName field's value. +func (s *ListGlobalTablesOutput) SetLastEvaluatedGlobalTableName(v string) *ListGlobalTablesOutput { + s.LastEvaluatedGlobalTableName = &v + return s +} + +// Represents the input of a ListTables operation. +type ListTablesInput struct { + _ struct{} `type:"structure"` + + // The first table name that this operation will evaluate. Use the value that + // was returned for LastEvaluatedTableName in a previous operation, so that + // you can obtain the next page of results. + ExclusiveStartTableName *string `min:"3" type:"string"` + + // A maximum number of table names to return. If this parameter is not specified, + // the limit is 100. + Limit *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s ListTablesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTablesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTablesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTablesInput"} + if s.ExclusiveStartTableName != nil && len(*s.ExclusiveStartTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartTableName", 3)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExclusiveStartTableName sets the ExclusiveStartTableName field's value. +func (s *ListTablesInput) SetExclusiveStartTableName(v string) *ListTablesInput { + s.ExclusiveStartTableName = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListTablesInput) SetLimit(v int64) *ListTablesInput { + s.Limit = &v + return s +} + +// Represents the output of a ListTables operation. +type ListTablesOutput struct { + _ struct{} `type:"structure"` + + // The name of the last table in the current page of results. Use this value + // as the ExclusiveStartTableName in a new request to obtain the next page of + // results, until all the table names are returned. + // + // If you do not receive a LastEvaluatedTableName value in the response, this + // means that there are no more table names to be retrieved. + LastEvaluatedTableName *string `min:"3" type:"string"` + + // The names of the tables associated with the current account at the current + // endpoint. The maximum size of this array is 100. + // + // If LastEvaluatedTableName also appears in the output, you can use this value + // as the ExclusiveStartTableName parameter in a subsequent ListTables request + // and obtain the next page of results. 
+ TableNames []*string `type:"list"` +} + +// String returns the string representation +func (s ListTablesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTablesOutput) GoString() string { + return s.String() +} + +// SetLastEvaluatedTableName sets the LastEvaluatedTableName field's value. +func (s *ListTablesOutput) SetLastEvaluatedTableName(v string) *ListTablesOutput { + s.LastEvaluatedTableName = &v + return s +} + +// SetTableNames sets the TableNames field's value. +func (s *ListTablesOutput) SetTableNames(v []*string) *ListTablesOutput { + s.TableNames = v + return s +} + +type ListTagsOfResourceInput struct { + _ struct{} `type:"structure"` + + // An optional string that, if supplied, must be copied from the output of a + // previous call to ListTagOfResource. When provided in this manner, this API + // fetches the next page of results. + NextToken *string `type:"string"` + + // The Amazon DynamoDB resource with tags to be listed. This value is an Amazon + // Resource Name (ARN). + // + // ResourceArn is a required field + ResourceArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsOfResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsOfResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsOfResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsOfResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.ResourceArn != nil && len(*s.ResourceArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTagsOfResourceInput) SetNextToken(v string) *ListTagsOfResourceInput { + s.NextToken = &v + return s +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *ListTagsOfResourceInput) SetResourceArn(v string) *ListTagsOfResourceInput { + s.ResourceArn = &v + return s +} + +type ListTagsOfResourceOutput struct { + _ struct{} `type:"structure"` + + // If this value is returned, there are additional results to be displayed. + // To retrieve them, call ListTagsOfResource again, with NextToken set to this + // value. + NextToken *string `type:"string"` + + // The tags currently associated with the Amazon DynamoDB resource. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s ListTagsOfResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsOfResourceOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTagsOfResourceOutput) SetNextToken(v string) *ListTagsOfResourceOutput { + s.NextToken = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *ListTagsOfResourceOutput) SetTags(v []*Tag) *ListTagsOfResourceOutput { + s.Tags = v + return s +} + +// Represents the properties of a local secondary index. +type LocalSecondaryIndex struct { + _ struct{} `type:"structure"` + + // The name of the local secondary index. The name must be unique among all + // other indexes on this table. 
+ // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The complete key schema for the local secondary index, consisting of one + // or more pairs of attribute names and key types: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + // + // KeySchema is a required field + KeySchema []*KeySchemaElement `min:"1" type:"list" required:"true"` + + // Represents attributes that are copied (projected) from the table into the + // local secondary index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. + // + // Projection is a required field + Projection *Projection `type:"structure" required:"true"` +} + +// String returns the string representation +func (s LocalSecondaryIndex) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LocalSecondaryIndex) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LocalSecondaryIndex) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LocalSecondaryIndex"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.KeySchema == nil { + invalidParams.Add(request.NewErrParamRequired("KeySchema")) + } + if s.KeySchema != nil && len(s.KeySchema) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KeySchema", 1)) + } + if s.Projection == nil { + invalidParams.Add(request.NewErrParamRequired("Projection")) + } + if s.KeySchema != nil { + for i, v := range s.KeySchema { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "KeySchema", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Projection != nil { + if err := s.Projection.Validate(); err != nil { + invalidParams.AddNested("Projection", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *LocalSecondaryIndex) SetIndexName(v string) *LocalSecondaryIndex { + s.IndexName = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *LocalSecondaryIndex) SetKeySchema(v []*KeySchemaElement) *LocalSecondaryIndex { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *LocalSecondaryIndex) SetProjection(v *Projection) *LocalSecondaryIndex { + s.Projection = v + return s +} + +// Represents the properties of a local secondary index. +type LocalSecondaryIndexDescription struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that uniquely identifies the index. + IndexArn *string `type:"string"` + + // Represents the name of the local secondary index. 
+ IndexName *string `min:"3" type:"string"` + + // The total size of the specified index, in bytes. DynamoDB updates this value + // approximately every six hours. Recent changes might not be reflected in this + // value. + IndexSizeBytes *int64 `type:"long"` + + // The number of items in the specified index. DynamoDB updates this value approximately + // every six hours. Recent changes might not be reflected in this value. + ItemCount *int64 `type:"long"` + + // The complete key schema for the local secondary index, consisting of one + // or more pairs of attribute names and key types: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + KeySchema []*KeySchemaElement `min:"1" type:"list"` + + // Represents attributes that are copied (projected) from the table into the + // global secondary index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. + Projection *Projection `type:"structure"` +} + +// String returns the string representation +func (s LocalSecondaryIndexDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LocalSecondaryIndexDescription) GoString() string { + return s.String() +} + +// SetIndexArn sets the IndexArn field's value. +func (s *LocalSecondaryIndexDescription) SetIndexArn(v string) *LocalSecondaryIndexDescription { + s.IndexArn = &v + return s +} + +// SetIndexName sets the IndexName field's value. +func (s *LocalSecondaryIndexDescription) SetIndexName(v string) *LocalSecondaryIndexDescription { + s.IndexName = &v + return s +} + +// SetIndexSizeBytes sets the IndexSizeBytes field's value. +func (s *LocalSecondaryIndexDescription) SetIndexSizeBytes(v int64) *LocalSecondaryIndexDescription { + s.IndexSizeBytes = &v + return s +} + +// SetItemCount sets the ItemCount field's value. +func (s *LocalSecondaryIndexDescription) SetItemCount(v int64) *LocalSecondaryIndexDescription { + s.ItemCount = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *LocalSecondaryIndexDescription) SetKeySchema(v []*KeySchemaElement) *LocalSecondaryIndexDescription { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *LocalSecondaryIndexDescription) SetProjection(v *Projection) *LocalSecondaryIndexDescription { + s.Projection = v + return s +} + +// Represents the properties of a local secondary index for the table when the +// backup was created. +type LocalSecondaryIndexInfo struct { + _ struct{} `type:"structure"` + + // Represents the name of the local secondary index. + IndexName *string `min:"3" type:"string"` + + // The complete key schema for a local secondary index, which consists of one + // or more pairs of attribute names and key types: + // + // * HASH - partition key + // + // * RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. 
The term
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
+ // to evenly distribute data items across partitions, based on their partition
+ // key values.
+ //
+ // The sort key of an item is also known as its range attribute. The term "range
+ // attribute" derives from the way DynamoDB stores items with the same partition
+ // key physically close together, in sorted order by the sort key value.
+ KeySchema []*KeySchemaElement `min:"1" type:"list"`
+
+ // Represents attributes that are copied (projected) from the table into the
+ // global secondary index. These are in addition to the primary key attributes
+ // and index key attributes, which are automatically projected.
+ Projection *Projection `type:"structure"`
+}
+
+// String returns the string representation
+func (s LocalSecondaryIndexInfo) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s LocalSecondaryIndexInfo) GoString() string {
+ return s.String()
+}
+
+// SetIndexName sets the IndexName field's value.
+func (s *LocalSecondaryIndexInfo) SetIndexName(v string) *LocalSecondaryIndexInfo {
+ s.IndexName = &v
+ return s
+}
+
+// SetKeySchema sets the KeySchema field's value.
+func (s *LocalSecondaryIndexInfo) SetKeySchema(v []*KeySchemaElement) *LocalSecondaryIndexInfo {
+ s.KeySchema = v
+ return s
+}
+
+// SetProjection sets the Projection field's value.
+func (s *LocalSecondaryIndexInfo) SetProjection(v *Projection) *LocalSecondaryIndexInfo {
+ s.Projection = v
+ return s
+}
+
+// The description of the point in time settings applied to the table.
+type PointInTimeRecoveryDescription struct {
+ _ struct{} `type:"structure"`
+
+ // Specifies the earliest point in time you can restore your table to. You can
+ // restore your table to any point in time during the last 35 days.
+ EarliestRestorableDateTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // LatestRestorableDateTime is typically 5 minutes before the current time.
+ LatestRestorableDateTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // The current state of point in time recovery:
+ //
+ // * ENABLING - Point in time recovery is being enabled.
+ //
+ // * ENABLED - Point in time recovery is enabled.
+ //
+ // * DISABLED - Point in time recovery is disabled.
+ PointInTimeRecoveryStatus *string `type:"string" enum:"PointInTimeRecoveryStatus"`
+}
+
+// String returns the string representation
+func (s PointInTimeRecoveryDescription) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s PointInTimeRecoveryDescription) GoString() string {
+ return s.String()
+}
+
+// SetEarliestRestorableDateTime sets the EarliestRestorableDateTime field's value.
+func (s *PointInTimeRecoveryDescription) SetEarliestRestorableDateTime(v time.Time) *PointInTimeRecoveryDescription {
+ s.EarliestRestorableDateTime = &v
+ return s
+}
+
+// SetLatestRestorableDateTime sets the LatestRestorableDateTime field's value.
+func (s *PointInTimeRecoveryDescription) SetLatestRestorableDateTime(v time.Time) *PointInTimeRecoveryDescription {
+ s.LatestRestorableDateTime = &v
+ return s
+}
+
+// SetPointInTimeRecoveryStatus sets the PointInTimeRecoveryStatus field's value.
+func (s *PointInTimeRecoveryDescription) SetPointInTimeRecoveryStatus(v string) *PointInTimeRecoveryDescription {
+ s.PointInTimeRecoveryStatus = &v
+ return s
+}
+
+// Represents the settings used to enable point in time recovery.
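+
+// Illustrative sketch (not part of the generated API): enabling point in time
+// recovery on a table with UpdateContinuousBackups. The table name is a
+// hypothetical placeholder.
+func exampleEnablePointInTimeRecovery(svc *DynamoDB) error {
+	_, err := svc.UpdateContinuousBackups(&UpdateContinuousBackupsInput{
+		TableName: aws.String("example-table"),
+		PointInTimeRecoverySpecification: &PointInTimeRecoverySpecification{
+			PointInTimeRecoveryEnabled: aws.Bool(true),
+		},
+	})
+	return err
+}
+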
+type PointInTimeRecoverySpecification struct { + _ struct{} `type:"structure"` + + // Indicates whether point in time recovery is enabled (true) or disabled (false) + // on the table. + // + // PointInTimeRecoveryEnabled is a required field + PointInTimeRecoveryEnabled *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s PointInTimeRecoverySpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PointInTimeRecoverySpecification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PointInTimeRecoverySpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PointInTimeRecoverySpecification"} + if s.PointInTimeRecoveryEnabled == nil { + invalidParams.Add(request.NewErrParamRequired("PointInTimeRecoveryEnabled")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPointInTimeRecoveryEnabled sets the PointInTimeRecoveryEnabled field's value. +func (s *PointInTimeRecoverySpecification) SetPointInTimeRecoveryEnabled(v bool) *PointInTimeRecoverySpecification { + s.PointInTimeRecoveryEnabled = &v + return s +} + +// Represents attributes that are copied (projected) from the table into an +// index. These are in addition to the primary key attributes and index key +// attributes, which are automatically projected. +type Projection struct { + _ struct{} `type:"structure"` + + // Represents the non-key attribute names which will be projected into the index. + // + // For local secondary indexes, the total count of NonKeyAttributes summed across + // all of the local secondary indexes, must not exceed 20. If you project the + // same attribute into two different indexes, this counts as two distinct attributes + // when determining the total. + NonKeyAttributes []*string `min:"1" type:"list"` + + // The set of attributes that are projected into the index: + // + // * KEYS_ONLY - Only the index and primary keys are projected into the index. + // + // * INCLUDE - Only the specified table attributes are projected into the + // index. The list of projected attributes are in NonKeyAttributes. + // + // * ALL - All of the table attributes are projected into the index. + ProjectionType *string `type:"string" enum:"ProjectionType"` +} + +// String returns the string representation +func (s Projection) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Projection) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Projection) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Projection"} + if s.NonKeyAttributes != nil && len(s.NonKeyAttributes) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NonKeyAttributes", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNonKeyAttributes sets the NonKeyAttributes field's value. +func (s *Projection) SetNonKeyAttributes(v []*string) *Projection { + s.NonKeyAttributes = v + return s +} + +// SetProjectionType sets the ProjectionType field's value. +func (s *Projection) SetProjectionType(v string) *Projection { + s.ProjectionType = &v + return s +} + +// Represents the provisioned throughput settings for a specified table or index. +// The settings can be modified using the UpdateTable operation. 
+// +// For current minimum and maximum provisioned throughput values, see Limits +// (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) +// in the Amazon DynamoDB Developer Guide. +type ProvisionedThroughput struct { + _ struct{} `type:"structure"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. For more information, see Specifying + // Read and Write Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + // + // ReadCapacityUnits is a required field + ReadCapacityUnits *int64 `min:"1" type:"long" required:"true"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. For more information, see Specifying Read and Write + // Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + // + // WriteCapacityUnits is a required field + WriteCapacityUnits *int64 `min:"1" type:"long" required:"true"` +} + +// String returns the string representation +func (s ProvisionedThroughput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisionedThroughput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ProvisionedThroughput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ProvisionedThroughput"} + if s.ReadCapacityUnits == nil { + invalidParams.Add(request.NewErrParamRequired("ReadCapacityUnits")) + } + if s.ReadCapacityUnits != nil && *s.ReadCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ReadCapacityUnits", 1)) + } + if s.WriteCapacityUnits == nil { + invalidParams.Add(request.NewErrParamRequired("WriteCapacityUnits")) + } + if s.WriteCapacityUnits != nil && *s.WriteCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("WriteCapacityUnits", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetReadCapacityUnits sets the ReadCapacityUnits field's value. +func (s *ProvisionedThroughput) SetReadCapacityUnits(v int64) *ProvisionedThroughput { + s.ReadCapacityUnits = &v + return s +} + +// SetWriteCapacityUnits sets the WriteCapacityUnits field's value. +func (s *ProvisionedThroughput) SetWriteCapacityUnits(v int64) *ProvisionedThroughput { + s.WriteCapacityUnits = &v + return s +} + +// Represents the provisioned throughput settings for the table, consisting +// of read and write capacity units, along with data about increases and decreases. +type ProvisionedThroughputDescription struct { + _ struct{} `type:"structure"` + + // The date and time of the last provisioned throughput decrease for this table. + LastDecreaseDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The date and time of the last provisioned throughput increase for this table. + LastIncreaseDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The number of provisioned throughput decreases for this table during this + // UTC calendar day. For current maximums on provisioned throughput decreases, + // see Limits (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) + // in the Amazon DynamoDB Developer Guide. 
+ NumberOfDecreasesToday *int64 `min:"1" type:"long"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. Eventually consistent reads require + // less effort than strongly consistent reads, so a setting of 50 ReadCapacityUnits + // per second provides 100 eventually consistent ReadCapacityUnits per second. + ReadCapacityUnits *int64 `min:"1" type:"long"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + WriteCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s ProvisionedThroughputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisionedThroughputDescription) GoString() string { + return s.String() +} + +// SetLastDecreaseDateTime sets the LastDecreaseDateTime field's value. +func (s *ProvisionedThroughputDescription) SetLastDecreaseDateTime(v time.Time) *ProvisionedThroughputDescription { + s.LastDecreaseDateTime = &v + return s +} + +// SetLastIncreaseDateTime sets the LastIncreaseDateTime field's value. +func (s *ProvisionedThroughputDescription) SetLastIncreaseDateTime(v time.Time) *ProvisionedThroughputDescription { + s.LastIncreaseDateTime = &v + return s +} + +// SetNumberOfDecreasesToday sets the NumberOfDecreasesToday field's value. +func (s *ProvisionedThroughputDescription) SetNumberOfDecreasesToday(v int64) *ProvisionedThroughputDescription { + s.NumberOfDecreasesToday = &v + return s +} + +// SetReadCapacityUnits sets the ReadCapacityUnits field's value. +func (s *ProvisionedThroughputDescription) SetReadCapacityUnits(v int64) *ProvisionedThroughputDescription { + s.ReadCapacityUnits = &v + return s +} + +// SetWriteCapacityUnits sets the WriteCapacityUnits field's value. +func (s *ProvisionedThroughputDescription) SetWriteCapacityUnits(v int64) *ProvisionedThroughputDescription { + s.WriteCapacityUnits = &v + return s +} + +// Represents the input of a PutItem operation. +type PutItemInput struct { + _ struct{} `type:"structure"` + + // A condition that must be satisfied in order for a conditional PutItem operation + // to succeed. + // + // An expression can contain any of the following: + // + // * Functions: attribute_exists | attribute_not_exists | attribute_type + // | contains | begins_with | size + // + // These function names are case-sensitive. + // + // * Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN + // + // * Logical operators: AND | OR | NOT + // + // For more information on condition expressions, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ConditionExpression *string `type:"string"` + + // This is a legacy parameter. Use ConditionExpression instead. For more information, + // see ConditionalOperator (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ConditionalOperator.html) + // in the Amazon DynamoDB Developer Guide. + ConditionalOperator *string `type:"string" enum:"ConditionalOperator"` + + // This is a legacy parameter. Use ConditionExpression instead. For more information, + // see Expected (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.Expected.html) + // in the Amazon DynamoDB Developer Guide. 
+ Expected map[string]*ExpectedAttributeValue `type:"map"` + + // One or more substitution tokens for attribute names in an expression. The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could specify + // the following for ExpressionAttributeNames: + // + // * {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // * #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. + // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeNames map[string]*string `type:"map"` + + // One or more values that can be substituted in an expression. + // + // Use the : (colon) character in an expression to dereference an attribute + // value. For example, suppose that you wanted to check whether the value of + // the ProductStatus attribute was one of the following: + // + // Available | Backordered | Discontinued + // + // You would first need to specify ExpressionAttributeValues as follows: + // + // { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} + // } + // + // You could then use these values in an expression, such as this: + // + // ProductStatus IN (:avail, :back, :disc) + // + // For more information on expression attribute values, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeValues map[string]*AttributeValue `type:"map"` + + // A map of attribute name/value pairs, one for each attribute. Only the primary + // key attributes are required; you can optionally provide other attribute name-value + // pairs for the item. + // + // You must provide all of the attributes for the primary key. For example, + // with a simple primary key, you only need to provide a value for the partition + // key. For a composite primary key, you must provide both values for both the + // partition key and the sort key. + // + // If you specify any attributes that are part of an index key, then the data + // types for those attributes must match those of the schema in the table's + // attribute definition. + // + // For more information about primary keys, see Primary Key (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html#DataModelPrimaryKey) + // in the Amazon DynamoDB Developer Guide. 
+ // + // Each element in the Item map is an AttributeValue object. + // + // Item is a required field + Item map[string]*AttributeValue `type:"map" required:"true"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` + + // Determines whether item collection metrics are returned. If set to SIZE, + // the response includes statistics about item collections, if any, that were + // modified during the operation are returned in the response. If set to NONE + // (the default), no statistics are returned. + ReturnItemCollectionMetrics *string `type:"string" enum:"ReturnItemCollectionMetrics"` + + // Use ReturnValues if you want to get the item attributes as they appeared + // before they were updated with the PutItem request. For PutItem, the valid + // values are: + // + // * NONE - If ReturnValues is not specified, or if its value is NONE, then + // nothing is returned. (This setting is the default for ReturnValues.) + // + // * ALL_OLD - If PutItem overwrote an attribute name-value pair, then the + // content of the old item is returned. + // + // The ReturnValues parameter is used by several DynamoDB operations; however, + // PutItem does not recognize any values other than NONE or ALL_OLD. + ReturnValues *string `type:"string" enum:"ReturnValue"` + + // The name of the table to contain the item. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutItemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutItemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutItemInput"} + if s.Item == nil { + invalidParams.Add(request.NewErrParamRequired("Item")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConditionExpression sets the ConditionExpression field's value. +func (s *PutItemInput) SetConditionExpression(v string) *PutItemInput { + s.ConditionExpression = &v + return s +} + +// SetConditionalOperator sets the ConditionalOperator field's value. +func (s *PutItemInput) SetConditionalOperator(v string) *PutItemInput { + s.ConditionalOperator = &v + return s +} + +// SetExpected sets the Expected field's value. 
+func (s *PutItemInput) SetExpected(v map[string]*ExpectedAttributeValue) *PutItemInput { + s.Expected = v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. +func (s *PutItemInput) SetExpressionAttributeNames(v map[string]*string) *PutItemInput { + s.ExpressionAttributeNames = v + return s +} + +// SetExpressionAttributeValues sets the ExpressionAttributeValues field's value. +func (s *PutItemInput) SetExpressionAttributeValues(v map[string]*AttributeValue) *PutItemInput { + s.ExpressionAttributeValues = v + return s +} + +// SetItem sets the Item field's value. +func (s *PutItemInput) SetItem(v map[string]*AttributeValue) *PutItemInput { + s.Item = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *PutItemInput) SetReturnConsumedCapacity(v string) *PutItemInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetReturnItemCollectionMetrics sets the ReturnItemCollectionMetrics field's value. +func (s *PutItemInput) SetReturnItemCollectionMetrics(v string) *PutItemInput { + s.ReturnItemCollectionMetrics = &v + return s +} + +// SetReturnValues sets the ReturnValues field's value. +func (s *PutItemInput) SetReturnValues(v string) *PutItemInput { + s.ReturnValues = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *PutItemInput) SetTableName(v string) *PutItemInput { + s.TableName = &v + return s +} + +// Represents the output of a PutItem operation. +type PutItemOutput struct { + _ struct{} `type:"structure"` + + // The attribute values as they appeared before the PutItem operation, but only + // if ReturnValues is specified as ALL_OLD in the request. Each element consists + // of an attribute name and an attribute value. + Attributes map[string]*AttributeValue `type:"map"` + + // The capacity units consumed by the PutItem operation. The data returned includes + // the total provisioned throughput consumed, along with statistics for the + // table and any indexes involved in the operation. ConsumedCapacity is only + // returned if the ReturnConsumedCapacity parameter was specified. For more + // information, see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // Information about item collections, if any, that were affected by the PutItem + // operation. ItemCollectionMetrics is only returned if the ReturnItemCollectionMetrics + // parameter was specified. If the table does not have any local secondary indexes, + // this information is not returned in the response. + // + // Each ItemCollectionMetrics element consists of: + // + // * ItemCollectionKey - The partition key value of the item collection. + // This is the same as the partition key value of the item itself. + // + // * SizeEstimateRangeGB - An estimate of item collection size, in gigabytes. + // This value is a two-element array containing a lower bound and an upper + // bound for the estimate. The estimate includes the size of all the items + // in the table, plus the size of all attributes projected into all of the + // local secondary indexes on that table. Use this estimate to measure whether + // a local secondary index is approaching its size limit. + // + // The estimate is subject to change over time; therefore, do not rely on the + // precision or accuracy of the estimate. 
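// Illustrative sketch (editor's annotation): the generated Set* methods return
// the receiver, so an input can be chained, and Attributes on the output is only
// populated when ReturnValues was ALL_OLD. The client, table name, and item
// passed in here are hypothetical.
func putAndReturnOld(svc *dynamodb.DynamoDB, item map[string]*dynamodb.AttributeValue) (map[string]*dynamodb.AttributeValue, error) {
    input := (&dynamodb.PutItemInput{}).
        SetTableName("Music").
        SetItem(item).
        SetReturnValues(dynamodb.ReturnValueAllOld)
    out, err := svc.PutItem(input)
    if err != nil {
        return nil, err
    }
    return out.Attributes, nil // previous item, or nil if nothing was overwritten
}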
+ ItemCollectionMetrics *ItemCollectionMetrics `type:"structure"` +} + +// String returns the string representation +func (s PutItemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutItemOutput) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *PutItemOutput) SetAttributes(v map[string]*AttributeValue) *PutItemOutput { + s.Attributes = v + return s +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *PutItemOutput) SetConsumedCapacity(v *ConsumedCapacity) *PutItemOutput { + s.ConsumedCapacity = v + return s +} + +// SetItemCollectionMetrics sets the ItemCollectionMetrics field's value. +func (s *PutItemOutput) SetItemCollectionMetrics(v *ItemCollectionMetrics) *PutItemOutput { + s.ItemCollectionMetrics = v + return s +} + +// Represents a request to perform a PutItem operation on an item. +type PutRequest struct { + _ struct{} `type:"structure"` + + // A map of attribute name to attribute values, representing the primary key + // of an item to be processed by PutItem. All of the table's primary key attributes + // must be specified, and their data types must match those of the table's key + // schema. If any attributes are present in the item which are part of an index + // key schema for the table, their types must match the index key schema. + // + // Item is a required field + Item map[string]*AttributeValue `type:"map" required:"true"` +} + +// String returns the string representation +func (s PutRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRequest) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *PutRequest) SetItem(v map[string]*AttributeValue) *PutRequest { + s.Item = v + return s +} + +// Represents the input of a Query operation. +type QueryInput struct { + _ struct{} `type:"structure"` + + // This is a legacy parameter. Use ProjectionExpression instead. For more information, + // see AttributesToGet (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.AttributesToGet.html) + // in the Amazon DynamoDB Developer Guide. + AttributesToGet []*string `min:"1" type:"list"` + + // This is a legacy parameter. Use FilterExpression instead. For more information, + // see ConditionalOperator (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ConditionalOperator.html) + // in the Amazon DynamoDB Developer Guide. + ConditionalOperator *string `type:"string" enum:"ConditionalOperator"` + + // Determines the read consistency model: If set to true, then the operation + // uses strongly consistent reads; otherwise, the operation uses eventually + // consistent reads. + // + // Strongly consistent reads are not supported on global secondary indexes. + // If you query a global secondary index with ConsistentRead set to true, you + // will receive a ValidationException. + ConsistentRead *bool `type:"boolean"` + + // The primary key of the first item that this operation will evaluate. Use + // the value that was returned for LastEvaluatedKey in the previous operation. + // + // The data type for ExclusiveStartKey must be String, Number or Binary. No + // set data types are allowed. + ExclusiveStartKey map[string]*AttributeValue `type:"map"` + + // One or more substitution tokens for attribute names in an expression. 
The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could specify + // the following for ExpressionAttributeNames: + // + // * {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // * #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. + // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeNames map[string]*string `type:"map"` + + // One or more values that can be substituted in an expression. + // + // Use the : (colon) character in an expression to dereference an attribute + // value. For example, suppose that you wanted to check whether the value of + // the ProductStatus attribute was one of the following: + // + // Available | Backordered | Discontinued + // + // You would first need to specify ExpressionAttributeValues as follows: + // + // { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} + // } + // + // You could then use these values in an expression, such as this: + // + // ProductStatus IN (:avail, :back, :disc) + // + // For more information on expression attribute values, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeValues map[string]*AttributeValue `type:"map"` + + // A string that contains conditions that DynamoDB applies after the Query operation, + // but before the data is returned to you. Items that do not satisfy the FilterExpression + // criteria are not returned. + // + // A FilterExpression does not allow key attributes. You cannot define a filter + // expression based on a partition key or a sort key. + // + // A FilterExpression is applied after the items have already been read; the + // process of filtering does not consume any additional read capacity units. + // + // For more information, see Filter Expressions (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#FilteringResults) + // in the Amazon DynamoDB Developer Guide. + FilterExpression *string `type:"string"` + + // The name of an index to query. This index can be any local secondary index + // or global secondary index on the table. Note that if you use the IndexName + // parameter, you must also provide TableName. 
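// Illustrative sketch (editor's annotation): using ExpressionAttributeNames to
// stand in for the reserved word "Size" and ExpressionAttributeValues for the
// runtime values, as described above. The "Thread", "ForumName", and "Size"
// names and the "svc" client are hypothetical.
func queryLargeThreads(svc *dynamodb.DynamoDB) (*dynamodb.QueryOutput, error) {
    return svc.Query(&dynamodb.QueryInput{
        TableName:              aws.String("Thread"),
        KeyConditionExpression: aws.String("ForumName = :forum"),
        // #S is dereferenced through ExpressionAttributeNames; :forum and
        // :minSize are supplied through ExpressionAttributeValues.
        FilterExpression:         aws.String("#S >= :minSize"),
        ExpressionAttributeNames: map[string]*string{"#S": aws.String("Size")},
        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
            ":forum":   {S: aws.String("Amazon DynamoDB")},
            ":minSize": {N: aws.String("100")},
        },
    })
}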
+ IndexName *string `min:"3" type:"string"` + + // The condition that specifies the key value(s) for items to be retrieved by + // the Query action. + // + // The condition must perform an equality test on a single partition key value. + // + // The condition can optionally perform one of several comparison tests on a + // single sort key value. This allows Query to retrieve one item with a given + // partition key value and sort key value, or several items that have the same + // partition key value but different sort key values. + // + // The partition key equality test is required, and must be specified in the + // following format: + // + // partitionKeyName=:partitionkeyval + // + // If you also want to provide a condition for the sort key, it must be combined + // using AND with the condition for the sort key. Following is an example, using + // the = comparison operator for the sort key: + // + // partitionKeyName=:partitionkeyvalANDsortKeyName=:sortkeyval + // + // Valid comparisons for the sort key condition are as follows: + // + // * sortKeyName=:sortkeyval - true if the sort key value is equal to :sortkeyval. + // + // * sortKeyName<:sortkeyval - true if the sort key value is less than :sortkeyval. + // + // * sortKeyName<=:sortkeyval - true if the sort key value is less than or + // equal to :sortkeyval. + // + // * sortKeyName>:sortkeyval - true if the sort key value is greater than + // :sortkeyval. + // + // * sortKeyName>= :sortkeyval - true if the sort key value is greater than + // or equal to :sortkeyval. + // + // * sortKeyNameBETWEEN:sortkeyval1AND:sortkeyval2 - true if the sort key + // value is greater than or equal to :sortkeyval1, and less than or equal + // to :sortkeyval2. + // + // * begins_with (sortKeyName, :sortkeyval) - true if the sort key value + // begins with a particular operand. (You cannot use this function with a + // sort key that is of type Number.) Note that the function name begins_with + // is case-sensitive. + // + // Use the ExpressionAttributeValues parameter to replace tokens such as :partitionval + // and :sortval with actual values at runtime. + // + // You can optionally use the ExpressionAttributeNames parameter to replace + // the names of the partition key and sort key with placeholder tokens. This + // option might be necessary if an attribute name conflicts with a DynamoDB + // reserved word. For example, the following KeyConditionExpression parameter + // causes an error because Size is a reserved word: + // + // * Size = :myval + // + // To work around this, define a placeholder (such a #S) to represent the attribute + // name Size. KeyConditionExpression then is as follows: + // + // * #S = :myval + // + // For a list of reserved words, see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide. + // + // For more information on ExpressionAttributeNames and ExpressionAttributeValues, + // see Using Placeholders for Attribute Names and Values (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ExpressionPlaceholders.html) + // in the Amazon DynamoDB Developer Guide. + KeyConditionExpression *string `type:"string"` + + // This is a legacy parameter. Use KeyConditionExpression instead. For more + // information, see KeyConditions (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.KeyConditions.html) + // in the Amazon DynamoDB Developer Guide. 
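// Illustrative sketch (editor's annotation): the required partition-key equality
// combined with a sort-key BETWEEN condition, in the format described above.
// Table name, key names, and the "svc" client are hypothetical.
func querySongRange(svc *dynamodb.DynamoDB) (*dynamodb.QueryOutput, error) {
    return svc.Query(&dynamodb.QueryInput{
        TableName:              aws.String("Music"),
        KeyConditionExpression: aws.String("Artist = :a AND SongTitle BETWEEN :t1 AND :t2"),
        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
            ":a":  {S: aws.String("No One You Know")},
            ":t1": {S: aws.String("A")},
            ":t2": {S: aws.String("M")},
        },
    })
}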
+ KeyConditions map[string]*Condition `type:"map"` + + // The maximum number of items to evaluate (not necessarily the number of matching + // items). If DynamoDB processes the number of items up to the limit while processing + // the results, it stops the operation and returns the matching values up to + // that point, and a key in LastEvaluatedKey to apply in a subsequent operation, + // so that you can pick up where you left off. Also, if the processed data set + // size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation + // and returns the matching values up to the limit, and a key in LastEvaluatedKey + // to apply in a subsequent operation to continue the operation. For more information, + // see Query and Scan (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html) + // in the Amazon DynamoDB Developer Guide. + Limit *int64 `min:"1" type:"integer"` + + // A string that identifies one or more attributes to retrieve from the table. + // These attributes can include scalars, sets, or elements of a JSON document. + // The attributes in the expression must be separated by commas. + // + // If no attribute names are specified, then all attributes will be returned. + // If any of the requested attributes are not found, they will not appear in + // the result. + // + // For more information, see Accessing Item Attributes (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ProjectionExpression *string `type:"string"` + + // This is a legacy parameter. Use FilterExpression instead. For more information, + // see QueryFilter (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.QueryFilter.html) + // in the Amazon DynamoDB Developer Guide. + QueryFilter map[string]*Condition `type:"map"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` + + // Specifies the order for index traversal: If true (default), the traversal + // is performed in ascending order; if false, the traversal is performed in + // descending order. + // + // Items with the same partition key value are stored in sorted order by sort + // key. If the sort key data type is Number, the results are stored in numeric + // order. For type String, the results are stored in order of UTF-8 bytes. For + // type Binary, DynamoDB treats each byte of the binary data as unsigned. + // + // If ScanIndexForward is true, DynamoDB returns the results in the order in + // which they are stored (by sort key value). This is the default behavior. + // If ScanIndexForward is false, DynamoDB reads the results in reverse order + // by sort key value, and then returns the results to the client. 
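// Illustrative sketch (editor's annotation): when Limit or the 1 MB response cap
// stops a Query early, LastEvaluatedKey has to be resubmitted as ExclusiveStartKey;
// the generated QueryPages helper runs that loop for you. The "input" argument is
// any QueryInput, such as the hypothetical ones sketched above.
func queryAll(svc *dynamodb.DynamoDB, input *dynamodb.QueryInput) ([]map[string]*dynamodb.AttributeValue, error) {
    var items []map[string]*dynamodb.AttributeValue
    err := svc.QueryPages(input, func(page *dynamodb.QueryOutput, lastPage bool) bool {
        items = append(items, page.Items...)
        return !lastPage // stop once LastEvaluatedKey is empty
    })
    return items, err
}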
+ ScanIndexForward *bool `type:"boolean"` + + // The attributes to be returned in the result. You can retrieve all item attributes, + // specific item attributes, the count of matching items, or in the case of + // an index, some or all of the attributes projected into the index. + // + // * ALL_ATTRIBUTES - Returns all of the item attributes from the specified + // table or index. If you query a local secondary index, then for each matching + // item in the index DynamoDB will fetch the entire item from the parent + // table. If the index is configured to project all item attributes, then + // all of the data can be obtained from the local secondary index, and no + // fetching is required. + // + // * ALL_PROJECTED_ATTRIBUTES - Allowed only when querying an index. Retrieves + // all attributes that have been projected into the index. If the index is + // configured to project all attributes, this return value is equivalent + // to specifying ALL_ATTRIBUTES. + // + // * COUNT - Returns the number of matching items, rather than the matching + // items themselves. + // + // * SPECIFIC_ATTRIBUTES - Returns only the attributes listed in AttributesToGet. + // This return value is equivalent to specifying AttributesToGet without + // specifying any value for Select. + // + // If you query or scan a local secondary index and request only attributes + // that are projected into that index, the operation will read only the index + // and not the table. If any of the requested attributes are not projected + // into the local secondary index, DynamoDB will fetch each of these attributes + // from the parent table. This extra fetching incurs additional throughput + // cost and latency. + // + // If you query or scan a global secondary index, you can only request attributes + // that are projected into the index. Global secondary index queries cannot + // fetch attributes from the parent table. + // + // If neither Select nor AttributesToGet are specified, DynamoDB defaults to + // ALL_ATTRIBUTES when accessing a table, and ALL_PROJECTED_ATTRIBUTES when + // accessing an index. You cannot use both Select and AttributesToGet together + // in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES. + // (This usage is equivalent to specifying AttributesToGet without any value + // for Select.) + // + // If you use the ProjectionExpression parameter, then the value for Select + // can only be SPECIFIC_ATTRIBUTES. Any other value for Select will return an + // error. + Select *string `type:"string" enum:"Select"` + + // The name of the table containing the requested items. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s QueryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *QueryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueryInput"} + if s.AttributesToGet != nil && len(s.AttributesToGet) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttributesToGet", 1)) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.KeyConditions != nil { + for i, v := range s.KeyConditions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "KeyConditions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.QueryFilter != nil { + for i, v := range s.QueryFilter { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "QueryFilter", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributesToGet sets the AttributesToGet field's value. +func (s *QueryInput) SetAttributesToGet(v []*string) *QueryInput { + s.AttributesToGet = v + return s +} + +// SetConditionalOperator sets the ConditionalOperator field's value. +func (s *QueryInput) SetConditionalOperator(v string) *QueryInput { + s.ConditionalOperator = &v + return s +} + +// SetConsistentRead sets the ConsistentRead field's value. +func (s *QueryInput) SetConsistentRead(v bool) *QueryInput { + s.ConsistentRead = &v + return s +} + +// SetExclusiveStartKey sets the ExclusiveStartKey field's value. +func (s *QueryInput) SetExclusiveStartKey(v map[string]*AttributeValue) *QueryInput { + s.ExclusiveStartKey = v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. +func (s *QueryInput) SetExpressionAttributeNames(v map[string]*string) *QueryInput { + s.ExpressionAttributeNames = v + return s +} + +// SetExpressionAttributeValues sets the ExpressionAttributeValues field's value. +func (s *QueryInput) SetExpressionAttributeValues(v map[string]*AttributeValue) *QueryInput { + s.ExpressionAttributeValues = v + return s +} + +// SetFilterExpression sets the FilterExpression field's value. +func (s *QueryInput) SetFilterExpression(v string) *QueryInput { + s.FilterExpression = &v + return s +} + +// SetIndexName sets the IndexName field's value. +func (s *QueryInput) SetIndexName(v string) *QueryInput { + s.IndexName = &v + return s +} + +// SetKeyConditionExpression sets the KeyConditionExpression field's value. +func (s *QueryInput) SetKeyConditionExpression(v string) *QueryInput { + s.KeyConditionExpression = &v + return s +} + +// SetKeyConditions sets the KeyConditions field's value. +func (s *QueryInput) SetKeyConditions(v map[string]*Condition) *QueryInput { + s.KeyConditions = v + return s +} + +// SetLimit sets the Limit field's value. +func (s *QueryInput) SetLimit(v int64) *QueryInput { + s.Limit = &v + return s +} + +// SetProjectionExpression sets the ProjectionExpression field's value. +func (s *QueryInput) SetProjectionExpression(v string) *QueryInput { + s.ProjectionExpression = &v + return s +} + +// SetQueryFilter sets the QueryFilter field's value. 
+func (s *QueryInput) SetQueryFilter(v map[string]*Condition) *QueryInput { + s.QueryFilter = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *QueryInput) SetReturnConsumedCapacity(v string) *QueryInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetScanIndexForward sets the ScanIndexForward field's value. +func (s *QueryInput) SetScanIndexForward(v bool) *QueryInput { + s.ScanIndexForward = &v + return s +} + +// SetSelect sets the Select field's value. +func (s *QueryInput) SetSelect(v string) *QueryInput { + s.Select = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *QueryInput) SetTableName(v string) *QueryInput { + s.TableName = &v + return s +} + +// Represents the output of a Query operation. +type QueryOutput struct { + _ struct{} `type:"structure"` + + // The capacity units consumed by the Query operation. The data returned includes + // the total provisioned throughput consumed, along with statistics for the + // table and any indexes involved in the operation. ConsumedCapacity is only + // returned if the ReturnConsumedCapacity parameter was specified For more information, + // see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // The number of items in the response. + // + // If you used a QueryFilter in the request, then Count is the number of items + // returned after the filter was applied, and ScannedCount is the number of + // matching items before the filter was applied. + // + // If you did not use a filter in the request, then Count and ScannedCount are + // the same. + Count *int64 `type:"integer"` + + // An array of item attributes that match the query criteria. Each element in + // this array consists of an attribute name and the value for that attribute. + Items []map[string]*AttributeValue `type:"list"` + + // The primary key of the item where the operation stopped, inclusive of the + // previous result set. Use this value to start a new operation, excluding this + // value in the new request. + // + // If LastEvaluatedKey is empty, then the "last page" of results has been processed + // and there is no more data to be retrieved. + // + // If LastEvaluatedKey is not empty, it does not necessarily mean that there + // is more data in the result set. The only way to know when you have reached + // the end of the result set is when LastEvaluatedKey is empty. + LastEvaluatedKey map[string]*AttributeValue `type:"map"` + + // The number of items evaluated, before any QueryFilter is applied. A high + // ScannedCount value with few, or no, Count results indicates an inefficient + // Query operation. For more information, see Count and ScannedCount (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#Count) + // in the Amazon DynamoDB Developer Guide. + // + // If you did not use a filter in the request, then ScannedCount is the same + // as Count. + ScannedCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s QueryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueryOutput) GoString() string { + return s.String() +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. 
+func (s *QueryOutput) SetConsumedCapacity(v *ConsumedCapacity) *QueryOutput { + s.ConsumedCapacity = v + return s +} + +// SetCount sets the Count field's value. +func (s *QueryOutput) SetCount(v int64) *QueryOutput { + s.Count = &v + return s +} + +// SetItems sets the Items field's value. +func (s *QueryOutput) SetItems(v []map[string]*AttributeValue) *QueryOutput { + s.Items = v + return s +} + +// SetLastEvaluatedKey sets the LastEvaluatedKey field's value. +func (s *QueryOutput) SetLastEvaluatedKey(v map[string]*AttributeValue) *QueryOutput { + s.LastEvaluatedKey = v + return s +} + +// SetScannedCount sets the ScannedCount field's value. +func (s *QueryOutput) SetScannedCount(v int64) *QueryOutput { + s.ScannedCount = &v + return s +} + +// Represents the properties of a replica. +type Replica struct { + _ struct{} `type:"structure"` + + // The region where the replica needs to be created. + RegionName *string `type:"string"` +} + +// String returns the string representation +func (s Replica) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Replica) GoString() string { + return s.String() +} + +// SetRegionName sets the RegionName field's value. +func (s *Replica) SetRegionName(v string) *Replica { + s.RegionName = &v + return s +} + +// Contains the details of the replica. +type ReplicaDescription struct { + _ struct{} `type:"structure"` + + // The name of the region. + RegionName *string `type:"string"` +} + +// String returns the string representation +func (s ReplicaDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaDescription) GoString() string { + return s.String() +} + +// SetRegionName sets the RegionName field's value. +func (s *ReplicaDescription) SetRegionName(v string) *ReplicaDescription { + s.RegionName = &v + return s +} + +// Represents the properties of a global secondary index. +type ReplicaGlobalSecondaryIndexSettingsDescription struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The current status of the global secondary index: + // + // * CREATING - The global secondary index is being created. + // + // * UPDATING - The global secondary index is being updated. + // + // * DELETING - The global secondary index is being deleted. + // + // * ACTIVE - The global secondary index is ready for use. + IndexStatus *string `type:"string" enum:"IndexStatus"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. + ProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + ProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsDescription) GoString() string { + return s.String() +} + +// SetIndexName sets the IndexName field's value. 
+func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetIndexName(v string) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.IndexName = &v + return s +} + +// SetIndexStatus sets the IndexStatus field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetIndexStatus(v string) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.IndexStatus = &v + return s +} + +// SetProvisionedReadCapacityUnits sets the ProvisionedReadCapacityUnits field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetProvisionedReadCapacityUnits(v int64) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.ProvisionedReadCapacityUnits = &v + return s +} + +// SetProvisionedWriteCapacityUnits sets the ProvisionedWriteCapacityUnits field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetProvisionedWriteCapacityUnits(v int64) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.ProvisionedWriteCapacityUnits = &v + return s +} + +// Represents the settings of a global secondary index for a global table that +// will be modified. +type ReplicaGlobalSecondaryIndexSettingsUpdate struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. + ProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicaGlobalSecondaryIndexSettingsUpdate"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.ProvisionedReadCapacityUnits != nil && *s.ProvisionedReadCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ProvisionedReadCapacityUnits", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) SetIndexName(v string) *ReplicaGlobalSecondaryIndexSettingsUpdate { + s.IndexName = &v + return s +} + +// SetProvisionedReadCapacityUnits sets the ProvisionedReadCapacityUnits field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) SetProvisionedReadCapacityUnits(v int64) *ReplicaGlobalSecondaryIndexSettingsUpdate { + s.ProvisionedReadCapacityUnits = &v + return s +} + +// Represents the properties of a replica. +type ReplicaSettingsDescription struct { + _ struct{} `type:"structure"` + + // The region name of the replica. + // + // RegionName is a required field + RegionName *string `type:"string" required:"true"` + + // Replica global secondary index settings for the global table. 
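// Illustrative sketch (editor's annotation): the generated Validate methods
// enforce the min constraints from the struct tags before a request is sent.
// The index name and capacity value here are hypothetical; assumes "fmt" and
// the aws helper package are imported.
func checkIndexSettingsUpdate() {
    u := &dynamodb.ReplicaGlobalSecondaryIndexSettingsUpdate{
        IndexName:                    aws.String("ix"), // shorter than min:"3"
        ProvisionedReadCapacityUnits: aws.Int64(0),     // below min:"1"
    }
    if err := u.Validate(); err != nil {
        fmt.Println(err) // both violations are reported as invalid parameters
    }
}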
+ ReplicaGlobalSecondaryIndexSettings []*ReplicaGlobalSecondaryIndexSettingsDescription `type:"list"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. For more information, see Specifying + // Read and Write Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + ReplicaProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. For more information, see Specifying Read and Write + // Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + ReplicaProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` + + // The current state of the region: + // + // * CREATING - The region is being created. + // + // * UPDATING - The region is being updated. + // + // * DELETING - The region is being deleted. + // + // * ACTIVE - The region is ready for use. + ReplicaStatus *string `type:"string" enum:"ReplicaStatus"` +} + +// String returns the string representation +func (s ReplicaSettingsDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaSettingsDescription) GoString() string { + return s.String() +} + +// SetRegionName sets the RegionName field's value. +func (s *ReplicaSettingsDescription) SetRegionName(v string) *ReplicaSettingsDescription { + s.RegionName = &v + return s +} + +// SetReplicaGlobalSecondaryIndexSettings sets the ReplicaGlobalSecondaryIndexSettings field's value. +func (s *ReplicaSettingsDescription) SetReplicaGlobalSecondaryIndexSettings(v []*ReplicaGlobalSecondaryIndexSettingsDescription) *ReplicaSettingsDescription { + s.ReplicaGlobalSecondaryIndexSettings = v + return s +} + +// SetReplicaProvisionedReadCapacityUnits sets the ReplicaProvisionedReadCapacityUnits field's value. +func (s *ReplicaSettingsDescription) SetReplicaProvisionedReadCapacityUnits(v int64) *ReplicaSettingsDescription { + s.ReplicaProvisionedReadCapacityUnits = &v + return s +} + +// SetReplicaProvisionedWriteCapacityUnits sets the ReplicaProvisionedWriteCapacityUnits field's value. +func (s *ReplicaSettingsDescription) SetReplicaProvisionedWriteCapacityUnits(v int64) *ReplicaSettingsDescription { + s.ReplicaProvisionedWriteCapacityUnits = &v + return s +} + +// SetReplicaStatus sets the ReplicaStatus field's value. +func (s *ReplicaSettingsDescription) SetReplicaStatus(v string) *ReplicaSettingsDescription { + s.ReplicaStatus = &v + return s +} + +// Represents the settings for a global table in a region that will be modified. +type ReplicaSettingsUpdate struct { + _ struct{} `type:"structure"` + + // The region of the replica to be added. + // + // RegionName is a required field + RegionName *string `type:"string" required:"true"` + + // Represents the settings of a global secondary index for a global table that + // will be modified. + ReplicaGlobalSecondaryIndexSettingsUpdate []*ReplicaGlobalSecondaryIndexSettingsUpdate `min:"1" type:"list"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. 
For more information, see Specifying + // Read and Write Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + ReplicaProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s ReplicaSettingsUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaSettingsUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReplicaSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicaSettingsUpdate"} + if s.RegionName == nil { + invalidParams.Add(request.NewErrParamRequired("RegionName")) + } + if s.ReplicaGlobalSecondaryIndexSettingsUpdate != nil && len(s.ReplicaGlobalSecondaryIndexSettingsUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReplicaGlobalSecondaryIndexSettingsUpdate", 1)) + } + if s.ReplicaProvisionedReadCapacityUnits != nil && *s.ReplicaProvisionedReadCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ReplicaProvisionedReadCapacityUnits", 1)) + } + if s.ReplicaGlobalSecondaryIndexSettingsUpdate != nil { + for i, v := range s.ReplicaGlobalSecondaryIndexSettingsUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaGlobalSecondaryIndexSettingsUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRegionName sets the RegionName field's value. +func (s *ReplicaSettingsUpdate) SetRegionName(v string) *ReplicaSettingsUpdate { + s.RegionName = &v + return s +} + +// SetReplicaGlobalSecondaryIndexSettingsUpdate sets the ReplicaGlobalSecondaryIndexSettingsUpdate field's value. +func (s *ReplicaSettingsUpdate) SetReplicaGlobalSecondaryIndexSettingsUpdate(v []*ReplicaGlobalSecondaryIndexSettingsUpdate) *ReplicaSettingsUpdate { + s.ReplicaGlobalSecondaryIndexSettingsUpdate = v + return s +} + +// SetReplicaProvisionedReadCapacityUnits sets the ReplicaProvisionedReadCapacityUnits field's value. +func (s *ReplicaSettingsUpdate) SetReplicaProvisionedReadCapacityUnits(v int64) *ReplicaSettingsUpdate { + s.ReplicaProvisionedReadCapacityUnits = &v + return s +} + +// Represents one of the following: +// +// * A new replica to be added to an existing global table. +// +// * New parameters for an existing replica. +// +// * An existing replica to be removed from an existing global table. +type ReplicaUpdate struct { + _ struct{} `type:"structure"` + + // The parameters required for creating a replica on an existing global table. + Create *CreateReplicaAction `type:"structure"` + + // The name of the existing replica to be removed. + Delete *DeleteReplicaAction `type:"structure"` +} + +// String returns the string representation +func (s ReplicaUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ReplicaUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicaUpdate"} + if s.Create != nil { + if err := s.Create.Validate(); err != nil { + invalidParams.AddNested("Create", err.(request.ErrInvalidParams)) + } + } + if s.Delete != nil { + if err := s.Delete.Validate(); err != nil { + invalidParams.AddNested("Delete", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCreate sets the Create field's value. +func (s *ReplicaUpdate) SetCreate(v *CreateReplicaAction) *ReplicaUpdate { + s.Create = v + return s +} + +// SetDelete sets the Delete field's value. +func (s *ReplicaUpdate) SetDelete(v *DeleteReplicaAction) *ReplicaUpdate { + s.Delete = v + return s +} + +// Contains details for the restore. +type RestoreSummary struct { + _ struct{} `type:"structure"` + + // Point in time or source backup time. + // + // RestoreDateTime is a required field + RestoreDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // Indicates if a restore is in progress or not. + // + // RestoreInProgress is a required field + RestoreInProgress *bool `type:"boolean" required:"true"` + + // ARN of the backup from which the table was restored. + SourceBackupArn *string `min:"37" type:"string"` + + // ARN of the source table of the backup that is being restored. + SourceTableArn *string `type:"string"` +} + +// String returns the string representation +func (s RestoreSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreSummary) GoString() string { + return s.String() +} + +// SetRestoreDateTime sets the RestoreDateTime field's value. +func (s *RestoreSummary) SetRestoreDateTime(v time.Time) *RestoreSummary { + s.RestoreDateTime = &v + return s +} + +// SetRestoreInProgress sets the RestoreInProgress field's value. +func (s *RestoreSummary) SetRestoreInProgress(v bool) *RestoreSummary { + s.RestoreInProgress = &v + return s +} + +// SetSourceBackupArn sets the SourceBackupArn field's value. +func (s *RestoreSummary) SetSourceBackupArn(v string) *RestoreSummary { + s.SourceBackupArn = &v + return s +} + +// SetSourceTableArn sets the SourceTableArn field's value. +func (s *RestoreSummary) SetSourceTableArn(v string) *RestoreSummary { + s.SourceTableArn = &v + return s +} + +type RestoreTableFromBackupInput struct { + _ struct{} `type:"structure"` + + // The ARN associated with the backup. + // + // BackupArn is a required field + BackupArn *string `min:"37" type:"string" required:"true"` + + // The name of the new table to which the backup must be restored. + // + // TargetTableName is a required field + TargetTableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s RestoreTableFromBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreTableFromBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RestoreTableFromBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreTableFromBackupInput"} + if s.BackupArn == nil { + invalidParams.Add(request.NewErrParamRequired("BackupArn")) + } + if s.BackupArn != nil && len(*s.BackupArn) < 37 { + invalidParams.Add(request.NewErrParamMinLen("BackupArn", 37)) + } + if s.TargetTableName == nil { + invalidParams.Add(request.NewErrParamRequired("TargetTableName")) + } + if s.TargetTableName != nil && len(*s.TargetTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TargetTableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupArn sets the BackupArn field's value. +func (s *RestoreTableFromBackupInput) SetBackupArn(v string) *RestoreTableFromBackupInput { + s.BackupArn = &v + return s +} + +// SetTargetTableName sets the TargetTableName field's value. +func (s *RestoreTableFromBackupInput) SetTargetTableName(v string) *RestoreTableFromBackupInput { + s.TargetTableName = &v + return s +} + +type RestoreTableFromBackupOutput struct { + _ struct{} `type:"structure"` + + // The description of the table created from an existing backup. + TableDescription *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s RestoreTableFromBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreTableFromBackupOutput) GoString() string { + return s.String() +} + +// SetTableDescription sets the TableDescription field's value. +func (s *RestoreTableFromBackupOutput) SetTableDescription(v *TableDescription) *RestoreTableFromBackupOutput { + s.TableDescription = v + return s +} + +type RestoreTableToPointInTimeInput struct { + _ struct{} `type:"structure"` + + // Time in the past to restore the table to. + RestoreDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Name of the source table that is being restored. + // + // SourceTableName is a required field + SourceTableName *string `min:"3" type:"string" required:"true"` + + // The name of the new table to which it must be restored to. + // + // TargetTableName is a required field + TargetTableName *string `min:"3" type:"string" required:"true"` + + // Restore the table to the latest possible time. LatestRestorableDateTime is + // typically 5 minutes before the current time. + UseLatestRestorableTime *bool `type:"boolean"` +} + +// String returns the string representation +func (s RestoreTableToPointInTimeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreTableToPointInTimeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RestoreTableToPointInTimeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreTableToPointInTimeInput"} + if s.SourceTableName == nil { + invalidParams.Add(request.NewErrParamRequired("SourceTableName")) + } + if s.SourceTableName != nil && len(*s.SourceTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("SourceTableName", 3)) + } + if s.TargetTableName == nil { + invalidParams.Add(request.NewErrParamRequired("TargetTableName")) + } + if s.TargetTableName != nil && len(*s.TargetTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TargetTableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRestoreDateTime sets the RestoreDateTime field's value. +func (s *RestoreTableToPointInTimeInput) SetRestoreDateTime(v time.Time) *RestoreTableToPointInTimeInput { + s.RestoreDateTime = &v + return s +} + +// SetSourceTableName sets the SourceTableName field's value. +func (s *RestoreTableToPointInTimeInput) SetSourceTableName(v string) *RestoreTableToPointInTimeInput { + s.SourceTableName = &v + return s +} + +// SetTargetTableName sets the TargetTableName field's value. +func (s *RestoreTableToPointInTimeInput) SetTargetTableName(v string) *RestoreTableToPointInTimeInput { + s.TargetTableName = &v + return s +} + +// SetUseLatestRestorableTime sets the UseLatestRestorableTime field's value. +func (s *RestoreTableToPointInTimeInput) SetUseLatestRestorableTime(v bool) *RestoreTableToPointInTimeInput { + s.UseLatestRestorableTime = &v + return s +} + +type RestoreTableToPointInTimeOutput struct { + _ struct{} `type:"structure"` + + // Represents the properties of a table. + TableDescription *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s RestoreTableToPointInTimeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreTableToPointInTimeOutput) GoString() string { + return s.String() +} + +// SetTableDescription sets the TableDescription field's value. +func (s *RestoreTableToPointInTimeOutput) SetTableDescription(v *TableDescription) *RestoreTableToPointInTimeOutput { + s.TableDescription = v + return s +} + +// The description of the server-side encryption status on the specified table. +type SSEDescription struct { + _ struct{} `type:"structure"` + + // The current state of server-side encryption: + // + // * ENABLING - Server-side encryption is being enabled. + // + // * ENABLED - Server-side encryption is enabled. + // + // * DISABLING - Server-side encryption is being disabled. + // + // * DISABLED - Server-side encryption is disabled. + Status *string `type:"string" enum:"SSEStatus"` +} + +// String returns the string representation +func (s SSEDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSEDescription) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *SSEDescription) SetStatus(v string) *SSEDescription { + s.Status = &v + return s +} + +// Represents the settings used to enable server-side encryption. +type SSESpecification struct { + _ struct{} `type:"structure"` + + // Indicates whether server-side encryption is enabled (true) or disabled (false) + // on the table. 
+ // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s SSESpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSESpecification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SSESpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SSESpecification"} + if s.Enabled == nil { + invalidParams.Add(request.NewErrParamRequired("Enabled")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. +func (s *SSESpecification) SetEnabled(v bool) *SSESpecification { + s.Enabled = &v + return s +} + +// Represents the input of a Scan operation. +type ScanInput struct { + _ struct{} `type:"structure"` + + // This is a legacy parameter. Use ProjectionExpression instead. For more information, + // see AttributesToGet (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.AttributesToGet.html) + // in the Amazon DynamoDB Developer Guide. + AttributesToGet []*string `min:"1" type:"list"` + + // This is a legacy parameter. Use FilterExpression instead. For more information, + // see ConditionalOperator (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ConditionalOperator.html) + // in the Amazon DynamoDB Developer Guide. + ConditionalOperator *string `type:"string" enum:"ConditionalOperator"` + + // A Boolean value that determines the read consistency model during the scan: + // + // * If ConsistentRead is false, then the data returned from Scan might not + // contain the results from other recently completed write operations (PutItem, + // UpdateItem or DeleteItem). + // + // * If ConsistentRead is true, then all of the write operations that completed + // before the Scan began are guaranteed to be contained in the Scan response. + // + // The default setting for ConsistentRead is false. + // + // The ConsistentRead parameter is not supported on global secondary indexes. + // If you scan a global secondary index with ConsistentRead set to true, you + // will receive a ValidationException. + ConsistentRead *bool `type:"boolean"` + + // The primary key of the first item that this operation will evaluate. Use + // the value that was returned for LastEvaluatedKey in the previous operation. + // + // The data type for ExclusiveStartKey must be String, Number or Binary. No + // set data types are allowed. + // + // In a parallel scan, a Scan request that includes ExclusiveStartKey must specify + // the same segment whose previous Scan returned the corresponding value of + // LastEvaluatedKey. + ExclusiveStartKey map[string]*AttributeValue `type:"map"` + + // One or more substitution tokens for attribute names in an expression. The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. 
For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. (For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could specify + // the following for ExpressionAttributeNames: + // + // * {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // * #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. + // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeNames map[string]*string `type:"map"` + + // One or more values that can be substituted in an expression. + // + // Use the : (colon) character in an expression to dereference an attribute + // value. For example, suppose that you wanted to check whether the value of + // the ProductStatus attribute was one of the following: + // + // Available | Backordered | Discontinued + // + // You would first need to specify ExpressionAttributeValues as follows: + // + // { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} + // } + // + // You could then use these values in an expression, such as this: + // + // ProductStatus IN (:avail, :back, :disc) + // + // For more information on expression attribute values, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeValues map[string]*AttributeValue `type:"map"` + + // A string that contains conditions that DynamoDB applies after the Scan operation, + // but before the data is returned to you. Items that do not satisfy the FilterExpression + // criteria are not returned. + // + // A FilterExpression is applied after the items have already been read; the + // process of filtering does not consume any additional read capacity units. + // + // For more information, see Filter Expressions (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#FilteringResults) + // in the Amazon DynamoDB Developer Guide. + FilterExpression *string `type:"string"` + + // The name of a secondary index to scan. This index can be any local secondary + // index or global secondary index. Note that if you use the IndexName parameter, + // you must also provide TableName. + IndexName *string `min:"3" type:"string"` + + // The maximum number of items to evaluate (not necessarily the number of matching + // items). If DynamoDB processes the number of items up to the limit while processing + // the results, it stops the operation and returns the matching values up to + // that point, and a key in LastEvaluatedKey to apply in a subsequent operation, + // so that you can pick up where you left off. 
Also, if the processed data set + // size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation + // and returns the matching values up to the limit, and a key in LastEvaluatedKey + // to apply in a subsequent operation to continue the operation. For more information, + // see Query and Scan (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html) + // in the Amazon DynamoDB Developer Guide. + Limit *int64 `min:"1" type:"integer"` + + // A string that identifies one or more attributes to retrieve from the specified + // table or index. These attributes can include scalars, sets, or elements of + // a JSON document. The attributes in the expression must be separated by commas. + // + // If no attribute names are specified, then all attributes will be returned. + // If any of the requested attributes are not found, they will not appear in + // the result. + // + // For more information, see Accessing Item Attributes (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ProjectionExpression *string `type:"string"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` + + // This is a legacy parameter. Use FilterExpression instead. For more information, + // see ScanFilter (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ScanFilter.html) + // in the Amazon DynamoDB Developer Guide. + ScanFilter map[string]*Condition `type:"map"` + + // For a parallel Scan request, Segment identifies an individual segment to + // be scanned by an application worker. + // + // Segment IDs are zero-based, so the first segment is always 0. For example, + // if you want to use four application threads to scan a table or an index, + // then the first thread specifies a Segment value of 0, the second thread specifies + // 1, and so on. + // + // The value of LastEvaluatedKey returned from a parallel Scan request must + // be used as ExclusiveStartKey with the same segment ID in a subsequent Scan + // operation. + // + // The value for Segment must be greater than or equal to 0, and less than the + // value provided for TotalSegments. + // + // If you provide Segment, you must also provide TotalSegments. + Segment *int64 `type:"integer"` + + // The attributes to be returned in the result. You can retrieve all item attributes, + // specific item attributes, the count of matching items, or in the case of + // an index, some or all of the attributes projected into the index. + // + // * ALL_ATTRIBUTES - Returns all of the item attributes from the specified + // table or index. If you query a local secondary index, then for each matching + // item in the index DynamoDB will fetch the entire item from the parent + // table. 
If the index is configured to project all item attributes, then + // all of the data can be obtained from the local secondary index, and no + // fetching is required. + // + // * ALL_PROJECTED_ATTRIBUTES - Allowed only when querying an index. Retrieves + // all attributes that have been projected into the index. If the index is + // configured to project all attributes, this return value is equivalent + // to specifying ALL_ATTRIBUTES. + // + // * COUNT - Returns the number of matching items, rather than the matching + // items themselves. + // + // * SPECIFIC_ATTRIBUTES - Returns only the attributes listed in AttributesToGet. + // This return value is equivalent to specifying AttributesToGet without + // specifying any value for Select. + // + // If you query or scan a local secondary index and request only attributes + // that are projected into that index, the operation will read only the index + // and not the table. If any of the requested attributes are not projected + // into the local secondary index, DynamoDB will fetch each of these attributes + // from the parent table. This extra fetching incurs additional throughput + // cost and latency. + // + // If you query or scan a global secondary index, you can only request attributes + // that are projected into the index. Global secondary index queries cannot + // fetch attributes from the parent table. + // + // If neither Select nor AttributesToGet are specified, DynamoDB defaults to + // ALL_ATTRIBUTES when accessing a table, and ALL_PROJECTED_ATTRIBUTES when + // accessing an index. You cannot use both Select and AttributesToGet together + // in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES. + // (This usage is equivalent to specifying AttributesToGet without any value + // for Select.) + // + // If you use the ProjectionExpression parameter, then the value for Select + // can only be SPECIFIC_ATTRIBUTES. Any other value for Select will return an + // error. + Select *string `type:"string" enum:"Select"` + + // The name of the table containing the requested items; or, if you provide + // IndexName, the name of the table to which that index belongs. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` + + // For a parallel Scan request, TotalSegments represents the total number of + // segments into which the Scan operation will be divided. The value of TotalSegments + // corresponds to the number of application workers that will perform the parallel + // scan. For example, if you want to use four application threads to scan a + // table or an index, specify a TotalSegments value of 4. + // + // The value for TotalSegments must be greater than or equal to 1, and less + // than or equal to 1000000. If you specify a TotalSegments value of 1, the + // Scan operation will be sequential rather than parallel. + // + // If you specify TotalSegments, you must also specify Segment. + TotalSegments *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s ScanInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScanInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
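+//
+// Editor's note (illustrative sketch, not part of the generated SDK): a ScanInput
+// is typically populated with expression placeholders and, for a parallel scan,
+// a Segment/TotalSegments pair before being sent. The table and attribute names
+// below are hypothetical, and svc is assumed to be an existing *dynamodb.DynamoDB
+// client created elsewhere:
+//
+//	input := &dynamodb.ScanInput{
+//		TableName:        aws.String("Music"),
+//		FilterExpression: aws.String("#st = :avail"),
+//		ExpressionAttributeNames: map[string]*string{"#st": aws.String("ProductStatus")},
+//		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
+//			":avail": {S: aws.String("Available")},
+//		},
+//		Segment:       aws.Int64(0),
+//		TotalSegments: aws.Int64(4),
+//	}
+//	out, err := svc.Scan(input)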
+func (s *ScanInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ScanInput"} + if s.AttributesToGet != nil && len(s.AttributesToGet) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttributesToGet", 1)) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.TotalSegments != nil && *s.TotalSegments < 1 { + invalidParams.Add(request.NewErrParamMinValue("TotalSegments", 1)) + } + if s.ScanFilter != nil { + for i, v := range s.ScanFilter { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ScanFilter", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributesToGet sets the AttributesToGet field's value. +func (s *ScanInput) SetAttributesToGet(v []*string) *ScanInput { + s.AttributesToGet = v + return s +} + +// SetConditionalOperator sets the ConditionalOperator field's value. +func (s *ScanInput) SetConditionalOperator(v string) *ScanInput { + s.ConditionalOperator = &v + return s +} + +// SetConsistentRead sets the ConsistentRead field's value. +func (s *ScanInput) SetConsistentRead(v bool) *ScanInput { + s.ConsistentRead = &v + return s +} + +// SetExclusiveStartKey sets the ExclusiveStartKey field's value. +func (s *ScanInput) SetExclusiveStartKey(v map[string]*AttributeValue) *ScanInput { + s.ExclusiveStartKey = v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. +func (s *ScanInput) SetExpressionAttributeNames(v map[string]*string) *ScanInput { + s.ExpressionAttributeNames = v + return s +} + +// SetExpressionAttributeValues sets the ExpressionAttributeValues field's value. +func (s *ScanInput) SetExpressionAttributeValues(v map[string]*AttributeValue) *ScanInput { + s.ExpressionAttributeValues = v + return s +} + +// SetFilterExpression sets the FilterExpression field's value. +func (s *ScanInput) SetFilterExpression(v string) *ScanInput { + s.FilterExpression = &v + return s +} + +// SetIndexName sets the IndexName field's value. +func (s *ScanInput) SetIndexName(v string) *ScanInput { + s.IndexName = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ScanInput) SetLimit(v int64) *ScanInput { + s.Limit = &v + return s +} + +// SetProjectionExpression sets the ProjectionExpression field's value. +func (s *ScanInput) SetProjectionExpression(v string) *ScanInput { + s.ProjectionExpression = &v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *ScanInput) SetReturnConsumedCapacity(v string) *ScanInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetScanFilter sets the ScanFilter field's value. +func (s *ScanInput) SetScanFilter(v map[string]*Condition) *ScanInput { + s.ScanFilter = v + return s +} + +// SetSegment sets the Segment field's value. +func (s *ScanInput) SetSegment(v int64) *ScanInput { + s.Segment = &v + return s +} + +// SetSelect sets the Select field's value. 
+func (s *ScanInput) SetSelect(v string) *ScanInput { + s.Select = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *ScanInput) SetTableName(v string) *ScanInput { + s.TableName = &v + return s +} + +// SetTotalSegments sets the TotalSegments field's value. +func (s *ScanInput) SetTotalSegments(v int64) *ScanInput { + s.TotalSegments = &v + return s +} + +// Represents the output of a Scan operation. +type ScanOutput struct { + _ struct{} `type:"structure"` + + // The capacity units consumed by the Scan operation. The data returned includes + // the total provisioned throughput consumed, along with statistics for the + // table and any indexes involved in the operation. ConsumedCapacity is only + // returned if the ReturnConsumedCapacity parameter was specified. For more + // information, see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // The number of items in the response. + // + // If you set ScanFilter in the request, then Count is the number of items returned + // after the filter was applied, and ScannedCount is the number of matching + // items before the filter was applied. + // + // If you did not use a filter in the request, then Count is the same as ScannedCount. + Count *int64 `type:"integer"` + + // An array of item attributes that match the scan criteria. Each element in + // this array consists of an attribute name and the value for that attribute. + Items []map[string]*AttributeValue `type:"list"` + + // The primary key of the item where the operation stopped, inclusive of the + // previous result set. Use this value to start a new operation, excluding this + // value in the new request. + // + // If LastEvaluatedKey is empty, then the "last page" of results has been processed + // and there is no more data to be retrieved. + // + // If LastEvaluatedKey is not empty, it does not necessarily mean that there + // is more data in the result set. The only way to know when you have reached + // the end of the result set is when LastEvaluatedKey is empty. + LastEvaluatedKey map[string]*AttributeValue `type:"map"` + + // The number of items evaluated, before any ScanFilter is applied. A high ScannedCount + // value with few, or no, Count results indicates an inefficient Scan operation. + // For more information, see Count and ScannedCount (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#Count) + // in the Amazon DynamoDB Developer Guide. + // + // If you did not use a filter in the request, then ScannedCount is the same + // as Count. + ScannedCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s ScanOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScanOutput) GoString() string { + return s.String() +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *ScanOutput) SetConsumedCapacity(v *ConsumedCapacity) *ScanOutput { + s.ConsumedCapacity = v + return s +} + +// SetCount sets the Count field's value. +func (s *ScanOutput) SetCount(v int64) *ScanOutput { + s.Count = &v + return s +} + +// SetItems sets the Items field's value. +func (s *ScanOutput) SetItems(v []map[string]*AttributeValue) *ScanOutput { + s.Items = v + return s +} + +// SetLastEvaluatedKey sets the LastEvaluatedKey field's value. 
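+//
+// Editor's note (illustrative sketch, not part of the generated SDK): callers
+// typically feed LastEvaluatedKey back as ExclusiveStartKey to page through a
+// scan, reusing the hypothetical svc and input from the sketch above:
+//
+//	for {
+//		out, err := svc.Scan(input)
+//		if err != nil {
+//			break // handle the error appropriately in real code
+//		}
+//		// ... process out.Items ...
+//		if len(out.LastEvaluatedKey) == 0 {
+//			break
+//		}
+//		input.ExclusiveStartKey = out.LastEvaluatedKey
+//	}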
+func (s *ScanOutput) SetLastEvaluatedKey(v map[string]*AttributeValue) *ScanOutput { + s.LastEvaluatedKey = v + return s +} + +// SetScannedCount sets the ScannedCount field's value. +func (s *ScanOutput) SetScannedCount(v int64) *ScanOutput { + s.ScannedCount = &v + return s +} + +// Contains the details of the table when the backup was created. +type SourceTableDetails struct { + _ struct{} `type:"structure"` + + // Number of items in the table. Please note this is an approximate value. + ItemCount *int64 `type:"long"` + + // Schema of the table. + // + // KeySchema is a required field + KeySchema []*KeySchemaElement `min:"1" type:"list" required:"true"` + + // Read IOPs and Write IOPS on the table when the backup was created. + // + // ProvisionedThroughput is a required field + ProvisionedThroughput *ProvisionedThroughput `type:"structure" required:"true"` + + // ARN of the table for which backup was created. + TableArn *string `type:"string"` + + // Time when the source table was created. + // + // TableCreationDateTime is a required field + TableCreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // Unique identifier for the table for which the backup was created. + // + // TableId is a required field + TableId *string `type:"string" required:"true"` + + // The name of the table for which the backup was created. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` + + // Size of the table in bytes. Please note this is an approximate value. + TableSizeBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s SourceTableDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SourceTableDetails) GoString() string { + return s.String() +} + +// SetItemCount sets the ItemCount field's value. +func (s *SourceTableDetails) SetItemCount(v int64) *SourceTableDetails { + s.ItemCount = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *SourceTableDetails) SetKeySchema(v []*KeySchemaElement) *SourceTableDetails { + s.KeySchema = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *SourceTableDetails) SetProvisionedThroughput(v *ProvisionedThroughput) *SourceTableDetails { + s.ProvisionedThroughput = v + return s +} + +// SetTableArn sets the TableArn field's value. +func (s *SourceTableDetails) SetTableArn(v string) *SourceTableDetails { + s.TableArn = &v + return s +} + +// SetTableCreationDateTime sets the TableCreationDateTime field's value. +func (s *SourceTableDetails) SetTableCreationDateTime(v time.Time) *SourceTableDetails { + s.TableCreationDateTime = &v + return s +} + +// SetTableId sets the TableId field's value. +func (s *SourceTableDetails) SetTableId(v string) *SourceTableDetails { + s.TableId = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *SourceTableDetails) SetTableName(v string) *SourceTableDetails { + s.TableName = &v + return s +} + +// SetTableSizeBytes sets the TableSizeBytes field's value. +func (s *SourceTableDetails) SetTableSizeBytes(v int64) *SourceTableDetails { + s.TableSizeBytes = &v + return s +} + +// Contains the details of the features enabled on the table when the backup +// was created. For example, LSIs, GSIs, streams, TTL. +type SourceTableFeatureDetails struct { + _ struct{} `type:"structure"` + + // Represents the GSI properties for the table when the backup was created. 
+ // It includes the IndexName, KeySchema, Projection and ProvisionedThroughput + // for the GSIs on the table at the time of backup. + GlobalSecondaryIndexes []*GlobalSecondaryIndexInfo `type:"list"` + + // Represents the LSI properties for the table when the backup was created. + // It includes the IndexName, KeySchema and Projection for the LSIs on the table + // at the time of backup. + LocalSecondaryIndexes []*LocalSecondaryIndexInfo `type:"list"` + + // The description of the server-side encryption status on the table when the + // backup was created. + SSEDescription *SSEDescription `type:"structure"` + + // Stream settings on the table when the backup was created. + StreamDescription *StreamSpecification `type:"structure"` + + // Time to Live settings on the table when the backup was created. + TimeToLiveDescription *TimeToLiveDescription `type:"structure"` +} + +// String returns the string representation +func (s SourceTableFeatureDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SourceTableFeatureDetails) GoString() string { + return s.String() +} + +// SetGlobalSecondaryIndexes sets the GlobalSecondaryIndexes field's value. +func (s *SourceTableFeatureDetails) SetGlobalSecondaryIndexes(v []*GlobalSecondaryIndexInfo) *SourceTableFeatureDetails { + s.GlobalSecondaryIndexes = v + return s +} + +// SetLocalSecondaryIndexes sets the LocalSecondaryIndexes field's value. +func (s *SourceTableFeatureDetails) SetLocalSecondaryIndexes(v []*LocalSecondaryIndexInfo) *SourceTableFeatureDetails { + s.LocalSecondaryIndexes = v + return s +} + +// SetSSEDescription sets the SSEDescription field's value. +func (s *SourceTableFeatureDetails) SetSSEDescription(v *SSEDescription) *SourceTableFeatureDetails { + s.SSEDescription = v + return s +} + +// SetStreamDescription sets the StreamDescription field's value. +func (s *SourceTableFeatureDetails) SetStreamDescription(v *StreamSpecification) *SourceTableFeatureDetails { + s.StreamDescription = v + return s +} + +// SetTimeToLiveDescription sets the TimeToLiveDescription field's value. +func (s *SourceTableFeatureDetails) SetTimeToLiveDescription(v *TimeToLiveDescription) *SourceTableFeatureDetails { + s.TimeToLiveDescription = v + return s +} + +// Represents the DynamoDB Streams configuration for a table in DynamoDB. +type StreamSpecification struct { + _ struct{} `type:"structure"` + + // Indicates whether DynamoDB Streams is enabled (true) or disabled (false) + // on the table. + StreamEnabled *bool `type:"boolean"` + + // When an item in the table is modified, StreamViewType determines what information + // is written to the stream for this table. Valid values for StreamViewType + // are: + // + // * KEYS_ONLY - Only the key attributes of the modified item are written + // to the stream. + // + // * NEW_IMAGE - The entire item, as it appears after it was modified, is + // written to the stream. + // + // * OLD_IMAGE - The entire item, as it appeared before it was modified, + // is written to the stream. + // + // * NEW_AND_OLD_IMAGES - Both the new and the old item images of the item + // are written to the stream. 
+ StreamViewType *string `type:"string" enum:"StreamViewType"` +} + +// String returns the string representation +func (s StreamSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StreamSpecification) GoString() string { + return s.String() +} + +// SetStreamEnabled sets the StreamEnabled field's value. +func (s *StreamSpecification) SetStreamEnabled(v bool) *StreamSpecification { + s.StreamEnabled = &v + return s +} + +// SetStreamViewType sets the StreamViewType field's value. +func (s *StreamSpecification) SetStreamViewType(v string) *StreamSpecification { + s.StreamViewType = &v + return s +} + +// Represents the properties of a table. +type TableDescription struct { + _ struct{} `type:"structure"` + + // An array of AttributeDefinition objects. Each of these objects describes + // one attribute in the table and index key schema. + // + // Each AttributeDefinition object in this array is composed of: + // + // * AttributeName - The name of the attribute. + // + // * AttributeType - The data type for the attribute. + AttributeDefinitions []*AttributeDefinition `type:"list"` + + // The date and time when the table was created, in UNIX epoch time (http://www.epochconverter.com/) + // format. + CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The global secondary indexes, if any, on the table. Each index is scoped + // to a given partition key value. Each element is composed of: + // + // * Backfilling - If true, then the index is currently in the backfilling + // phase. Backfilling occurs only when a new global secondary index is added + // to the table; it is the process by which DynamoDB populates the new index + // with data from the table. (This attribute does not appear for indexes + // that were created during a CreateTable operation.) + // + // * IndexName - The name of the global secondary index. + // + // * IndexSizeBytes - The total size of the global secondary index, in bytes. + // DynamoDB updates this value approximately every six hours. Recent changes + // might not be reflected in this value. + // + // * IndexStatus - The current status of the global secondary index: + // + // CREATING - The index is being created. + // + // UPDATING - The index is being updated. + // + // DELETING - The index is being deleted. + // + // ACTIVE - The index is ready for use. + // + // * ItemCount - The number of items in the global secondary index. DynamoDB + // updates this value approximately every six hours. Recent changes might + // not be reflected in this value. + // + // * KeySchema - Specifies the complete index key schema. The attribute names + // in the key schema must be between 1 and 255 characters (inclusive). The + // key schema must begin with the same partition key as the table. + // + // * Projection - Specifies attributes that are copied (projected) from the + // table into the index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. Each attribute + // specification is composed of: + // + // ProjectionType - One of the following: + // + // KEYS_ONLY - Only the index and primary keys are projected into the index. + // + // INCLUDE - Only the specified table attributes are projected into the index. + // The list of projected attributes are in NonKeyAttributes. + // + // ALL - All of the table attributes are projected into the index. 
+ // + // NonKeyAttributes - A list of one or more non-key attribute names that are + // projected into the secondary index. The total count of attributes provided + // in NonKeyAttributes, summed across all of the secondary indexes, must + // not exceed 20. If you project the same attribute into two different indexes, + // this counts as two distinct attributes when determining the total. + // + // * ProvisionedThroughput - The provisioned throughput settings for the + // global secondary index, consisting of read and write capacity units, along + // with data about increases and decreases. + // + // If the table is in the DELETING state, no information about indexes will + // be returned. + GlobalSecondaryIndexes []*GlobalSecondaryIndexDescription `type:"list"` + + // The number of items in the specified table. DynamoDB updates this value approximately + // every six hours. Recent changes might not be reflected in this value. + ItemCount *int64 `type:"long"` + + // The primary key structure for the table. Each KeySchemaElement consists of: + // + // * AttributeName - The name of the attribute. + // + // * KeyType - The role of the attribute: + // + // HASH - partition key + // + // RANGE - sort key + // + // The partition key of an item is also known as its hash attribute. The term + // "hash attribute" derives from DynamoDB' usage of an internal hash function + // to evenly distribute data items across partitions, based on their partition + // key values. + // + // The sort key of an item is also known as its range attribute. The term "range + // attribute" derives from the way DynamoDB stores items with the same partition + // key physically close together, in sorted order by the sort key value. + // + // For more information about primary keys, see Primary Key (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html#DataModelPrimaryKey) + // in the Amazon DynamoDB Developer Guide. + KeySchema []*KeySchemaElement `min:"1" type:"list"` + + // The Amazon Resource Name (ARN) that uniquely identifies the latest stream + // for this table. + LatestStreamArn *string `min:"37" type:"string"` + + // A timestamp, in ISO 8601 format, for this stream. + // + // Note that LatestStreamLabel is not a unique identifier for the stream, because + // it is possible that a stream from another table might have the same timestamp. + // However, the combination of the following three elements is guaranteed to + // be unique: + // + // * the AWS customer ID. + // + // * the table name. + // + // * the StreamLabel. + LatestStreamLabel *string `type:"string"` + + // Represents one or more local secondary indexes on the table. Each index is + // scoped to a given partition key value. Tables with one or more local secondary + // indexes are subject to an item collection size limit, where the amount of + // data within a given item collection cannot exceed 10 GB. Each element is + // composed of: + // + // * IndexName - The name of the local secondary index. + // + // * KeySchema - Specifies the complete index key schema. The attribute names + // in the key schema must be between 1 and 255 characters (inclusive). The + // key schema must begin with the same partition key as the table. + // + // * Projection - Specifies attributes that are copied (projected) from the + // table into the index. These are in addition to the primary key attributes + // and index key attributes, which are automatically projected. 
Each attribute + // specification is composed of: + // + // ProjectionType - One of the following: + // + // KEYS_ONLY - Only the index and primary keys are projected into the index. + // + // INCLUDE - Only the specified table attributes are projected into the index. + // The list of projected attributes are in NonKeyAttributes. + // + // ALL - All of the table attributes are projected into the index. + // + // NonKeyAttributes - A list of one or more non-key attribute names that are + // projected into the secondary index. The total count of attributes provided + // in NonKeyAttributes, summed across all of the secondary indexes, must + // not exceed 20. If you project the same attribute into two different indexes, + // this counts as two distinct attributes when determining the total. + // + // * IndexSizeBytes - Represents the total size of the index, in bytes. DynamoDB + // updates this value approximately every six hours. Recent changes might + // not be reflected in this value. + // + // * ItemCount - Represents the number of items in the index. DynamoDB updates + // this value approximately every six hours. Recent changes might not be + // reflected in this value. + // + // If the table is in the DELETING state, no information about indexes will + // be returned. + LocalSecondaryIndexes []*LocalSecondaryIndexDescription `type:"list"` + + // The provisioned throughput settings for the table, consisting of read and + // write capacity units, along with data about increases and decreases. + ProvisionedThroughput *ProvisionedThroughputDescription `type:"structure"` + + // Contains details for the restore. + RestoreSummary *RestoreSummary `type:"structure"` + + // The description of the server-side encryption status on the specified table. + SSEDescription *SSEDescription `type:"structure"` + + // The current DynamoDB Streams configuration for the table. + StreamSpecification *StreamSpecification `type:"structure"` + + // The Amazon Resource Name (ARN) that uniquely identifies the table. + TableArn *string `type:"string"` + + // Unique identifier for the table for which the backup was created. + TableId *string `type:"string"` + + // The name of the table. + TableName *string `min:"3" type:"string"` + + // The total size of the specified table, in bytes. DynamoDB updates this value + // approximately every six hours. Recent changes might not be reflected in this + // value. + TableSizeBytes *int64 `type:"long"` + + // The current state of the table: + // + // * CREATING - The table is being created. + // + // * UPDATING - The table is being updated. + // + // * DELETING - The table is being deleted. + // + // * ACTIVE - The table is ready for use. + TableStatus *string `type:"string" enum:"TableStatus"` +} + +// String returns the string representation +func (s TableDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TableDescription) GoString() string { + return s.String() +} + +// SetAttributeDefinitions sets the AttributeDefinitions field's value. +func (s *TableDescription) SetAttributeDefinitions(v []*AttributeDefinition) *TableDescription { + s.AttributeDefinitions = v + return s +} + +// SetCreationDateTime sets the CreationDateTime field's value. +func (s *TableDescription) SetCreationDateTime(v time.Time) *TableDescription { + s.CreationDateTime = &v + return s +} + +// SetGlobalSecondaryIndexes sets the GlobalSecondaryIndexes field's value. 
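+//
+// Editor's note (illustrative sketch, not part of the generated SDK): a
+// TableDescription is commonly obtained from DescribeTable and checked for
+// readiness before use; the table name is hypothetical and svc is assumed to
+// be an existing *dynamodb.DynamoDB client:
+//
+//	desc, err := svc.DescribeTable(&dynamodb.DescribeTableInput{
+//		TableName: aws.String("Music"),
+//	})
+//	if err == nil && aws.StringValue(desc.Table.TableStatus) == dynamodb.TableStatusActive {
+//		// the table (and its indexes) is ready for use
+//	}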
+func (s *TableDescription) SetGlobalSecondaryIndexes(v []*GlobalSecondaryIndexDescription) *TableDescription { + s.GlobalSecondaryIndexes = v + return s +} + +// SetItemCount sets the ItemCount field's value. +func (s *TableDescription) SetItemCount(v int64) *TableDescription { + s.ItemCount = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *TableDescription) SetKeySchema(v []*KeySchemaElement) *TableDescription { + s.KeySchema = v + return s +} + +// SetLatestStreamArn sets the LatestStreamArn field's value. +func (s *TableDescription) SetLatestStreamArn(v string) *TableDescription { + s.LatestStreamArn = &v + return s +} + +// SetLatestStreamLabel sets the LatestStreamLabel field's value. +func (s *TableDescription) SetLatestStreamLabel(v string) *TableDescription { + s.LatestStreamLabel = &v + return s +} + +// SetLocalSecondaryIndexes sets the LocalSecondaryIndexes field's value. +func (s *TableDescription) SetLocalSecondaryIndexes(v []*LocalSecondaryIndexDescription) *TableDescription { + s.LocalSecondaryIndexes = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *TableDescription) SetProvisionedThroughput(v *ProvisionedThroughputDescription) *TableDescription { + s.ProvisionedThroughput = v + return s +} + +// SetRestoreSummary sets the RestoreSummary field's value. +func (s *TableDescription) SetRestoreSummary(v *RestoreSummary) *TableDescription { + s.RestoreSummary = v + return s +} + +// SetSSEDescription sets the SSEDescription field's value. +func (s *TableDescription) SetSSEDescription(v *SSEDescription) *TableDescription { + s.SSEDescription = v + return s +} + +// SetStreamSpecification sets the StreamSpecification field's value. +func (s *TableDescription) SetStreamSpecification(v *StreamSpecification) *TableDescription { + s.StreamSpecification = v + return s +} + +// SetTableArn sets the TableArn field's value. +func (s *TableDescription) SetTableArn(v string) *TableDescription { + s.TableArn = &v + return s +} + +// SetTableId sets the TableId field's value. +func (s *TableDescription) SetTableId(v string) *TableDescription { + s.TableId = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *TableDescription) SetTableName(v string) *TableDescription { + s.TableName = &v + return s +} + +// SetTableSizeBytes sets the TableSizeBytes field's value. +func (s *TableDescription) SetTableSizeBytes(v int64) *TableDescription { + s.TableSizeBytes = &v + return s +} + +// SetTableStatus sets the TableStatus field's value. +func (s *TableDescription) SetTableStatus(v string) *TableDescription { + s.TableStatus = &v + return s +} + +// Describes a tag. A tag is a key-value pair. You can add up to 50 tags to +// a single DynamoDB table. +// +// AWS-assigned tag names and values are automatically assigned the aws: prefix, +// which the user cannot assign. AWS-assigned tag names do not count towards +// the tag limit of 50. User-assigned tag names have the prefix user: in the +// Cost Allocation Report. You cannot backdate the application of a tag. +// +// For an overview on tagging DynamoDB resources, see Tagging for DynamoDB (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tagging.html) +// in the Amazon DynamoDB Developer Guide. +type Tag struct { + _ struct{} `type:"structure"` + + // The key of the tag.Tag keys are case sensitive. Each DynamoDB table can only + // have up to one tag with the same key. 
If you try to add an existing tag (same + // key), the existing tag value will be updated to the new value. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The value of the tag. Tag values are case-sensitive and can be null. + // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type TagResourceInput struct { + _ struct{} `type:"structure"` + + // Identifies the Amazon DynamoDB resource to which tags should be added. This + // value is an Amazon Resource Name (ARN). + // + // ResourceArn is a required field + ResourceArn *string `min:"1" type:"string" required:"true"` + + // The tags to be assigned to the Amazon DynamoDB resource. + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s TagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.ResourceArn != nil && len(*s.ResourceArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceArn", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *TagResourceInput) SetResourceArn(v string) *TagResourceInput { + s.ResourceArn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagResourceInput) SetTags(v []*Tag) *TagResourceInput { + s.Tags = v + return s +} + +type TagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceOutput) GoString() string { + return s.String() +} + +// The description of the Time to Live (TTL) status on the specified table. 
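+//
+// Editor's note (illustrative sketch, not part of the generated SDK): TTL is
+// usually enabled with a TimeToLiveSpecification (defined below) and read back
+// as a TimeToLiveDescription; the table and attribute names are hypothetical,
+// and svc is assumed to be an existing *dynamodb.DynamoDB client:
+//
+//	_, err := svc.UpdateTimeToLive(&dynamodb.UpdateTimeToLiveInput{
+//		TableName: aws.String("Music"),
+//		TimeToLiveSpecification: &dynamodb.TimeToLiveSpecification{
+//			AttributeName: aws.String("ExpiresAt"),
+//			Enabled:       aws.Bool(true),
+//		},
+//	})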
+type TimeToLiveDescription struct {
+	_ struct{} `type:"structure"`
+
+	// The name of the Time to Live attribute for items in the table.
+	AttributeName *string `min:"1" type:"string"`
+
+	// The Time to Live status for the table.
+	TimeToLiveStatus *string `type:"string" enum:"TimeToLiveStatus"`
+}
+
+// String returns the string representation
+func (s TimeToLiveDescription) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s TimeToLiveDescription) GoString() string {
+	return s.String()
+}
+
+// SetAttributeName sets the AttributeName field's value.
+func (s *TimeToLiveDescription) SetAttributeName(v string) *TimeToLiveDescription {
+	s.AttributeName = &v
+	return s
+}
+
+// SetTimeToLiveStatus sets the TimeToLiveStatus field's value.
+func (s *TimeToLiveDescription) SetTimeToLiveStatus(v string) *TimeToLiveDescription {
+	s.TimeToLiveStatus = &v
+	return s
+}
+
+// Represents the settings used to enable or disable Time to Live for the specified
+// table.
+type TimeToLiveSpecification struct {
+	_ struct{} `type:"structure"`
+
+	// The name of the Time to Live attribute used to store the expiration time
+	// for items in the table.
+	//
+	// AttributeName is a required field
+	AttributeName *string `min:"1" type:"string" required:"true"`
+
+	// Indicates whether Time to Live is to be enabled (true) or disabled (false)
+	// on the table.
+	//
+	// Enabled is a required field
+	Enabled *bool `type:"boolean" required:"true"`
+}
+
+// String returns the string representation
+func (s TimeToLiveSpecification) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s TimeToLiveSpecification) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *TimeToLiveSpecification) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "TimeToLiveSpecification"}
+	if s.AttributeName == nil {
+		invalidParams.Add(request.NewErrParamRequired("AttributeName"))
+	}
+	if s.AttributeName != nil && len(*s.AttributeName) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("AttributeName", 1))
+	}
+	if s.Enabled == nil {
+		invalidParams.Add(request.NewErrParamRequired("Enabled"))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetAttributeName sets the AttributeName field's value.
+func (s *TimeToLiveSpecification) SetAttributeName(v string) *TimeToLiveSpecification {
+	s.AttributeName = &v
+	return s
+}
+
+// SetEnabled sets the Enabled field's value.
+func (s *TimeToLiveSpecification) SetEnabled(v bool) *TimeToLiveSpecification {
+	s.Enabled = &v
+	return s
+}
+
+type UntagResourceInput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon DynamoDB resource the tags will be removed from. This value is
+	// an Amazon Resource Name (ARN).
+	//
+	// ResourceArn is a required field
+	ResourceArn *string `min:"1" type:"string" required:"true"`
+
+	// A list of tag keys. Existing tags of the resource whose keys are members
+	// of this list will be removed from the Amazon DynamoDB resource.
+ // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.ResourceArn != nil && len(*s.ResourceArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceArn", 1)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *UntagResourceInput) SetResourceArn(v string) *UntagResourceInput { + s.ResourceArn = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *UntagResourceInput) SetTagKeys(v []*string) *UntagResourceInput { + s.TagKeys = v + return s +} + +type UntagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceOutput) GoString() string { + return s.String() +} + +type UpdateContinuousBackupsInput struct { + _ struct{} `type:"structure"` + + // Represents the settings used to enable point in time recovery. + // + // PointInTimeRecoverySpecification is a required field + PointInTimeRecoverySpecification *PointInTimeRecoverySpecification `type:"structure" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateContinuousBackupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateContinuousBackupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateContinuousBackupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateContinuousBackupsInput"} + if s.PointInTimeRecoverySpecification == nil { + invalidParams.Add(request.NewErrParamRequired("PointInTimeRecoverySpecification")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.PointInTimeRecoverySpecification != nil { + if err := s.PointInTimeRecoverySpecification.Validate(); err != nil { + invalidParams.AddNested("PointInTimeRecoverySpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPointInTimeRecoverySpecification sets the PointInTimeRecoverySpecification field's value. +func (s *UpdateContinuousBackupsInput) SetPointInTimeRecoverySpecification(v *PointInTimeRecoverySpecification) *UpdateContinuousBackupsInput { + s.PointInTimeRecoverySpecification = v + return s +} + +// SetTableName sets the TableName field's value. 
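+//
+// Editor's note (illustrative sketch, not part of the generated SDK): an
+// UpdateContinuousBackupsInput is typically used to turn point-in-time recovery
+// on for a table; the table name is hypothetical and svc is assumed to be an
+// existing *dynamodb.DynamoDB client:
+//
+//	_, err := svc.UpdateContinuousBackups(&dynamodb.UpdateContinuousBackupsInput{
+//		TableName: aws.String("Music"),
+//		PointInTimeRecoverySpecification: &dynamodb.PointInTimeRecoverySpecification{
+//			PointInTimeRecoveryEnabled: aws.Bool(true),
+//		},
+//	})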
+func (s *UpdateContinuousBackupsInput) SetTableName(v string) *UpdateContinuousBackupsInput { + s.TableName = &v + return s +} + +type UpdateContinuousBackupsOutput struct { + _ struct{} `type:"structure"` + + // Represents the continuous backups and point in time recovery settings on + // the table. + ContinuousBackupsDescription *ContinuousBackupsDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateContinuousBackupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateContinuousBackupsOutput) GoString() string { + return s.String() +} + +// SetContinuousBackupsDescription sets the ContinuousBackupsDescription field's value. +func (s *UpdateContinuousBackupsOutput) SetContinuousBackupsDescription(v *ContinuousBackupsDescription) *UpdateContinuousBackupsOutput { + s.ContinuousBackupsDescription = v + return s +} + +// Represents the new provisioned throughput settings to be applied to a global +// secondary index. +type UpdateGlobalSecondaryIndexAction struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index to be updated. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // Represents the provisioned throughput settings for the specified global secondary + // index. + // + // For current minimum and maximum provisioned throughput values, see Limits + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) + // in the Amazon DynamoDB Developer Guide. + // + // ProvisionedThroughput is a required field + ProvisionedThroughput *ProvisionedThroughput `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateGlobalSecondaryIndexAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalSecondaryIndexAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateGlobalSecondaryIndexAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGlobalSecondaryIndexAction"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.ProvisionedThroughput == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisionedThroughput")) + } + if s.ProvisionedThroughput != nil { + if err := s.ProvisionedThroughput.Validate(); err != nil { + invalidParams.AddNested("ProvisionedThroughput", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *UpdateGlobalSecondaryIndexAction) SetIndexName(v string) *UpdateGlobalSecondaryIndexAction { + s.IndexName = &v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *UpdateGlobalSecondaryIndexAction) SetProvisionedThroughput(v *ProvisionedThroughput) *UpdateGlobalSecondaryIndexAction { + s.ProvisionedThroughput = v + return s +} + +type UpdateGlobalTableInput struct { + _ struct{} `type:"structure"` + + // The global table name. 
+ // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` + + // A list of regions that should be added or removed from the global table. + // + // ReplicaUpdates is a required field + ReplicaUpdates []*ReplicaUpdate `type:"list" required:"true"` +} + +// String returns the string representation +func (s UpdateGlobalTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateGlobalTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGlobalTableInput"} + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + if s.ReplicaUpdates == nil { + invalidParams.Add(request.NewErrParamRequired("ReplicaUpdates")) + } + if s.ReplicaUpdates != nil { + for i, v := range s.ReplicaUpdates { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaUpdates", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *UpdateGlobalTableInput) SetGlobalTableName(v string) *UpdateGlobalTableInput { + s.GlobalTableName = &v + return s +} + +// SetReplicaUpdates sets the ReplicaUpdates field's value. +func (s *UpdateGlobalTableInput) SetReplicaUpdates(v []*ReplicaUpdate) *UpdateGlobalTableInput { + s.ReplicaUpdates = v + return s +} + +type UpdateGlobalTableOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of the global table. + GlobalTableDescription *GlobalTableDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateGlobalTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalTableOutput) GoString() string { + return s.String() +} + +// SetGlobalTableDescription sets the GlobalTableDescription field's value. +func (s *UpdateGlobalTableOutput) SetGlobalTableDescription(v *GlobalTableDescription) *UpdateGlobalTableOutput { + s.GlobalTableDescription = v + return s +} + +type UpdateGlobalTableSettingsInput struct { + _ struct{} `type:"structure"` + + // Represents the settings of a global secondary index for a global table that + // will be modified. + GlobalTableGlobalSecondaryIndexSettingsUpdate []*GlobalTableGlobalSecondaryIndexSettingsUpdate `min:"1" type:"list"` + + // The name of the global table + // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + GlobalTableProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` + + // Represents the settings for a global table in a region that will be modified. 
+ ReplicaSettingsUpdate []*ReplicaSettingsUpdate `min:"1" type:"list"` +} + +// String returns the string representation +func (s UpdateGlobalTableSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalTableSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateGlobalTableSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGlobalTableSettingsInput"} + if s.GlobalTableGlobalSecondaryIndexSettingsUpdate != nil && len(s.GlobalTableGlobalSecondaryIndexSettingsUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableGlobalSecondaryIndexSettingsUpdate", 1)) + } + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + if s.GlobalTableProvisionedWriteCapacityUnits != nil && *s.GlobalTableProvisionedWriteCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("GlobalTableProvisionedWriteCapacityUnits", 1)) + } + if s.ReplicaSettingsUpdate != nil && len(s.ReplicaSettingsUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReplicaSettingsUpdate", 1)) + } + if s.GlobalTableGlobalSecondaryIndexSettingsUpdate != nil { + for i, v := range s.GlobalTableGlobalSecondaryIndexSettingsUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "GlobalTableGlobalSecondaryIndexSettingsUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ReplicaSettingsUpdate != nil { + for i, v := range s.ReplicaSettingsUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaSettingsUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableGlobalSecondaryIndexSettingsUpdate sets the GlobalTableGlobalSecondaryIndexSettingsUpdate field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableGlobalSecondaryIndexSettingsUpdate(v []*GlobalTableGlobalSecondaryIndexSettingsUpdate) *UpdateGlobalTableSettingsInput { + s.GlobalTableGlobalSecondaryIndexSettingsUpdate = v + return s +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableName(v string) *UpdateGlobalTableSettingsInput { + s.GlobalTableName = &v + return s +} + +// SetGlobalTableProvisionedWriteCapacityUnits sets the GlobalTableProvisionedWriteCapacityUnits field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableProvisionedWriteCapacityUnits(v int64) *UpdateGlobalTableSettingsInput { + s.GlobalTableProvisionedWriteCapacityUnits = &v + return s +} + +// SetReplicaSettingsUpdate sets the ReplicaSettingsUpdate field's value. +func (s *UpdateGlobalTableSettingsInput) SetReplicaSettingsUpdate(v []*ReplicaSettingsUpdate) *UpdateGlobalTableSettingsInput { + s.ReplicaSettingsUpdate = v + return s +} + +type UpdateGlobalTableSettingsOutput struct { + _ struct{} `type:"structure"` + + // The name of the global table. + GlobalTableName *string `min:"3" type:"string"` + + // The region specific settings for the global table. 
+ ReplicaSettings []*ReplicaSettingsDescription `type:"list"` +} + +// String returns the string representation +func (s UpdateGlobalTableSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalTableSettingsOutput) GoString() string { + return s.String() +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *UpdateGlobalTableSettingsOutput) SetGlobalTableName(v string) *UpdateGlobalTableSettingsOutput { + s.GlobalTableName = &v + return s +} + +// SetReplicaSettings sets the ReplicaSettings field's value. +func (s *UpdateGlobalTableSettingsOutput) SetReplicaSettings(v []*ReplicaSettingsDescription) *UpdateGlobalTableSettingsOutput { + s.ReplicaSettings = v + return s +} + +// Represents the input of an UpdateItem operation. +type UpdateItemInput struct { + _ struct{} `type:"structure"` + + // This is a legacy parameter. Use UpdateExpression instead. For more information, + // see AttributeUpdates (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.AttributeUpdates.html) + // in the Amazon DynamoDB Developer Guide. + AttributeUpdates map[string]*AttributeValueUpdate `type:"map"` + + // A condition that must be satisfied in order for a conditional update to succeed. + // + // An expression can contain any of the following: + // + // * Functions: attribute_exists | attribute_not_exists | attribute_type + // | contains | begins_with | size + // + // These function names are case-sensitive. + // + // * Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN + // + // * Logical operators: AND | OR | NOT + // + // For more information on condition expressions, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ConditionExpression *string `type:"string"` + + // This is a legacy parameter. Use ConditionExpression instead. For more information, + // see ConditionalOperator (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ConditionalOperator.html) + // in the Amazon DynamoDB Developer Guide. + ConditionalOperator *string `type:"string" enum:"ConditionalOperator"` + + // This is a legacy parameter. Use ConditionExpression instead. For more information, + // see Expected (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.Expected.html) + // in the Amazon DynamoDB Developer Guide. + Expected map[string]*ExpectedAttributeValue `type:"map"` + + // One or more substitution tokens for attribute names in an expression. The + // following are some use cases for using ExpressionAttributeNames: + // + // * To access an attribute whose name conflicts with a DynamoDB reserved + // word. + // + // * To create a placeholder for repeating occurrences of an attribute name + // in an expression. + // + // * To prevent special characters in an attribute name from being misinterpreted + // in an expression. + // + // Use the # character in an expression to dereference an attribute name. For + // example, consider the following attribute name: + // + // * Percentile + // + // The name of this attribute conflicts with a reserved word, so it cannot be + // used directly in an expression. 
(For the complete list of reserved words, + // see Reserved Words (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html) + // in the Amazon DynamoDB Developer Guide). To work around this, you could specify + // the following for ExpressionAttributeNames: + // + // * {"#P":"Percentile"} + // + // You could then use this substitution in an expression, as in this example: + // + // * #P = :val + // + // Tokens that begin with the : character are expression attribute values, which + // are placeholders for the actual value at runtime. + // + // For more information on expression attribute names, see Accessing Item Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.AccessingItemAttributes.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeNames map[string]*string `type:"map"` + + // One or more values that can be substituted in an expression. + // + // Use the : (colon) character in an expression to dereference an attribute + // value. For example, suppose that you wanted to check whether the value of + // the ProductStatus attribute was one of the following: + // + // Available | Backordered | Discontinued + // + // You would first need to specify ExpressionAttributeValues as follows: + // + // { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} + // } + // + // You could then use these values in an expression, such as this: + // + // ProductStatus IN (:avail, :back, :disc) + // + // For more information on expression attribute values, see Specifying Conditions + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.SpecifyingConditions.html) + // in the Amazon DynamoDB Developer Guide. + ExpressionAttributeValues map[string]*AttributeValue `type:"map"` + + // The primary key of the item to be updated. Each element consists of an attribute + // name and a value for that attribute. + // + // For the primary key, you must provide all of the attributes. For example, + // with a simple primary key, you only need to provide a value for the partition + // key. For a composite primary key, you must provide values for both the partition + // key and the sort key. + // + // Key is a required field + Key map[string]*AttributeValue `type:"map" required:"true"` + + // Determines the level of detail about provisioned throughput consumption that + // is returned in the response: + // + // * INDEXES - The response includes the aggregate ConsumedCapacity for the + // operation, together with ConsumedCapacity for each table and secondary + // index that was accessed. + // + // Note that some operations, such as GetItem and BatchGetItem, do not access + // any indexes at all. In these cases, specifying INDEXES will only return + // ConsumedCapacity information for table(s). + // + // * TOTAL - The response includes only the aggregate ConsumedCapacity for + // the operation. + // + // * NONE - No ConsumedCapacity details are included in the response. + ReturnConsumedCapacity *string `type:"string" enum:"ReturnConsumedCapacity"` + + // Determines whether item collection metrics are returned. If set to SIZE, + // the response includes statistics about item collections, if any, that were + // modified during the operation are returned in the response. If set to NONE + // (the default), no statistics are returned. 
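//
// Illustrative sketch (editorial, not part of the generated SDK code): the #P
// and :val substitutions described above, applied to a conditional update.
// svc, the table, and the values are hypothetical.
//
//    out, err := svc.UpdateItem(&dynamodb.UpdateItemInput{
//        TableName:                 aws.String("ProductCatalog"),
//        Key:                       map[string]*dynamodb.AttributeValue{"Id": {N: aws.String("123")}},
//        UpdateExpression:          aws.String("SET #P = :val"),
//        ConditionExpression:       aws.String("attribute_exists(#P)"),
//        ExpressionAttributeNames:  map[string]*string{"#P": aws.String("Percentile")},
//        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{":val": {N: aws.String("95")}},
//    })
//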
+ ReturnItemCollectionMetrics *string `type:"string" enum:"ReturnItemCollectionMetrics"` + + // Use ReturnValues if you want to get the item attributes as they appear before + // or after they are updated. For UpdateItem, the valid values are: + // + // * NONE - If ReturnValues is not specified, or if its value is NONE, then + // nothing is returned. (This setting is the default for ReturnValues.) + // + // * ALL_OLD - Returns all of the attributes of the item, as they appeared + // before the UpdateItem operation. + // + // * UPDATED_OLD - Returns only the updated attributes, as they appeared + // before the UpdateItem operation. + // + // * ALL_NEW - Returns all of the attributes of the item, as they appear + // after the UpdateItem operation. + // + // * UPDATED_NEW - Returns only the updated attributes, as they appear after + // the UpdateItem operation. + // + // There is no additional cost associated with requesting a return value aside + // from the small network and processing overhead of receiving a larger response. + // No read capacity units are consumed. + // + // The values returned are strongly consistent. + ReturnValues *string `type:"string" enum:"ReturnValue"` + + // The name of the table containing the item to update. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` + + // An expression that defines one or more attributes to be updated, the action + // to be performed on them, and new value(s) for them. + // + // The following action values are available for UpdateExpression. + // + // * SET - Adds one or more attributes and values to an item. If any of these + // attribute already exist, they are replaced by the new values. You can + // also use SET to add or subtract from an attribute that is of type Number. + // For example: SET myNum = myNum + :val + // + // SET supports the following functions: + // + // if_not_exists (path, operand) - if the item does not contain an attribute + // at the specified path, then if_not_exists evaluates to operand; otherwise, + // it evaluates to path. You can use this function to avoid overwriting an + // attribute that may already be present in the item. + // + // list_append (operand, operand) - evaluates to a list with a new element added + // to it. You can append the new element to the start or the end of the list + // by reversing the order of the operands. + // + // These function names are case-sensitive. + // + // * REMOVE - Removes one or more attributes from an item. + // + // * ADD - Adds the specified value to the item, if the attribute does not + // already exist. If the attribute does exist, then the behavior of ADD depends + // on the data type of the attribute: + // + // If the existing attribute is a number, and if Value is also a number, then + // Value is mathematically added to the existing attribute. If Value is a + // negative number, then it is subtracted from the existing attribute. + // + // If you use ADD to increment or decrement a number value for an item that + // doesn't exist before the update, DynamoDB uses 0 as the initial value. + // + // Similarly, if you use ADD for an existing item to increment or decrement + // an attribute value that doesn't exist before the update, DynamoDB uses + // 0 as the initial value. For example, suppose that the item you want to + // update doesn't have an attribute named itemcount, but you decide to ADD + // the number 3 to this attribute anyway. 
DynamoDB will create the itemcount + // attribute, set its initial value to 0, and finally add 3 to it. The result + // will be a new itemcount attribute in the item, with a value of 3. + // + // If the existing data type is a set and if Value is also a set, then Value + // is added to the existing set. For example, if the attribute value is the + // set [1,2], and the ADD action specified [3], then the final attribute + // value is [1,2,3]. An error occurs if an ADD action is specified for a + // set attribute and the attribute type specified does not match the existing + // set type. + // + // Both sets must have the same primitive data type. For example, if the existing + // data type is a set of strings, the Value must also be a set of strings. + // + // The ADD action only supports Number and set data types. In addition, ADD + // can only be used on top-level attributes, not nested attributes. + // + // * DELETE - Deletes an element from a set. + // + // If a set of values is specified, then those values are subtracted from the + // old set. For example, if the attribute value was the set [a,b,c] and the + // DELETE action specifies [a,c], then the final attribute value is [b]. + // Specifying an empty set is an error. + // + // The DELETE action only supports set data types. In addition, DELETE can only + // be used on top-level attributes, not nested attributes. + // + // You can have many actions in a single expression, such as the following: + // SET a=:value1, b=:value2 DELETE :value3, :value4, :value5 + // + // For more information on update expressions, see Modifying Items and Attributes + // (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.Modifying.html) + // in the Amazon DynamoDB Developer Guide. + UpdateExpression *string `type:"string"` +} + +// String returns the string representation +func (s UpdateItemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateItemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateItemInput"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeUpdates sets the AttributeUpdates field's value. +func (s *UpdateItemInput) SetAttributeUpdates(v map[string]*AttributeValueUpdate) *UpdateItemInput { + s.AttributeUpdates = v + return s +} + +// SetConditionExpression sets the ConditionExpression field's value. +func (s *UpdateItemInput) SetConditionExpression(v string) *UpdateItemInput { + s.ConditionExpression = &v + return s +} + +// SetConditionalOperator sets the ConditionalOperator field's value. +func (s *UpdateItemInput) SetConditionalOperator(v string) *UpdateItemInput { + s.ConditionalOperator = &v + return s +} + +// SetExpected sets the Expected field's value. +func (s *UpdateItemInput) SetExpected(v map[string]*ExpectedAttributeValue) *UpdateItemInput { + s.Expected = v + return s +} + +// SetExpressionAttributeNames sets the ExpressionAttributeNames field's value. 
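//
// Illustrative sketch (editorial, not part of the generated SDK code): one
// UpdateExpression combining SET with if_not_exists and ADD on the itemcount
// attribute discussed above, returning only the updated attributes. svc, the
// table, and the values are hypothetical.
//
//    out, err := svc.UpdateItem(&dynamodb.UpdateItemInput{
//        TableName:        aws.String("ProductCatalog"),
//        Key:              map[string]*dynamodb.AttributeValue{"Id": {N: aws.String("123")}},
//        UpdateExpression: aws.String("SET Price = if_not_exists(Price, :p) ADD itemcount :inc"),
//        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
//            ":p":   {N: aws.String("100")},
//            ":inc": {N: aws.String("3")},
//        },
//        ReturnValues: aws.String(dynamodb.ReturnValueUpdatedNew),
//    })
//    // out.Attributes then contains only the attributes touched by the update.
//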
+func (s *UpdateItemInput) SetExpressionAttributeNames(v map[string]*string) *UpdateItemInput { + s.ExpressionAttributeNames = v + return s +} + +// SetExpressionAttributeValues sets the ExpressionAttributeValues field's value. +func (s *UpdateItemInput) SetExpressionAttributeValues(v map[string]*AttributeValue) *UpdateItemInput { + s.ExpressionAttributeValues = v + return s +} + +// SetKey sets the Key field's value. +func (s *UpdateItemInput) SetKey(v map[string]*AttributeValue) *UpdateItemInput { + s.Key = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *UpdateItemInput) SetReturnConsumedCapacity(v string) *UpdateItemInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetReturnItemCollectionMetrics sets the ReturnItemCollectionMetrics field's value. +func (s *UpdateItemInput) SetReturnItemCollectionMetrics(v string) *UpdateItemInput { + s.ReturnItemCollectionMetrics = &v + return s +} + +// SetReturnValues sets the ReturnValues field's value. +func (s *UpdateItemInput) SetReturnValues(v string) *UpdateItemInput { + s.ReturnValues = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *UpdateItemInput) SetTableName(v string) *UpdateItemInput { + s.TableName = &v + return s +} + +// SetUpdateExpression sets the UpdateExpression field's value. +func (s *UpdateItemInput) SetUpdateExpression(v string) *UpdateItemInput { + s.UpdateExpression = &v + return s +} + +// Represents the output of an UpdateItem operation. +type UpdateItemOutput struct { + _ struct{} `type:"structure"` + + // A map of attribute values as they appear before or after the UpdateItem operation, + // as determined by the ReturnValues parameter. + // + // The Attributes map is only present if ReturnValues was specified as something + // other than NONE in the request. Each element represents one attribute. + Attributes map[string]*AttributeValue `type:"map"` + + // The capacity units consumed by the UpdateItem operation. The data returned + // includes the total provisioned throughput consumed, along with statistics + // for the table and any indexes involved in the operation. ConsumedCapacity + // is only returned if the ReturnConsumedCapacity parameter was specified. For + // more information, see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // Information about item collections, if any, that were affected by the UpdateItem + // operation. ItemCollectionMetrics is only returned if the ReturnItemCollectionMetrics + // parameter was specified. If the table does not have any local secondary indexes, + // this information is not returned in the response. + // + // Each ItemCollectionMetrics element consists of: + // + // * ItemCollectionKey - The partition key value of the item collection. + // This is the same as the partition key value of the item itself. + // + // * SizeEstimateRangeGB - An estimate of item collection size, in gigabytes. + // This value is a two-element array containing a lower bound and an upper + // bound for the estimate. The estimate includes the size of all the items + // in the table, plus the size of all attributes projected into all of the + // local secondary indexes on that table. Use this estimate to measure whether + // a local secondary index is approaching its size limit. 
+ // + // The estimate is subject to change over time; therefore, do not rely on the + // precision or accuracy of the estimate. + ItemCollectionMetrics *ItemCollectionMetrics `type:"structure"` +} + +// String returns the string representation +func (s UpdateItemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateItemOutput) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *UpdateItemOutput) SetAttributes(v map[string]*AttributeValue) *UpdateItemOutput { + s.Attributes = v + return s +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *UpdateItemOutput) SetConsumedCapacity(v *ConsumedCapacity) *UpdateItemOutput { + s.ConsumedCapacity = v + return s +} + +// SetItemCollectionMetrics sets the ItemCollectionMetrics field's value. +func (s *UpdateItemOutput) SetItemCollectionMetrics(v *ItemCollectionMetrics) *UpdateItemOutput { + s.ItemCollectionMetrics = v + return s +} + +// Represents the input of an UpdateTable operation. +type UpdateTableInput struct { + _ struct{} `type:"structure"` + + // An array of attributes that describe the key schema for the table and indexes. + // If you are adding a new global secondary index to the table, AttributeDefinitions + // must include the key element(s) of the new index. + AttributeDefinitions []*AttributeDefinition `type:"list"` + + // An array of one or more global secondary indexes for the table. For each + // index in the array, you can request one action: + // + // * Create - add a new global secondary index to the table. + // + // * Update - modify the provisioned throughput settings of an existing global + // secondary index. + // + // * Delete - remove a global secondary index from the table. + // + // For more information, see Managing Global Secondary Indexes (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.OnlineOps.html) + // in the Amazon DynamoDB Developer Guide. + GlobalSecondaryIndexUpdates []*GlobalSecondaryIndexUpdate `type:"list"` + + // The new provisioned throughput settings for the specified table or index. + ProvisionedThroughput *ProvisionedThroughput `type:"structure"` + + // Represents the DynamoDB Streams configuration for the table. + // + // You will receive a ResourceInUseException if you attempt to enable a stream + // on a table that already has a stream, or if you attempt to disable a stream + // on a table which does not have a stream. + StreamSpecification *StreamSpecification `type:"structure"` + + // The name of the table to be updated. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateTableInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTableInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
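//
// Illustrative sketch (editorial, not part of the generated SDK code): a single
// UpdateTable call that changes provisioned throughput and enables a stream;
// note the ResourceInUseException caveat above if a stream already exists.
// svc and the table name are hypothetical.
//
//    out, err := svc.UpdateTable(&dynamodb.UpdateTableInput{
//        TableName: aws.String("Thread"),
//        ProvisionedThroughput: &dynamodb.ProvisionedThroughput{
//            ReadCapacityUnits:  aws.Int64(10),
//            WriteCapacityUnits: aws.Int64(5),
//        },
//        StreamSpecification: &dynamodb.StreamSpecification{
//            StreamEnabled:  aws.Bool(true),
//            StreamViewType: aws.String(dynamodb.StreamViewTypeNewAndOldImages),
//        },
//    })
//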
+func (s *UpdateTableInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateTableInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.AttributeDefinitions != nil { + for i, v := range s.AttributeDefinitions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AttributeDefinitions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.GlobalSecondaryIndexUpdates != nil { + for i, v := range s.GlobalSecondaryIndexUpdates { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "GlobalSecondaryIndexUpdates", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ProvisionedThroughput != nil { + if err := s.ProvisionedThroughput.Validate(); err != nil { + invalidParams.AddNested("ProvisionedThroughput", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeDefinitions sets the AttributeDefinitions field's value. +func (s *UpdateTableInput) SetAttributeDefinitions(v []*AttributeDefinition) *UpdateTableInput { + s.AttributeDefinitions = v + return s +} + +// SetGlobalSecondaryIndexUpdates sets the GlobalSecondaryIndexUpdates field's value. +func (s *UpdateTableInput) SetGlobalSecondaryIndexUpdates(v []*GlobalSecondaryIndexUpdate) *UpdateTableInput { + s.GlobalSecondaryIndexUpdates = v + return s +} + +// SetProvisionedThroughput sets the ProvisionedThroughput field's value. +func (s *UpdateTableInput) SetProvisionedThroughput(v *ProvisionedThroughput) *UpdateTableInput { + s.ProvisionedThroughput = v + return s +} + +// SetStreamSpecification sets the StreamSpecification field's value. +func (s *UpdateTableInput) SetStreamSpecification(v *StreamSpecification) *UpdateTableInput { + s.StreamSpecification = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *UpdateTableInput) SetTableName(v string) *UpdateTableInput { + s.TableName = &v + return s +} + +// Represents the output of an UpdateTable operation. +type UpdateTableOutput struct { + _ struct{} `type:"structure"` + + // Represents the properties of the table. + TableDescription *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateTableOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTableOutput) GoString() string { + return s.String() +} + +// SetTableDescription sets the TableDescription field's value. +func (s *UpdateTableOutput) SetTableDescription(v *TableDescription) *UpdateTableOutput { + s.TableDescription = v + return s +} + +// Represents the input of an UpdateTimeToLive operation. +type UpdateTimeToLiveInput struct { + _ struct{} `type:"structure"` + + // The name of the table to be configured. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` + + // Represents the settings used to enable or disable Time to Live for the specified + // table. 
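//
// Illustrative sketch (editorial, not part of the generated SDK code): enabling
// TTL on a table keyed off an epoch-seconds attribute. svc, the table name,
// and the attribute name "ttl" are hypothetical.
//
//    out, err := svc.UpdateTimeToLive(&dynamodb.UpdateTimeToLiveInput{
//        TableName: aws.String("SessionData"),
//        TimeToLiveSpecification: &dynamodb.TimeToLiveSpecification{
//            AttributeName: aws.String("ttl"),
//            Enabled:       aws.Bool(true),
//        },
//    })
//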
+ // + // TimeToLiveSpecification is a required field + TimeToLiveSpecification *TimeToLiveSpecification `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateTimeToLiveInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTimeToLiveInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateTimeToLiveInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateTimeToLiveInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.TimeToLiveSpecification == nil { + invalidParams.Add(request.NewErrParamRequired("TimeToLiveSpecification")) + } + if s.TimeToLiveSpecification != nil { + if err := s.TimeToLiveSpecification.Validate(); err != nil { + invalidParams.AddNested("TimeToLiveSpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTableName sets the TableName field's value. +func (s *UpdateTimeToLiveInput) SetTableName(v string) *UpdateTimeToLiveInput { + s.TableName = &v + return s +} + +// SetTimeToLiveSpecification sets the TimeToLiveSpecification field's value. +func (s *UpdateTimeToLiveInput) SetTimeToLiveSpecification(v *TimeToLiveSpecification) *UpdateTimeToLiveInput { + s.TimeToLiveSpecification = v + return s +} + +type UpdateTimeToLiveOutput struct { + _ struct{} `type:"structure"` + + // Represents the output of an UpdateTimeToLive operation. + TimeToLiveSpecification *TimeToLiveSpecification `type:"structure"` +} + +// String returns the string representation +func (s UpdateTimeToLiveOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTimeToLiveOutput) GoString() string { + return s.String() +} + +// SetTimeToLiveSpecification sets the TimeToLiveSpecification field's value. +func (s *UpdateTimeToLiveOutput) SetTimeToLiveSpecification(v *TimeToLiveSpecification) *UpdateTimeToLiveOutput { + s.TimeToLiveSpecification = v + return s +} + +// Represents an operation to perform - either DeleteItem or PutItem. You can +// only request one of these operations, not both, in a single WriteRequest. +// If you do need to perform both of these operations, you will need to provide +// two separate WriteRequest objects. +type WriteRequest struct { + _ struct{} `type:"structure"` + + // A request to perform a DeleteItem operation. + DeleteRequest *DeleteRequest `type:"structure"` + + // A request to perform a PutItem operation. + PutRequest *PutRequest `type:"structure"` +} + +// String returns the string representation +func (s WriteRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WriteRequest) GoString() string { + return s.String() +} + +// SetDeleteRequest sets the DeleteRequest field's value. +func (s *WriteRequest) SetDeleteRequest(v *DeleteRequest) *WriteRequest { + s.DeleteRequest = v + return s +} + +// SetPutRequest sets the PutRequest field's value. 
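//
// Illustrative sketch (editorial, not part of the generated SDK code): because
// a WriteRequest carries either a PutRequest or a DeleteRequest but never both,
// a mixed batch needs one WriteRequest per operation. svc, the table name, and
// the newItem/oldKey maps (map[string]*dynamodb.AttributeValue) are hypothetical.
//
//    out, err := svc.BatchWriteItem(&dynamodb.BatchWriteItemInput{
//        RequestItems: map[string][]*dynamodb.WriteRequest{
//            "Music": {
//                {PutRequest: &dynamodb.PutRequest{Item: newItem}},
//                {DeleteRequest: &dynamodb.DeleteRequest{Key: oldKey}},
//            },
//        },
//    })
//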
+func (s *WriteRequest) SetPutRequest(v *PutRequest) *WriteRequest { + s.PutRequest = v + return s +} + +const ( + // AttributeActionAdd is a AttributeAction enum value + AttributeActionAdd = "ADD" + + // AttributeActionPut is a AttributeAction enum value + AttributeActionPut = "PUT" + + // AttributeActionDelete is a AttributeAction enum value + AttributeActionDelete = "DELETE" +) + +const ( + // BackupStatusCreating is a BackupStatus enum value + BackupStatusCreating = "CREATING" + + // BackupStatusDeleted is a BackupStatus enum value + BackupStatusDeleted = "DELETED" + + // BackupStatusAvailable is a BackupStatus enum value + BackupStatusAvailable = "AVAILABLE" +) + +const ( + // ComparisonOperatorEq is a ComparisonOperator enum value + ComparisonOperatorEq = "EQ" + + // ComparisonOperatorNe is a ComparisonOperator enum value + ComparisonOperatorNe = "NE" + + // ComparisonOperatorIn is a ComparisonOperator enum value + ComparisonOperatorIn = "IN" + + // ComparisonOperatorLe is a ComparisonOperator enum value + ComparisonOperatorLe = "LE" + + // ComparisonOperatorLt is a ComparisonOperator enum value + ComparisonOperatorLt = "LT" + + // ComparisonOperatorGe is a ComparisonOperator enum value + ComparisonOperatorGe = "GE" + + // ComparisonOperatorGt is a ComparisonOperator enum value + ComparisonOperatorGt = "GT" + + // ComparisonOperatorBetween is a ComparisonOperator enum value + ComparisonOperatorBetween = "BETWEEN" + + // ComparisonOperatorNotNull is a ComparisonOperator enum value + ComparisonOperatorNotNull = "NOT_NULL" + + // ComparisonOperatorNull is a ComparisonOperator enum value + ComparisonOperatorNull = "NULL" + + // ComparisonOperatorContains is a ComparisonOperator enum value + ComparisonOperatorContains = "CONTAINS" + + // ComparisonOperatorNotContains is a ComparisonOperator enum value + ComparisonOperatorNotContains = "NOT_CONTAINS" + + // ComparisonOperatorBeginsWith is a ComparisonOperator enum value + ComparisonOperatorBeginsWith = "BEGINS_WITH" +) + +const ( + // ConditionalOperatorAnd is a ConditionalOperator enum value + ConditionalOperatorAnd = "AND" + + // ConditionalOperatorOr is a ConditionalOperator enum value + ConditionalOperatorOr = "OR" +) + +const ( + // ContinuousBackupsStatusEnabled is a ContinuousBackupsStatus enum value + ContinuousBackupsStatusEnabled = "ENABLED" + + // ContinuousBackupsStatusDisabled is a ContinuousBackupsStatus enum value + ContinuousBackupsStatusDisabled = "DISABLED" +) + +const ( + // GlobalTableStatusCreating is a GlobalTableStatus enum value + GlobalTableStatusCreating = "CREATING" + + // GlobalTableStatusActive is a GlobalTableStatus enum value + GlobalTableStatusActive = "ACTIVE" + + // GlobalTableStatusDeleting is a GlobalTableStatus enum value + GlobalTableStatusDeleting = "DELETING" + + // GlobalTableStatusUpdating is a GlobalTableStatus enum value + GlobalTableStatusUpdating = "UPDATING" +) + +const ( + // IndexStatusCreating is a IndexStatus enum value + IndexStatusCreating = "CREATING" + + // IndexStatusUpdating is a IndexStatus enum value + IndexStatusUpdating = "UPDATING" + + // IndexStatusDeleting is a IndexStatus enum value + IndexStatusDeleting = "DELETING" + + // IndexStatusActive is a IndexStatus enum value + IndexStatusActive = "ACTIVE" +) + +const ( + // KeyTypeHash is a KeyType enum value + KeyTypeHash = "HASH" + + // KeyTypeRange is a KeyType enum value + KeyTypeRange = "RANGE" +) + +const ( + // PointInTimeRecoveryStatusEnabled is a PointInTimeRecoveryStatus enum value + PointInTimeRecoveryStatusEnabled = 
"ENABLED" + + // PointInTimeRecoveryStatusDisabled is a PointInTimeRecoveryStatus enum value + PointInTimeRecoveryStatusDisabled = "DISABLED" +) + +const ( + // ProjectionTypeAll is a ProjectionType enum value + ProjectionTypeAll = "ALL" + + // ProjectionTypeKeysOnly is a ProjectionType enum value + ProjectionTypeKeysOnly = "KEYS_ONLY" + + // ProjectionTypeInclude is a ProjectionType enum value + ProjectionTypeInclude = "INCLUDE" +) + +const ( + // ReplicaStatusCreating is a ReplicaStatus enum value + ReplicaStatusCreating = "CREATING" + + // ReplicaStatusUpdating is a ReplicaStatus enum value + ReplicaStatusUpdating = "UPDATING" + + // ReplicaStatusDeleting is a ReplicaStatus enum value + ReplicaStatusDeleting = "DELETING" + + // ReplicaStatusActive is a ReplicaStatus enum value + ReplicaStatusActive = "ACTIVE" +) + +// Determines the level of detail about provisioned throughput consumption that +// is returned in the response: +// +// * INDEXES - The response includes the aggregate ConsumedCapacity for the +// operation, together with ConsumedCapacity for each table and secondary +// index that was accessed. +// +// Note that some operations, such as GetItem and BatchGetItem, do not access +// any indexes at all. In these cases, specifying INDEXES will only return +// ConsumedCapacity information for table(s). +// +// * TOTAL - The response includes only the aggregate ConsumedCapacity for +// the operation. +// +// * NONE - No ConsumedCapacity details are included in the response. +const ( + // ReturnConsumedCapacityIndexes is a ReturnConsumedCapacity enum value + ReturnConsumedCapacityIndexes = "INDEXES" + + // ReturnConsumedCapacityTotal is a ReturnConsumedCapacity enum value + ReturnConsumedCapacityTotal = "TOTAL" + + // ReturnConsumedCapacityNone is a ReturnConsumedCapacity enum value + ReturnConsumedCapacityNone = "NONE" +) + +const ( + // ReturnItemCollectionMetricsSize is a ReturnItemCollectionMetrics enum value + ReturnItemCollectionMetricsSize = "SIZE" + + // ReturnItemCollectionMetricsNone is a ReturnItemCollectionMetrics enum value + ReturnItemCollectionMetricsNone = "NONE" +) + +const ( + // ReturnValueNone is a ReturnValue enum value + ReturnValueNone = "NONE" + + // ReturnValueAllOld is a ReturnValue enum value + ReturnValueAllOld = "ALL_OLD" + + // ReturnValueUpdatedOld is a ReturnValue enum value + ReturnValueUpdatedOld = "UPDATED_OLD" + + // ReturnValueAllNew is a ReturnValue enum value + ReturnValueAllNew = "ALL_NEW" + + // ReturnValueUpdatedNew is a ReturnValue enum value + ReturnValueUpdatedNew = "UPDATED_NEW" +) + +const ( + // SSEStatusEnabling is a SSEStatus enum value + SSEStatusEnabling = "ENABLING" + + // SSEStatusEnabled is a SSEStatus enum value + SSEStatusEnabled = "ENABLED" + + // SSEStatusDisabling is a SSEStatus enum value + SSEStatusDisabling = "DISABLING" + + // SSEStatusDisabled is a SSEStatus enum value + SSEStatusDisabled = "DISABLED" +) + +const ( + // ScalarAttributeTypeS is a ScalarAttributeType enum value + ScalarAttributeTypeS = "S" + + // ScalarAttributeTypeN is a ScalarAttributeType enum value + ScalarAttributeTypeN = "N" + + // ScalarAttributeTypeB is a ScalarAttributeType enum value + ScalarAttributeTypeB = "B" +) + +const ( + // SelectAllAttributes is a Select enum value + SelectAllAttributes = "ALL_ATTRIBUTES" + + // SelectAllProjectedAttributes is a Select enum value + SelectAllProjectedAttributes = "ALL_PROJECTED_ATTRIBUTES" + + // SelectSpecificAttributes is a Select enum value + SelectSpecificAttributes = "SPECIFIC_ATTRIBUTES" + + // 
SelectCount is a Select enum value + SelectCount = "COUNT" +) + +const ( + // StreamViewTypeNewImage is a StreamViewType enum value + StreamViewTypeNewImage = "NEW_IMAGE" + + // StreamViewTypeOldImage is a StreamViewType enum value + StreamViewTypeOldImage = "OLD_IMAGE" + + // StreamViewTypeNewAndOldImages is a StreamViewType enum value + StreamViewTypeNewAndOldImages = "NEW_AND_OLD_IMAGES" + + // StreamViewTypeKeysOnly is a StreamViewType enum value + StreamViewTypeKeysOnly = "KEYS_ONLY" +) + +const ( + // TableStatusCreating is a TableStatus enum value + TableStatusCreating = "CREATING" + + // TableStatusUpdating is a TableStatus enum value + TableStatusUpdating = "UPDATING" + + // TableStatusDeleting is a TableStatus enum value + TableStatusDeleting = "DELETING" + + // TableStatusActive is a TableStatus enum value + TableStatusActive = "ACTIVE" +) + +const ( + // TimeToLiveStatusEnabling is a TimeToLiveStatus enum value + TimeToLiveStatusEnabling = "ENABLING" + + // TimeToLiveStatusDisabling is a TimeToLiveStatus enum value + TimeToLiveStatusDisabling = "DISABLING" + + // TimeToLiveStatusEnabled is a TimeToLiveStatus enum value + TimeToLiveStatusEnabled = "ENABLED" + + // TimeToLiveStatusDisabled is a TimeToLiveStatus enum value + TimeToLiveStatusDisabled = "DISABLED" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go new file mode 100644 index 00000000..333e61bf --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/customizations.go @@ -0,0 +1,109 @@ +package dynamodb + +import ( + "bytes" + "hash/crc32" + "io" + "io/ioutil" + "math" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/request" +) + +type retryer struct { + client.DefaultRetryer +} + +func (d retryer) RetryRules(r *request.Request) time.Duration { + delay := time.Duration(math.Pow(2, float64(r.RetryCount))) * 50 + return delay * time.Millisecond +} + +func init() { + initClient = func(c *client.Client) { + if c.Config.Retryer == nil { + // Only override the retryer with a custom one if the config + // does not already contain a retryer + setCustomRetryer(c) + } + + c.Handlers.Build.PushBack(disableCompression) + c.Handlers.Unmarshal.PushFront(validateCRC32) + } +} + +func setCustomRetryer(c *client.Client) { + maxRetries := aws.IntValue(c.Config.MaxRetries) + if c.Config.MaxRetries == nil || maxRetries == aws.UseServiceDefaultRetries { + maxRetries = 10 + } + + c.Retryer = retryer{ + DefaultRetryer: client.DefaultRetryer{ + NumMaxRetries: maxRetries, + }, + } +} + +func drainBody(b io.ReadCloser, length int64) (out *bytes.Buffer, err error) { + if length < 0 { + length = 0 + } + buf := bytes.NewBuffer(make([]byte, 0, length)) + + if _, err = buf.ReadFrom(b); err != nil { + return nil, err + } + if err = b.Close(); err != nil { + return nil, err + } + return buf, nil +} + +func disableCompression(r *request.Request) { + r.HTTPRequest.Header.Set("Accept-Encoding", "identity") +} + +func validateCRC32(r *request.Request) { + if r.Error != nil { + return // already have an error, no need to verify CRC + } + + // Checksum validation is off, skip + if aws.BoolValue(r.Config.DisableComputeChecksums) { + return + } + + // Try to get CRC from response + header := r.HTTPResponse.Header.Get("X-Amz-Crc32") + if header == "" { + return // No header, skip + } + + expected, err := 
strconv.ParseUint(header, 10, 32) + if err != nil { + return // Could not determine CRC value, skip + } + + buf, err := drainBody(r.HTTPResponse.Body, r.HTTPResponse.ContentLength) + if err != nil { // failed to read the response body, skip + return + } + + // Reset body for subsequent reads + r.HTTPResponse.Body = ioutil.NopCloser(bytes.NewReader(buf.Bytes())) + + // Compute the CRC checksum + crc := crc32.ChecksumIEEE(buf.Bytes()) + + if crc != uint32(expected) { + // CRC does not match, set a retryable error + r.Retryable = aws.Bool(true) + r.Error = awserr.New("CRC32CheckFailed", "CRC32 integrity check failed", nil) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/doc.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/doc.go new file mode 100644 index 00000000..f244a733 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/doc.go @@ -0,0 +1,45 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package dynamodb provides the client and types for making API +// requests to Amazon DynamoDB. +// +// Amazon DynamoDB is a fully managed NoSQL database service that provides fast +// and predictable performance with seamless scalability. DynamoDB lets you +// offload the administrative burdens of operating and scaling a distributed +// database, so that you don't have to worry about hardware provisioning, setup +// and configuration, replication, software patching, or cluster scaling. +// +// With DynamoDB, you can create database tables that can store and retrieve +// any amount of data, and serve any level of request traffic. You can scale +// up or scale down your tables' throughput capacity without downtime or performance +// degradation, and use the AWS Management Console to monitor resource utilization +// and performance metrics. +// +// DynamoDB automatically spreads the data and traffic for your tables over +// a sufficient number of servers to handle your throughput and storage requirements, +// while maintaining consistent and fast performance. All of your data is stored +// on solid state disks (SSDs) and automatically replicated across multiple +// Availability Zones in an AWS region, providing built-in high availability +// and data durability. +// +// See https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10 for more information on this service. +// +// See dynamodb package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/ +// +// Using the Client +// +// To contact Amazon DynamoDB with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon DynamoDB client DynamoDB for more +// information on creating client for this service. 
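//
// Illustrative sketch (editorial, not part of the generated documentation):
// creating a session and a DynamoDB client, assuming the aws and aws/session
// packages are imported; the region is only an example.
//
//    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
//    svc := dynamodb.New(sess)
//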
+// https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/#New +package dynamodb diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/doc_custom.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/doc_custom.go new file mode 100644 index 00000000..5ebc5807 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/doc_custom.go @@ -0,0 +1,27 @@ +/* +AttributeValue Marshaling and Unmarshaling Helpers + +Utility helpers to marshal and unmarshal AttributeValue to and +from Go types can be found in the dynamodbattribute sub package. This package +provides has specialized functions for the common ways of working with +AttributeValues. Such as map[string]*AttributeValue, []*AttributeValue, and +directly with *AttributeValue. This is helpful for marshaling Go types for API +operations such as PutItem, and unmarshaling Query and Scan APIs' responses. + +See the dynamodbattribute package documentation for more information. +https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/dynamodbattribute/ + +Expression Builders + +The expression package provides utility types and functions to build DynamoDB +expression for type safe construction of API ExpressionAttributeNames, and +ExpressionAttribute Values. + +The package represents the various DynamoDB Expressions as structs named +accordingly. For example, ConditionBuilder represents a DynamoDB Condition +Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. + +See the expression package documentation for more information. +https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/expression/ +*/ +package dynamodb diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/converter.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/converter.go new file mode 100644 index 00000000..e38e41da --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/converter.go @@ -0,0 +1,443 @@ +package dynamodbattribute + +import ( + "bytes" + "encoding/json" + "fmt" + "reflect" + "runtime" + "strconv" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/dynamodb" +) + +// ConvertToMap accepts a map[string]interface{} or struct and converts it to a +// map[string]*dynamodb.AttributeValue. +// +// If in contains any structs, it is first JSON encoded/decoded it to convert it +// to a map[string]interface{}, so `json` struct tags are respected. 
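//
// Illustrative sketch (editorial, not part of the generated SDK code): the
// MarshalMap replacement noted below, used to build a PutItem request from a
// struct. The Record type, svc, and the table name are hypothetical.
//
//    type Record struct {
//        ID    string `json:"id"`
//        Score int    `json:"score"`
//    }
//
//    item, err := dynamodbattribute.MarshalMap(Record{ID: "abc", Score: 42})
//    if err == nil {
//        _, err = svc.PutItem(&dynamodb.PutItemInput{TableName: aws.String("Records"), Item: item})
//    }
//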
+// +// Deprecated: Use MarshalMap instead +func ConvertToMap(in interface{}) (item map[string]*dynamodb.AttributeValue, err error) { + defer func() { + if r := recover(); r != nil { + if e, ok := r.(runtime.Error); ok { + err = e + } else if s, ok := r.(string); ok { + err = fmt.Errorf(s) + } else { + err = r.(error) + } + item = nil + } + }() + + if in == nil { + return nil, awserr.New("SerializationError", + "in must be a map[string]interface{} or struct, got ", nil) + } + + v := reflect.ValueOf(in) + if v.Kind() != reflect.Struct && !(v.Kind() == reflect.Map && v.Type().Key().Kind() == reflect.String) { + return nil, awserr.New("SerializationError", + fmt.Sprintf("in must be a map[string]interface{} or struct, got %s", + v.Type().String()), + nil) + } + + if isTyped(reflect.TypeOf(in)) { + var out map[string]interface{} + in = convertToUntyped(in, out) + } + + item = make(map[string]*dynamodb.AttributeValue) + for k, v := range in.(map[string]interface{}) { + item[k] = convertTo(v) + } + + return item, nil +} + +// ConvertFromMap accepts a map[string]*dynamodb.AttributeValue and converts it to a +// map[string]interface{} or struct. +// +// If v points to a struct, the result is first converted it to a +// map[string]interface{}, then JSON encoded/decoded it to convert to a struct, +// so `json` struct tags are respected. +// +// Deprecated: Use UnmarshalMap instead +func ConvertFromMap(item map[string]*dynamodb.AttributeValue, v interface{}) (err error) { + defer func() { + if r := recover(); r != nil { + if e, ok := r.(runtime.Error); ok { + err = e + } else if s, ok := r.(string); ok { + err = fmt.Errorf(s) + } else { + err = r.(error) + } + item = nil + } + }() + + rv := reflect.ValueOf(v) + if rv.Kind() != reflect.Ptr || rv.IsNil() { + return awserr.New("SerializationError", + fmt.Sprintf("v must be a non-nil pointer to a map[string]interface{} or struct, got %s", + rv.Type()), + nil) + } + if rv.Elem().Kind() != reflect.Struct && !(rv.Elem().Kind() == reflect.Map && rv.Elem().Type().Key().Kind() == reflect.String) { + return awserr.New("SerializationError", + fmt.Sprintf("v must be a non-nil pointer to a map[string]interface{} or struct, got %s", + rv.Type()), + nil) + } + + m := make(map[string]interface{}) + for k, v := range item { + m[k] = convertFrom(v) + } + + if isTyped(reflect.TypeOf(v)) { + err = convertToTyped(m, v) + } else { + rv.Elem().Set(reflect.ValueOf(m)) + } + + return err +} + +// ConvertToList accepts an array or slice and converts it to a +// []*dynamodb.AttributeValue. +// +// Converting []byte fields to dynamodb.AttributeValue are only currently supported +// if the input is a map[string]interface{} type. []byte within typed structs are not +// converted correctly and are converted into base64 strings. This is a known bug, +// and will be fixed in a later release. +// +// If in contains any structs, it is first JSON encoded/decoded it to convert it +// to a []interface{}, so `json` struct tags are respected. 
+// +// Deprecated: Use MarshalList instead +func ConvertToList(in interface{}) (item []*dynamodb.AttributeValue, err error) { + defer func() { + if r := recover(); r != nil { + if e, ok := r.(runtime.Error); ok { + err = e + } else if s, ok := r.(string); ok { + err = fmt.Errorf(s) + } else { + err = r.(error) + } + item = nil + } + }() + + if in == nil { + return nil, awserr.New("SerializationError", + "in must be an array or slice, got ", + nil) + } + + v := reflect.ValueOf(in) + if v.Kind() != reflect.Array && v.Kind() != reflect.Slice { + return nil, awserr.New("SerializationError", + fmt.Sprintf("in must be an array or slice, got %s", + v.Type().String()), + nil) + } + + if isTyped(reflect.TypeOf(in)) { + var out []interface{} + in = convertToUntyped(in, out) + } + + item = make([]*dynamodb.AttributeValue, 0, len(in.([]interface{}))) + for _, v := range in.([]interface{}) { + item = append(item, convertTo(v)) + } + + return item, nil +} + +// ConvertFromList accepts a []*dynamodb.AttributeValue and converts it to an array or +// slice. +// +// If v contains any structs, the result is first converted it to a +// []interface{}, then JSON encoded/decoded it to convert to a typed array or +// slice, so `json` struct tags are respected. +// +// Deprecated: Use UnmarshalList instead +func ConvertFromList(item []*dynamodb.AttributeValue, v interface{}) (err error) { + defer func() { + if r := recover(); r != nil { + if e, ok := r.(runtime.Error); ok { + err = e + } else if s, ok := r.(string); ok { + err = fmt.Errorf(s) + } else { + err = r.(error) + } + item = nil + } + }() + + rv := reflect.ValueOf(v) + if rv.Kind() != reflect.Ptr || rv.IsNil() { + return awserr.New("SerializationError", + fmt.Sprintf("v must be a non-nil pointer to an array or slice, got %s", + rv.Type()), + nil) + } + if rv.Elem().Kind() != reflect.Array && rv.Elem().Kind() != reflect.Slice { + return awserr.New("SerializationError", + fmt.Sprintf("v must be a non-nil pointer to an array or slice, got %s", + rv.Type()), + nil) + } + + l := make([]interface{}, 0, len(item)) + for _, v := range item { + l = append(l, convertFrom(v)) + } + + if isTyped(reflect.TypeOf(v)) { + err = convertToTyped(l, v) + } else { + rv.Elem().Set(reflect.ValueOf(l)) + } + + return err +} + +// ConvertTo accepts any interface{} and converts it to a *dynamodb.AttributeValue. +// +// If in contains any structs, it is first JSON encoded/decoded it to convert it +// to a interface{}, so `json` struct tags are respected. +// +// Deprecated: Use Marshal instead +func ConvertTo(in interface{}) (item *dynamodb.AttributeValue, err error) { + defer func() { + if r := recover(); r != nil { + if e, ok := r.(runtime.Error); ok { + err = e + } else if s, ok := r.(string); ok { + err = fmt.Errorf(s) + } else { + err = r.(error) + } + item = nil + } + }() + + if in != nil && isTyped(reflect.TypeOf(in)) { + var out interface{} + in = convertToUntyped(in, out) + } + + item = convertTo(in) + return item, nil +} + +// ConvertFrom accepts a *dynamodb.AttributeValue and converts it to any interface{}. +// +// If v contains any structs, the result is first converted it to a interface{}, +// then JSON encoded/decoded it to convert to a struct, so `json` struct tags +// are respected. 
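//
// Illustrative sketch (editorial, not part of the generated SDK code): the
// Unmarshal replacement noted below, decoding a single map AttributeValue into
// a struct. The variable names are hypothetical.
//
//    av := &dynamodb.AttributeValue{M: map[string]*dynamodb.AttributeValue{
//        "Name": {S: aws.String("Alice")},
//        "Age":  {N: aws.String("30")},
//    }}
//
//    var person struct {
//        Name string
//        Age  int
//    }
//    err := dynamodbattribute.Unmarshal(av, &person)
//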
+// +// Deprecated: Use Unmarshal instead +func ConvertFrom(item *dynamodb.AttributeValue, v interface{}) (err error) { + defer func() { + if r := recover(); r != nil { + if e, ok := r.(runtime.Error); ok { + err = e + } else if s, ok := r.(string); ok { + err = fmt.Errorf(s) + } else { + err = r.(error) + } + item = nil + } + }() + + rv := reflect.ValueOf(v) + if rv.Kind() != reflect.Ptr || rv.IsNil() { + return awserr.New("SerializationError", + fmt.Sprintf("v must be a non-nil pointer to an interface{} or struct, got %s", + rv.Type()), + nil) + } + if rv.Elem().Kind() != reflect.Interface && rv.Elem().Kind() != reflect.Struct { + return awserr.New("SerializationError", + fmt.Sprintf("v must be a non-nil pointer to an interface{} or struct, got %s", + rv.Type()), + nil) + } + + res := convertFrom(item) + + if isTyped(reflect.TypeOf(v)) { + err = convertToTyped(res, v) + } else if res != nil { + rv.Elem().Set(reflect.ValueOf(res)) + } + + return err +} + +func isTyped(v reflect.Type) bool { + switch v.Kind() { + case reflect.Struct: + return true + case reflect.Array, reflect.Slice: + if isTyped(v.Elem()) { + return true + } + case reflect.Map: + if isTyped(v.Key()) { + return true + } + if isTyped(v.Elem()) { + return true + } + case reflect.Ptr: + return isTyped(v.Elem()) + } + return false +} + +func convertToUntyped(in, out interface{}) interface{} { + b, err := json.Marshal(in) + if err != nil { + panic(err) + } + + decoder := json.NewDecoder(bytes.NewReader(b)) + decoder.UseNumber() + err = decoder.Decode(&out) + if err != nil { + panic(err) + } + + return out +} + +func convertToTyped(in, out interface{}) error { + b, err := json.Marshal(in) + if err != nil { + return err + } + + decoder := json.NewDecoder(bytes.NewReader(b)) + return decoder.Decode(&out) +} + +func convertTo(in interface{}) *dynamodb.AttributeValue { + a := &dynamodb.AttributeValue{} + + if in == nil { + a.NULL = new(bool) + *a.NULL = true + return a + } + + if m, ok := in.(map[string]interface{}); ok { + a.M = make(map[string]*dynamodb.AttributeValue) + for k, v := range m { + a.M[k] = convertTo(v) + } + return a + } + + v := reflect.ValueOf(in) + switch v.Kind() { + case reflect.Bool: + a.BOOL = new(bool) + *a.BOOL = v.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + a.N = new(string) + *a.N = strconv.FormatInt(v.Int(), 10) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + a.N = new(string) + *a.N = strconv.FormatUint(v.Uint(), 10) + case reflect.Float32, reflect.Float64: + a.N = new(string) + *a.N = strconv.FormatFloat(v.Float(), 'f', -1, 64) + case reflect.String: + if n, ok := in.(json.Number); ok { + a.N = new(string) + *a.N = n.String() + } else { + a.S = new(string) + *a.S = v.String() + } + case reflect.Slice: + switch v.Type() { + case reflect.TypeOf(([]byte)(nil)): + a.B = v.Bytes() + default: + a.L = make([]*dynamodb.AttributeValue, v.Len()) + for i := 0; i < v.Len(); i++ { + a.L[i] = convertTo(v.Index(i).Interface()) + } + } + default: + panic(fmt.Sprintf("the type %s is not supported", v.Type().String())) + } + + return a +} + +func convertFrom(a *dynamodb.AttributeValue) interface{} { + if a.S != nil { + return *a.S + } + + if a.N != nil { + // Number is tricky b/c we don't know which numeric type to use. Here we + // simply try the different types from most to least restrictive. 
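// For example, "42" parses as an int, "18446744073709551615" only fits a
// uint, and "3.5" (or any non-integral value) falls through to float64.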
+ if n, err := strconv.ParseInt(*a.N, 10, 64); err == nil { + return int(n) + } + if n, err := strconv.ParseUint(*a.N, 10, 64); err == nil { + return uint(n) + } + n, err := strconv.ParseFloat(*a.N, 64) + if err != nil { + panic(err) + } + return n + } + + if a.BOOL != nil { + return *a.BOOL + } + + if a.NULL != nil { + return nil + } + + if a.M != nil { + m := make(map[string]interface{}) + for k, v := range a.M { + m[k] = convertFrom(v) + } + return m + } + + if a.L != nil { + l := make([]interface{}, len(a.L)) + for index, v := range a.L { + l[index] = convertFrom(v) + } + return l + } + + if a.B != nil { + return a.B + } + + panic(fmt.Sprintf("%#v is not a supported dynamodb.AttributeValue", a)) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/decode.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/decode.go new file mode 100644 index 00000000..e0249756 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/decode.go @@ -0,0 +1,761 @@ +package dynamodbattribute + +import ( + "fmt" + "reflect" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/service/dynamodb" +) + +// An Unmarshaler is an interface to provide custom unmarshaling of +// AttributeValues. Use this to provide custom logic determining +// how AttributeValues should be unmarshaled. +// type ExampleUnmarshaler struct { +// Value int +// } +// +// func (u *exampleUnmarshaler) UnmarshalDynamoDBAttributeValue(av *dynamodb.AttributeValue) error { +// if av.N == nil { +// return nil +// } +// +// n, err := strconv.ParseInt(*av.N, 10, 0) +// if err != nil { +// return err +// } +// +// u.Value = n +// return nil +// } +type Unmarshaler interface { + UnmarshalDynamoDBAttributeValue(*dynamodb.AttributeValue) error +} + +// Unmarshal will unmarshal DynamoDB AttributeValues to Go value types. +// Both generic interface{} and concrete types are valid unmarshal +// destination types. +// +// Unmarshal will allocate maps, slices, and pointers as needed to +// unmarshal the AttributeValue into the provided type value. +// +// When unmarshaling AttributeValues into structs Unmarshal matches +// the field names of the struct to the AttributeValue Map keys. +// Initially it will look for exact field name matching, but will +// fall back to case insensitive if not exact match is found. +// +// With the exception of omitempty, omitemptyelem, binaryset, numberset +// and stringset all struct tags used by Marshal are also used by +// Unmarshal. +// +// When decoding AttributeValues to interfaces Unmarshal will use the +// following types. +// +// []byte, AV Binary (B) +// [][]byte, AV Binary Set (BS) +// bool, AV Boolean (BOOL) +// []interface{}, AV List (L) +// map[string]interface{}, AV Map (M) +// float64, AV Number (N) +// Number, AV Number (N) with UseNumber set +// []float64, AV Number Set (NS) +// []Number, AV Number Set (NS) with UseNumber set +// string, AV String (S) +// []string, AV String Set (SS) +// +// If the Decoder option, UseNumber is set numbers will be unmarshaled +// as Number values instead of float64. Use this to maintain the original +// string formating of the number as it was represented in the AttributeValue. +// In addition provides additional opportunities to parse the number +// string based on individual use cases. +// +// When unmarshaling any error that occurs will halt the unmarshal +// and return the error. 
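//
// Illustrative sketch (editorial, not part of the generated SDK code):
// unmarshaling the Items of a Scan response into a typed slice with
// UnmarshalListOfMaps, defined further below. svc and the Record type are
// hypothetical.
//
//    out, err := svc.Scan(&dynamodb.ScanInput{TableName: aws.String("Records")})
//    if err == nil {
//        var records []Record
//        err = dynamodbattribute.UnmarshalListOfMaps(out.Items, &records)
//    }
//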
+// +// The output value provided must be a non-nil pointer +func Unmarshal(av *dynamodb.AttributeValue, out interface{}) error { + return NewDecoder().Decode(av, out) +} + +// UnmarshalMap is an alias for Unmarshal which unmarshals from +// a map of AttributeValues. +// +// The output value provided must be a non-nil pointer +func UnmarshalMap(m map[string]*dynamodb.AttributeValue, out interface{}) error { + return NewDecoder().Decode(&dynamodb.AttributeValue{M: m}, out) +} + +// UnmarshalList is an alias for Unmarshal func which unmarshals +// a slice of AttributeValues. +// +// The output value provided must be a non-nil pointer +func UnmarshalList(l []*dynamodb.AttributeValue, out interface{}) error { + return NewDecoder().Decode(&dynamodb.AttributeValue{L: l}, out) +} + +// UnmarshalListOfMaps is an alias for Unmarshal func which unmarshals a +// slice of maps of attribute values. +// +// This is useful for when you need to unmarshal the Items from a DynamoDB +// Query API call. +// +// The output value provided must be a non-nil pointer +func UnmarshalListOfMaps(l []map[string]*dynamodb.AttributeValue, out interface{}) error { + items := make([]*dynamodb.AttributeValue, len(l)) + for i, m := range l { + items[i] = &dynamodb.AttributeValue{M: m} + } + + return UnmarshalList(items, out) +} + +// A Decoder provides unmarshaling AttributeValues to Go value types. +type Decoder struct { + MarshalOptions + + // Instructs the decoder to decode AttributeValue Numbers as + // Number type instead of float64 when the destination type + // is interface{}. Similar to encoding/json.Number + UseNumber bool +} + +// NewDecoder creates a new Decoder with default configuration. Use +// the `opts` functional options to override the default configuration. +func NewDecoder(opts ...func(*Decoder)) *Decoder { + d := &Decoder{ + MarshalOptions: MarshalOptions{ + SupportJSONTags: true, + }, + } + for _, o := range opts { + o(d) + } + + return d +} + +// Decode will unmarshal an AttributeValue into a Go value type. An error +// will be return if the decoder is unable to unmarshal the AttributeValue +// to the provide Go value type. 
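//
// Illustrative sketch (editorial, not part of the generated SDK code): a
// Decoder configured with UseNumber so that numbers decoded into interface{}
// values stay as Number strings rather than float64. The av value is
// hypothetical.
//
//    d := dynamodbattribute.NewDecoder(func(d *dynamodbattribute.Decoder) {
//        d.UseNumber = true
//    })
//
//    var out interface{}
//    err := d.Decode(av, &out)
//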
+// +// The output value provided must be a non-nil pointer +func (d *Decoder) Decode(av *dynamodb.AttributeValue, out interface{}, opts ...func(*Decoder)) error { + v := reflect.ValueOf(out) + if v.Kind() != reflect.Ptr || v.IsNil() || !v.IsValid() { + return &InvalidUnmarshalError{Type: reflect.TypeOf(out)} + } + + return d.decode(av, v, tag{}) +} + +var stringInterfaceMapType = reflect.TypeOf(map[string]interface{}(nil)) +var byteSliceType = reflect.TypeOf([]byte(nil)) +var byteSliceSlicetype = reflect.TypeOf([][]byte(nil)) +var numberType = reflect.TypeOf(Number("")) +var timeType = reflect.TypeOf(time.Time{}) + +func (d *Decoder) decode(av *dynamodb.AttributeValue, v reflect.Value, fieldTag tag) error { + var u Unmarshaler + if av == nil || av.NULL != nil { + u, v = indirect(v, true) + if u != nil { + return u.UnmarshalDynamoDBAttributeValue(av) + } + return d.decodeNull(v) + } + + u, v = indirect(v, false) + if u != nil { + return u.UnmarshalDynamoDBAttributeValue(av) + } + + switch { + case len(av.B) != 0: + return d.decodeBinary(av.B, v) + case av.BOOL != nil: + return d.decodeBool(av.BOOL, v) + case len(av.BS) != 0: + return d.decodeBinarySet(av.BS, v) + case len(av.L) != 0: + return d.decodeList(av.L, v) + case len(av.M) != 0: + return d.decodeMap(av.M, v) + case av.N != nil: + return d.decodeNumber(av.N, v, fieldTag) + case len(av.NS) != 0: + return d.decodeNumberSet(av.NS, v) + case av.S != nil: + return d.decodeString(av.S, v, fieldTag) + case len(av.SS) != 0: + return d.decodeStringSet(av.SS, v) + } + + return nil +} + +func (d *Decoder) decodeBinary(b []byte, v reflect.Value) error { + if v.Kind() == reflect.Interface { + buf := make([]byte, len(b)) + copy(buf, b) + v.Set(reflect.ValueOf(buf)) + return nil + } + + if v.Kind() != reflect.Slice && v.Kind() != reflect.Array { + return &UnmarshalTypeError{Value: "binary", Type: v.Type()} + } + + if v.Type() == byteSliceType { + // Optimization for []byte types + if v.IsNil() || v.Cap() < len(b) { + v.Set(reflect.MakeSlice(byteSliceType, len(b), len(b))) + } else if v.Len() != len(b) { + v.SetLen(len(b)) + } + copy(v.Interface().([]byte), b) + return nil + } + + switch v.Type().Elem().Kind() { + case reflect.Uint8: + // Fallback to reflection copy for type aliased of []byte type + if v.Kind() != reflect.Array && (v.IsNil() || v.Cap() < len(b)) { + v.Set(reflect.MakeSlice(v.Type(), len(b), len(b))) + } else if v.Len() != len(b) { + v.SetLen(len(b)) + } + for i := 0; i < len(b); i++ { + v.Index(i).SetUint(uint64(b[i])) + } + default: + if v.Kind() == reflect.Array { + switch v.Type().Elem().Kind() { + case reflect.Uint8: + reflect.Copy(v, reflect.ValueOf(b)) + default: + return &UnmarshalTypeError{Value: "binary", Type: v.Type()} + } + + break + } + + return &UnmarshalTypeError{Value: "binary", Type: v.Type()} + } + + return nil +} + +func (d *Decoder) decodeBool(b *bool, v reflect.Value) error { + switch v.Kind() { + case reflect.Bool, reflect.Interface: + v.Set(reflect.ValueOf(*b).Convert(v.Type())) + default: + return &UnmarshalTypeError{Value: "bool", Type: v.Type()} + } + + return nil +} + +func (d *Decoder) decodeBinarySet(bs [][]byte, v reflect.Value) error { + isArray := false + + switch v.Kind() { + case reflect.Slice: + // Make room for the slice elements if needed + if v.IsNil() || v.Cap() < len(bs) { + // What about if ignoring nil/empty values? + v.Set(reflect.MakeSlice(v.Type(), 0, len(bs))) + } + case reflect.Array: + // Limited to capacity of existing array. 
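+ // The copy loop below stops at the array's capacity, so any extra set
+ // members are silently dropped rather than causing an error.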
+ isArray = true + case reflect.Interface: + set := make([][]byte, len(bs)) + for i, b := range bs { + if err := d.decodeBinary(b, reflect.ValueOf(&set[i]).Elem()); err != nil { + return err + } + } + v.Set(reflect.ValueOf(set)) + return nil + default: + return &UnmarshalTypeError{Value: "binary set", Type: v.Type()} + } + + for i := 0; i < v.Cap() && i < len(bs); i++ { + if !isArray { + v.SetLen(i + 1) + } + u, elem := indirect(v.Index(i), false) + if u != nil { + return u.UnmarshalDynamoDBAttributeValue(&dynamodb.AttributeValue{BS: bs}) + } + if err := d.decodeBinary(bs[i], elem); err != nil { + return err + } + } + + return nil +} + +func (d *Decoder) decodeNumber(n *string, v reflect.Value, fieldTag tag) error { + switch v.Kind() { + case reflect.Interface: + i, err := d.decodeNumberToInterface(n) + if err != nil { + return err + } + v.Set(reflect.ValueOf(i)) + return nil + case reflect.String: + if v.Type() == numberType { // Support Number value type + v.Set(reflect.ValueOf(Number(*n))) + return nil + } + v.Set(reflect.ValueOf(*n)) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + i, err := strconv.ParseInt(*n, 10, 64) + if err != nil { + return err + } + if v.OverflowInt(i) { + return &UnmarshalTypeError{ + Value: fmt.Sprintf("number overflow, %s", *n), + Type: v.Type(), + } + } + v.SetInt(i) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + i, err := strconv.ParseUint(*n, 10, 64) + if err != nil { + return err + } + if v.OverflowUint(i) { + return &UnmarshalTypeError{ + Value: fmt.Sprintf("number overflow, %s", *n), + Type: v.Type(), + } + } + v.SetUint(i) + case reflect.Float32, reflect.Float64: + i, err := strconv.ParseFloat(*n, 64) + if err != nil { + return err + } + if v.OverflowFloat(i) { + return &UnmarshalTypeError{ + Value: fmt.Sprintf("number overflow, %s", *n), + Type: v.Type(), + } + } + v.SetFloat(i) + default: + if v.Type().ConvertibleTo(timeType) && fieldTag.AsUnixTime { + t, err := decodeUnixTime(*n) + if err != nil { + return err + } + v.Set(reflect.ValueOf(t).Convert(v.Type())) + return nil + } + return &UnmarshalTypeError{Value: "number", Type: v.Type()} + } + + return nil +} + +func (d *Decoder) decodeNumberToInterface(n *string) (interface{}, error) { + if d.UseNumber { + return Number(*n), nil + } + + // Default to float64 for all numbers + return strconv.ParseFloat(*n, 64) +} + +func (d *Decoder) decodeNumberSet(ns []*string, v reflect.Value) error { + isArray := false + + switch v.Kind() { + case reflect.Slice: + // Make room for the slice elements if needed + if v.IsNil() || v.Cap() < len(ns) { + // What about if ignoring nil/empty values? + v.Set(reflect.MakeSlice(v.Type(), 0, len(ns))) + } + case reflect.Array: + // Limited to capacity of existing array. 
+ isArray = true + case reflect.Interface: + if d.UseNumber { + set := make([]Number, len(ns)) + for i, n := range ns { + if err := d.decodeNumber(n, reflect.ValueOf(&set[i]).Elem(), tag{}); err != nil { + return err + } + } + v.Set(reflect.ValueOf(set)) + } else { + set := make([]float64, len(ns)) + for i, n := range ns { + if err := d.decodeNumber(n, reflect.ValueOf(&set[i]).Elem(), tag{}); err != nil { + return err + } + } + v.Set(reflect.ValueOf(set)) + } + return nil + default: + return &UnmarshalTypeError{Value: "number set", Type: v.Type()} + } + + for i := 0; i < v.Cap() && i < len(ns); i++ { + if !isArray { + v.SetLen(i + 1) + } + u, elem := indirect(v.Index(i), false) + if u != nil { + return u.UnmarshalDynamoDBAttributeValue(&dynamodb.AttributeValue{NS: ns}) + } + if err := d.decodeNumber(ns[i], elem, tag{}); err != nil { + return err + } + } + + return nil +} + +func (d *Decoder) decodeList(avList []*dynamodb.AttributeValue, v reflect.Value) error { + isArray := false + + switch v.Kind() { + case reflect.Slice: + // Make room for the slice elements if needed + if v.IsNil() || v.Cap() < len(avList) { + // What about if ignoring nil/empty values? + v.Set(reflect.MakeSlice(v.Type(), 0, len(avList))) + } + case reflect.Array: + // Limited to capacity of existing array. + isArray = true + case reflect.Interface: + s := make([]interface{}, len(avList)) + for i, av := range avList { + if err := d.decode(av, reflect.ValueOf(&s[i]).Elem(), tag{}); err != nil { + return err + } + } + v.Set(reflect.ValueOf(s)) + return nil + default: + return &UnmarshalTypeError{Value: "list", Type: v.Type()} + } + + // If v is not a slice, array + for i := 0; i < v.Cap() && i < len(avList); i++ { + if !isArray { + v.SetLen(i + 1) + } + + if err := d.decode(avList[i], v.Index(i), tag{}); err != nil { + return err + } + } + + return nil +} + +func (d *Decoder) decodeMap(avMap map[string]*dynamodb.AttributeValue, v reflect.Value) error { + switch v.Kind() { + case reflect.Map: + t := v.Type() + if t.Key().Kind() != reflect.String { + return &UnmarshalTypeError{Value: "map string key", Type: t.Key()} + } + if v.IsNil() { + v.Set(reflect.MakeMap(t)) + } + case reflect.Struct: + case reflect.Interface: + v.Set(reflect.MakeMap(stringInterfaceMapType)) + v = v.Elem() + default: + return &UnmarshalTypeError{Value: "map", Type: v.Type()} + } + + if v.Kind() == reflect.Map { + for k, av := range avMap { + key := reflect.ValueOf(k) + elem := reflect.New(v.Type().Elem()).Elem() + if err := d.decode(av, elem, tag{}); err != nil { + return err + } + v.SetMapIndex(key, elem) + } + } else if v.Kind() == reflect.Struct { + fields := unionStructFields(v.Type(), d.MarshalOptions) + for k, av := range avMap { + if f, ok := fieldByName(fields, k); ok { + fv := fieldByIndex(v, f.Index, func(v *reflect.Value) bool { + v.Set(reflect.New(v.Type().Elem())) + return true // to continue the loop. 
+ }) + if err := d.decode(av, fv, f.tag); err != nil { + return err + } + } + } + } + + return nil +} + +func (d *Decoder) decodeNull(v reflect.Value) error { + if v.IsValid() && v.CanSet() { + v.Set(reflect.Zero(v.Type())) + } + + return nil +} + +func (d *Decoder) decodeString(s *string, v reflect.Value, fieldTag tag) error { + if fieldTag.AsString { + return d.decodeNumber(s, v, fieldTag) + } + + // To maintain backwards compatibility with ConvertFrom family of methods which + // converted strings to time.Time structs + if v.Type().ConvertibleTo(timeType) { + t, err := time.Parse(time.RFC3339, *s) + if err != nil { + return err + } + v.Set(reflect.ValueOf(t).Convert(v.Type())) + return nil + } + + switch v.Kind() { + case reflect.String: + v.SetString(*s) + case reflect.Interface: + // Ensure type aliasing is handled properly + v.Set(reflect.ValueOf(*s).Convert(v.Type())) + default: + return &UnmarshalTypeError{Value: "string", Type: v.Type()} + } + + return nil +} + +func (d *Decoder) decodeStringSet(ss []*string, v reflect.Value) error { + isArray := false + + switch v.Kind() { + case reflect.Slice: + // Make room for the slice elements if needed + if v.IsNil() || v.Cap() < len(ss) { + v.Set(reflect.MakeSlice(v.Type(), 0, len(ss))) + } + case reflect.Array: + // Limited to capacity of existing array. + isArray = true + case reflect.Interface: + set := make([]string, len(ss)) + for i, s := range ss { + if err := d.decodeString(s, reflect.ValueOf(&set[i]).Elem(), tag{}); err != nil { + return err + } + } + v.Set(reflect.ValueOf(set)) + return nil + default: + return &UnmarshalTypeError{Value: "string set", Type: v.Type()} + } + + for i := 0; i < v.Cap() && i < len(ss); i++ { + if !isArray { + v.SetLen(i + 1) + } + u, elem := indirect(v.Index(i), false) + if u != nil { + return u.UnmarshalDynamoDBAttributeValue(&dynamodb.AttributeValue{SS: ss}) + } + if err := d.decodeString(ss[i], elem, tag{}); err != nil { + return err + } + } + + return nil +} + +func decodeUnixTime(n string) (time.Time, error) { + v, err := strconv.ParseInt(n, 10, 64) + if err != nil { + return time.Time{}, &UnmarshalError{ + Err: err, Value: n, Type: timeType, + } + } + + return time.Unix(v, 0), nil +} + +// indirect will walk a value's interface or pointer value types. Returning +// the final value or the value a unmarshaler is defined on. +// +// Based on the enoding/json type reflect value type indirection in Go Stdlib +// https://golang.org/src/encoding/json/decode.go indirect func. +func indirect(v reflect.Value, decodingNull bool) (Unmarshaler, reflect.Value) { + if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() { + v = v.Addr() + } + for { + if v.Kind() == reflect.Interface && !v.IsNil() { + e := v.Elem() + if e.Kind() == reflect.Ptr && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Ptr) { + v = e + continue + } + } + if v.Kind() != reflect.Ptr { + break + } + if v.Elem().Kind() != reflect.Ptr && decodingNull && v.CanSet() { + break + } + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + if v.Type().NumMethod() > 0 { + if u, ok := v.Interface().(Unmarshaler); ok { + return u, reflect.Value{} + } + } + v = v.Elem() + } + + return nil, v +} + +// A Number represents a Attributevalue number literal. +type Number string + +// Float64 attempts to cast the number ot a float64, returning +// the result of the case or error if the case failed. 
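+//
+// For illustration, a minimal sketch (the example value is assumed):
+//
+//    n := dynamodbattribute.Number("123.45")
+//    f, err := n.Float64()
+//    if err != nil {
+//        return err
+//    }
+//    // f == 123.45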
+func (n Number) Float64() (float64, error) { + return strconv.ParseFloat(string(n), 64) +} + +// Int64 attempts to cast the number ot a int64, returning +// the result of the case or error if the case failed. +func (n Number) Int64() (int64, error) { + return strconv.ParseInt(string(n), 10, 64) +} + +// Uint64 attempts to cast the number ot a uint64, returning +// the result of the case or error if the case failed. +func (n Number) Uint64() (uint64, error) { + return strconv.ParseUint(string(n), 10, 64) +} + +// String returns the raw number represented as a string +func (n Number) String() string { + return string(n) +} + +type emptyOrigError struct{} + +func (e emptyOrigError) OrigErr() error { + return nil +} + +// An UnmarshalTypeError is an error type representing a error +// unmarshaling the AttributeValue's element to a Go value type. +// Includes details about the AttributeValue type and Go value type. +type UnmarshalTypeError struct { + emptyOrigError + Value string + Type reflect.Type +} + +// Error returns the string representation of the error. +// satisfying the error interface +func (e *UnmarshalTypeError) Error() string { + return fmt.Sprintf("%s: %s", e.Code(), e.Message()) +} + +// Code returns the code of the error, satisfying the awserr.Error +// interface. +func (e *UnmarshalTypeError) Code() string { + return "UnmarshalTypeError" +} + +// Message returns the detailed message of the error, satisfying +// the awserr.Error interface. +func (e *UnmarshalTypeError) Message() string { + return "cannot unmarshal " + e.Value + " into Go value of type " + e.Type.String() +} + +// An InvalidUnmarshalError is an error type representing an invalid type +// encountered while unmarshaling a AttributeValue to a Go value type. +type InvalidUnmarshalError struct { + emptyOrigError + Type reflect.Type +} + +// Error returns the string representation of the error. +// satisfying the error interface +func (e *InvalidUnmarshalError) Error() string { + return fmt.Sprintf("%s: %s", e.Code(), e.Message()) +} + +// Code returns the code of the error, satisfying the awserr.Error +// interface. +func (e *InvalidUnmarshalError) Code() string { + return "InvalidUnmarshalError" +} + +// Message returns the detailed message of the error, satisfying +// the awserr.Error interface. +func (e *InvalidUnmarshalError) Message() string { + if e.Type == nil { + return "cannot unmarshal to nil value" + } + if e.Type.Kind() != reflect.Ptr { + return "cannot unmarshal to non-pointer value, got " + e.Type.String() + } + return "cannot unmarshal to nil value, " + e.Type.String() +} + +// An UnmarshalError wraps an error that occured while unmarshaling a DynamoDB +// AttributeValue element into a Go type. This is different from UnmarshalTypeError +// in that it wraps the underlying error that occured. +type UnmarshalError struct { + Err error + Value string + Type reflect.Type +} + +// Error returns the string representation of the error. +// satisfying the error interface. +func (e *UnmarshalError) Error() string { + return fmt.Sprintf("%s: %s\ncaused by: %v", e.Code(), e.Message(), e.Err) +} + +// OrigErr returns the original error that caused this issue. +func (e UnmarshalError) OrigErr() error { + return e.Err +} + +// Code returns the code of the error, satisfying the awserr.Error +// interface. +func (e *UnmarshalError) Code() string { + return "UnmarshalError" +} + +// Message returns the detailed message of the error, satisfying +// the awserr.Error interface. 
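+//
+// Because the error types in this file satisfy the awserr.Error interface, a
+// caller can inspect them with a type assertion. A minimal sketch (the awserr
+// and log imports, item, and rec are assumptions; errors bubbled up from
+// strconv do not implement awserr.Error, hence the ok check):
+//
+//    if err := dynamodbattribute.UnmarshalMap(item, &rec); err != nil {
+//        if aerr, ok := err.(awserr.Error); ok {
+//            log.Println(aerr.Code(), aerr.Message())
+//        }
+//        return err
+//    }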
+func (e *UnmarshalError) Message() string { + return fmt.Sprintf("cannot unmarshal %q into %s.", + e.Value, e.Type.String()) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/doc.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/doc.go new file mode 100644 index 00000000..7a51ac07 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/doc.go @@ -0,0 +1,95 @@ +// Package dynamodbattribute provides marshaling and unmarshaling utilities to +// convert between Go types and dynamodb.AttributeValues. +// +// These utilities allow you to marshal slices, maps, structs, and scalar values +// to and from dynamodb.AttributeValue. These are useful when marshaling +// Go value tyes to dynamodb.AttributeValue for DynamoDB requests, or +// unmarshaling the dynamodb.AttributeValue back into a Go value type. +// +// AttributeValue Marshaling +// +// To marshal a Go type to a dynamodbAttributeValue you can use the Marshal +// functions in the dynamodbattribute package. There are specialized versions +// of these functions for collections of Attributevalue, such as maps and lists. +// +// The following example uses MarshalMap to convert the Record Go type to a +// dynamodb.AttributeValue type and use the value to make a PutItem API request. +// +// type Record struct { +// ID string +// URLs []string +// } +// +// //... +// +// r := Record{ +// ID: "ABC123", +// URLs: []string{ +// "https://example.com/first/link", +// "https://example.com/second/url", +// }, +// } +// av, err := dynamodbattribute.MarshalMap(r) +// if err != nil { +// panic(fmt.Sprintf("failed to DynamoDB marshal Record, %v", err)) +// } +// +// _, err = svc.PutItem(&dynamodb.PutItemInput{ +// TableName: aws.String(myTableName), +// Item: av, +// }) +// if err != nil { +// panic(fmt.Sprintf("failed to put Record to DynamoDB, %v", err)) +// } +// +// AttributeValue Unmarshaling +// +// To unmarshal a dynamodb.AttributeValue to a Go type you can use the Unmarshal +// functions in the dynamodbattribute package. There are specialized versions +// of these functions for collections of Attributevalue, such as maps and lists. +// +// The following example will unmarshal the DynamoDB's Scan API operation. The +// Items returned by the operation will be unmarshaled into the slice of Records +// Go type. +// +// type Record struct { +// ID string +// URLs []string +// } +// +// //... +// +// var records []Record +// +// // Use the ScanPages method to perform the scan with pagination. Use +// // just Scan method to make the API call without pagination. +// err := svc.ScanPages(&dynamodb.ScanInput{ +// TableName: aws.String(myTableName), +// }, func(page *dynamodb.ScanOutput, last bool) bool { +// recs := []Record{} +// +// err := dynamodbattribute.UnmarshalListOfMaps(page.Items, &recs) +// if err != nil { +// panic(fmt.Sprintf("failed to unmarshal Dynamodb Scan Items, %v", err)) +// } +// +// records = append(records, recs...) +// +// return true // keep paging +// }) +// +// The ConvertTo, ConvertToList, ConvertToMap, ConvertFrom, ConvertFromMap +// and ConvertFromList methods have been deprecated. The Marshal and Unmarshal +// functions should be used instead. The ConvertTo|From marshallers do not +// support BinarySet, NumberSet, nor StringSets, and will incorrect marshal +// binary data fields in structs as base64 strings. +// +// The Marshal and Unmarshal functions correct this behavior, and removes +// the reliance on encoding.json. 
`json` struct tags are still supported. In +// addition support for a new struct tag `dynamodbav` was added. Support for +// the json.Marshaler and json.Unmarshaler interfaces have been removed and +// replaced with have been replaced with dynamodbattribute.Marshaler and +// dynamodbattribute.Unmarshaler interfaces. +// +// `time.Time` is marshaled as RFC3339 format. +package dynamodbattribute diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/encode.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/encode.go new file mode 100644 index 00000000..fb30eff9 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/encode.go @@ -0,0 +1,641 @@ +package dynamodbattribute + +import ( + "fmt" + "reflect" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dynamodb" +) + +// An UnixTime provides aliasing of time.Time into a type that when marshaled +// and unmarshaled with DynamoDB AttributeValues it will be done so as number +// instead of string in seconds since January 1, 1970 UTC. +// +// This type is useful as an alternative to the struct tag `unixtime` when you +// want to have your time value marshaled as Unix time in seconds intead of +// the default time.RFC3339. +// +// Important to note that zero value time as unixtime is not 0 seconds +// from January 1, 1970 UTC, but -62135596800. Which is seconds between +// January 1, 0001 UTC, and January 1, 0001 UTC. +type UnixTime time.Time + +// MarshalDynamoDBAttributeValue implements the Marshaler interface so that +// the UnixTime can be marshaled from to a DynamoDB AttributeValue number +// value encoded in the number of seconds since January 1, 1970 UTC. +func (e UnixTime) MarshalDynamoDBAttributeValue(av *dynamodb.AttributeValue) error { + t := time.Time(e) + s := strconv.FormatInt(t.Unix(), 10) + av.N = &s + + return nil +} + +// UnmarshalDynamoDBAttributeValue implements the Unmarshaler interface so that +// the UnixTime can be unmarshaled from a DynamoDB AttributeValue number representing +// the number of seconds since January 1, 1970 UTC. +// +// If an error parsing the AttributeValue number occurs UnmarshalError will be +// returned. +func (e *UnixTime) UnmarshalDynamoDBAttributeValue(av *dynamodb.AttributeValue) error { + t, err := decodeUnixTime(aws.StringValue(av.N)) + if err != nil { + return err + } + + *e = UnixTime(t) + return nil +} + +// A Marshaler is an interface to provide custom marshaling of Go value types +// to AttributeValues. Use this to provide custom logic determining how a +// Go Value type should be marshaled. +// +// type ExampleMarshaler struct { +// Value int +// } +// func (m *ExampleMarshaler) MarshalDynamoDBAttributeValue(av *dynamodb.AttributeValue) error { +// n := fmt.Sprintf("%v", m.Value) +// av.N = &n +// return nil +// } +// +type Marshaler interface { + MarshalDynamoDBAttributeValue(*dynamodb.AttributeValue) error +} + +// Marshal will serialize the passed in Go value type into a DynamoDB AttributeValue +// type. This value can be used in DynamoDB API operations to simplify marshaling +// your Go value types into AttributeValues. +// +// Marshal will recursively transverse the passed in value marshaling its +// contents into a AttributeValue. Marshal supports basic scalars +// (int,uint,float,bool,string), maps, slices, and structs. Anonymous +// nested types are flattened based on Go anonymous type visibility. 
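+//
+// For illustration, a minimal sketch of the anonymous-field flattening (the
+// example types are assumptions):
+//
+//    type Base struct {
+//        ID string
+//    }
+//    type Record struct {
+//        Base        // promoted: marshals as a top-level "ID" attribute
+//        Name string
+//    }
+//
+//    av, err := dynamodbattribute.Marshal(Record{Base: Base{ID: "a"}, Name: "b"})
+//    // on success av.M contains the keys "ID" and "Name"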
+// +// Marshaling slices to AttributeValue will default to a List for all +// types except for []byte and [][]byte. []byte will be marshaled as +// Binary data (B), and [][]byte will be marshaled as binary data set +// (BS). +// +// `dynamodbav` struct tag can be used to control how the value will be +// marshaled into a AttributeValue. +// +// // Field is ignored +// Field int `dynamodbav:"-"` +// +// // Field AttributeValue map key "myName" +// Field int `dynamodbav:"myName"` +// +// // Field AttributeValue map key "myName", and +// // Field is omitted if it is empty +// Field int `dynamodbav:"myName,omitempty"` +// +// // Field AttributeValue map key "Field", and +// // Field is omitted if it is empty +// Field int `dynamodbav:",omitempty"` +// +// // Field's elems will be omitted if empty +// // only valid for slices, and maps. +// Field []string `dynamodbav:",omitemptyelem"` +// +// // Field will be marshaled as a AttributeValue string +// // only value for number types, (int,uint,float) +// Field int `dynamodbav:",string"` +// +// // Field will be marshaled as a binary set +// Field [][]byte `dynamodbav:",binaryset"` +// +// // Field will be marshaled as a number set +// Field []int `dynamodbav:",numberset"` +// +// // Field will be marshaled as a string set +// Field []string `dynamodbav:",stringset"` +// +// // Field will be marshaled as Unix time number in seconds. +// // This tag is only valid with time.Time typed struct fields. +// // Important to note that zero value time as unixtime is not 0 seconds +// // from January 1, 1970 UTC, but -62135596800. Which is seconds between +// // January 1, 0001 UTC, and January 1, 0001 UTC. +// Field time.Time `dynamodbav:",unixtime"` +// +// The omitempty tag is only used during Marshaling and is ignored for +// Unmarshal. Any zero value or a value when marshaled results in a +// AttributeValue NULL will be added to AttributeValue Maps during struct +// marshal. The omitemptyelem tag works the same as omitempty except it +// applies to maps and slices instead of struct fields, and will not be +// included in the marshaled AttributeValue Map, List, or Set. +// +// For convenience and backwards compatibility with ConvertTo functions +// json struct tags are supported by the Marshal and Unmarshal. If +// both json and dynamodbav struct tags are provided the json tag will +// be ignored in favor of dynamodbav. +// +// All struct fields and with anonymous fields, are marshaled unless the +// any of the following conditions are meet. +// +// - the field is not exported +// - json or dynamodbav field tag is "-" +// - json or dynamodbav field tag specifies "omitempty", and is empty. +// +// Pointer and interfaces values encode as the value pointed to or contained +// in the interface. A nil value encodes as the AttributeValue NULL value. +// +// Channel, complex, and function values are not encoded and will be skipped +// when walking the value to be marshaled. +// +// When marshaling any error that occurs will halt the marshal and return +// the error. +// +// Marshal cannot represent cyclic data structures and will not handle them. +// Passing cyclic structures to Marshal will result in an infinite recursion. +func Marshal(in interface{}) (*dynamodb.AttributeValue, error) { + return NewEncoder().Encode(in) +} + +// MarshalMap is an alias for Marshal func which marshals Go value +// type to a map of AttributeValues. +// +// This is useful for DynamoDB APIs such as PutItem. 
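+//
+// For illustration, a minimal sketch using MarshalMap to build the Key for a
+// DeleteItem call (svc, the table name, and the key value are assumptions):
+//
+//    key, err := dynamodbattribute.MarshalMap(struct {
+//        ID string
+//    }{ID: "ABC123"})
+//    if err != nil {
+//        return err
+//    }
+//
+//    _, err = svc.DeleteItem(&dynamodb.DeleteItemInput{
+//        TableName: aws.String("my-table"),
+//        Key:       key,
+//    })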
+func MarshalMap(in interface{}) (map[string]*dynamodb.AttributeValue, error) { + av, err := NewEncoder().Encode(in) + if err != nil || av == nil || av.M == nil { + return map[string]*dynamodb.AttributeValue{}, err + } + + return av.M, nil +} + +// MarshalList is an alias for Marshal func which marshals Go value +// type to a slice of AttributeValues. +func MarshalList(in interface{}) ([]*dynamodb.AttributeValue, error) { + av, err := NewEncoder().Encode(in) + if err != nil || av == nil || av.L == nil { + return []*dynamodb.AttributeValue{}, err + } + + return av.L, nil +} + +// A MarshalOptions is a collection of options shared between marshaling +// and unmarshaling +type MarshalOptions struct { + // States that the encoding/json struct tags should be supported. + // if a `dynamodbav` struct tag is also provided the encoding/json + // tag will be ignored. + // + // Enabled by default. + SupportJSONTags bool +} + +// An Encoder provides marshaling Go value types to AttributeValues. +type Encoder struct { + MarshalOptions + + // Empty strings, "", will be marked as NULL AttributeValue types. + // Empty strings are not valid values for DynamoDB. Will not apply + // to lists, sets, or maps. Use the struct tag `omitemptyelem` + // to skip empty (zero) values in lists, sets and maps. + // + // Enabled by default. + NullEmptyString bool +} + +// NewEncoder creates a new Encoder with default configuration. Use +// the `opts` functional options to override the default configuration. +func NewEncoder(opts ...func(*Encoder)) *Encoder { + e := &Encoder{ + MarshalOptions: MarshalOptions{ + SupportJSONTags: true, + }, + NullEmptyString: true, + } + for _, o := range opts { + o(e) + } + + return e +} + +// Encode will marshal a Go value type to an AttributeValue. Returning +// the AttributeValue constructed or error. +func (e *Encoder) Encode(in interface{}) (*dynamodb.AttributeValue, error) { + av := &dynamodb.AttributeValue{} + if err := e.encode(av, reflect.ValueOf(in), tag{}); err != nil { + return nil, err + } + + return av, nil +} + +func fieldByIndex(v reflect.Value, index []int, + OnEmbeddedNilStruct func(*reflect.Value) bool) reflect.Value { + fv := v + for i, x := range index { + if i > 0 { + if fv.Kind() == reflect.Ptr && fv.Type().Elem().Kind() == reflect.Struct { + if fv.IsNil() && !OnEmbeddedNilStruct(&fv) { + break + } + fv = fv.Elem() + } + } + fv = fv.Field(x) + } + return fv +} + +func (e *Encoder) encode(av *dynamodb.AttributeValue, v reflect.Value, fieldTag tag) error { + // We should check for omitted values first before dereferencing. 
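+ // A nil pointer or interface tagged omitempty counts as empty here, so it
+ // is encoded as NULL (and dropped later by keepOrOmitEmpty in the caller)
+ // without ever being dereferenced.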
+ if fieldTag.OmitEmpty && emptyValue(v) { + encodeNull(av) + return nil + } + + // Handle both pointers and interface conversion into types + v = valueElem(v) + + if v.Kind() != reflect.Invalid { + if used, err := tryMarshaler(av, v); used { + return err + } + } + + switch v.Kind() { + case reflect.Invalid: + encodeNull(av) + case reflect.Struct: + return e.encodeStruct(av, v, fieldTag) + case reflect.Map: + return e.encodeMap(av, v, fieldTag) + case reflect.Slice, reflect.Array: + return e.encodeSlice(av, v, fieldTag) + case reflect.Chan, reflect.Func, reflect.UnsafePointer: + // do nothing for unsupported types + default: + return e.encodeScalar(av, v, fieldTag) + } + + return nil +} + +func (e *Encoder) encodeStruct(av *dynamodb.AttributeValue, v reflect.Value, fieldTag tag) error { + // To maintain backwards compatibility with ConvertTo family of methods which + // converted time.Time structs to strings + if v.Type().ConvertibleTo(timeType) { + var t time.Time + t = v.Convert(timeType).Interface().(time.Time) + if fieldTag.AsUnixTime { + return UnixTime(t).MarshalDynamoDBAttributeValue(av) + } + s := t.Format(time.RFC3339Nano) + av.S = &s + return nil + } + + av.M = map[string]*dynamodb.AttributeValue{} + fields := unionStructFields(v.Type(), e.MarshalOptions) + for _, f := range fields { + if f.Name == "" { + return &InvalidMarshalError{msg: "map key cannot be empty"} + } + + found := true + fv := fieldByIndex(v, f.Index, func(v *reflect.Value) bool { + found = false + return false // to break the loop. + }) + if !found { + continue + } + elem := &dynamodb.AttributeValue{} + err := e.encode(elem, fv, f.tag) + if err != nil { + return err + } + skip, err := keepOrOmitEmpty(f.OmitEmpty, elem, err) + if err != nil { + return err + } else if skip { + continue + } + + av.M[f.Name] = elem + } + if len(av.M) == 0 { + encodeNull(av) + } + + return nil +} + +func (e *Encoder) encodeMap(av *dynamodb.AttributeValue, v reflect.Value, fieldTag tag) error { + av.M = map[string]*dynamodb.AttributeValue{} + for _, key := range v.MapKeys() { + keyName := fmt.Sprint(key.Interface()) + if keyName == "" { + return &InvalidMarshalError{msg: "map key cannot be empty"} + } + + elemVal := v.MapIndex(key) + elem := &dynamodb.AttributeValue{} + err := e.encode(elem, elemVal, tag{}) + skip, err := keepOrOmitEmpty(fieldTag.OmitEmptyElem, elem, err) + if err != nil { + return err + } else if skip { + continue + } + + av.M[keyName] = elem + } + if len(av.M) == 0 { + encodeNull(av) + } + + return nil +} + +func (e *Encoder) encodeSlice(av *dynamodb.AttributeValue, v reflect.Value, fieldTag tag) error { + switch v.Type().Elem().Kind() { + case reflect.Uint8: + slice := reflect.MakeSlice(byteSliceType, v.Len(), v.Len()) + reflect.Copy(slice, v) + + b := slice.Bytes() + if len(b) == 0 { + encodeNull(av) + return nil + } + av.B = append([]byte{}, b...) 
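+ // The append above stores a fresh copy of the bytes, so the marshaled
+ // AttributeValue does not alias the caller's backing array.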
+ default: + var elemFn func(dynamodb.AttributeValue) error + + if fieldTag.AsBinSet || v.Type() == byteSliceSlicetype { // Binary Set + av.BS = make([][]byte, 0, v.Len()) + elemFn = func(elem dynamodb.AttributeValue) error { + if elem.B == nil { + return &InvalidMarshalError{msg: "binary set must only contain non-nil byte slices"} + } + av.BS = append(av.BS, elem.B) + return nil + } + } else if fieldTag.AsNumSet { // Number Set + av.NS = make([]*string, 0, v.Len()) + elemFn = func(elem dynamodb.AttributeValue) error { + if elem.N == nil { + return &InvalidMarshalError{msg: "number set must only contain non-nil string numbers"} + } + av.NS = append(av.NS, elem.N) + return nil + } + } else if fieldTag.AsStrSet { // String Set + av.SS = make([]*string, 0, v.Len()) + elemFn = func(elem dynamodb.AttributeValue) error { + if elem.S == nil { + return &InvalidMarshalError{msg: "string set must only contain non-nil strings"} + } + av.SS = append(av.SS, elem.S) + return nil + } + } else { // List + av.L = make([]*dynamodb.AttributeValue, 0, v.Len()) + elemFn = func(elem dynamodb.AttributeValue) error { + av.L = append(av.L, &elem) + return nil + } + } + + if n, err := e.encodeList(v, fieldTag, elemFn); err != nil { + return err + } else if n == 0 { + encodeNull(av) + } + } + + return nil +} + +func (e *Encoder) encodeList(v reflect.Value, fieldTag tag, elemFn func(dynamodb.AttributeValue) error) (int, error) { + count := 0 + for i := 0; i < v.Len(); i++ { + elem := dynamodb.AttributeValue{} + err := e.encode(&elem, v.Index(i), tag{OmitEmpty: fieldTag.OmitEmptyElem}) + skip, err := keepOrOmitEmpty(fieldTag.OmitEmptyElem, &elem, err) + if err != nil { + return 0, err + } else if skip { + continue + } + + if err := elemFn(elem); err != nil { + return 0, err + } + count++ + } + + return count, nil +} + +func (e *Encoder) encodeScalar(av *dynamodb.AttributeValue, v reflect.Value, fieldTag tag) error { + if v.Type() == numberType { + s := v.String() + if fieldTag.AsString { + av.S = &s + } else { + av.N = &s + } + return nil + } + + switch v.Kind() { + case reflect.Bool: + av.BOOL = new(bool) + *av.BOOL = v.Bool() + case reflect.String: + if err := e.encodeString(av, v); err != nil { + return err + } + default: + // Fallback to encoding numbers, will return invalid type if not supported + if err := e.encodeNumber(av, v); err != nil { + return err + } + if fieldTag.AsString && av.NULL == nil && av.N != nil { + av.S = av.N + av.N = nil + } + } + + return nil +} + +func (e *Encoder) encodeNumber(av *dynamodb.AttributeValue, v reflect.Value) error { + if used, err := tryMarshaler(av, v); used { + return err + } + + var out string + switch v.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + out = encodeInt(v.Int()) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + out = encodeUint(v.Uint()) + case reflect.Float32, reflect.Float64: + out = encodeFloat(v.Float()) + default: + return &unsupportedMarshalTypeError{Type: v.Type()} + } + + av.N = &out + + return nil +} + +func (e *Encoder) encodeString(av *dynamodb.AttributeValue, v reflect.Value) error { + if used, err := tryMarshaler(av, v); used { + return err + } + + switch v.Kind() { + case reflect.String: + s := v.String() + if len(s) == 0 && e.NullEmptyString { + encodeNull(av) + } else { + av.S = &s + } + default: + return &unsupportedMarshalTypeError{Type: v.Type()} + } + + return nil +} + +func encodeInt(i int64) string { + return strconv.FormatInt(i, 10) +} +func encodeUint(u 
uint64) string { + return strconv.FormatUint(u, 10) +} +func encodeFloat(f float64) string { + return strconv.FormatFloat(f, 'f', -1, 64) +} +func encodeNull(av *dynamodb.AttributeValue) { + t := true + *av = dynamodb.AttributeValue{NULL: &t} +} + +func valueElem(v reflect.Value) reflect.Value { + switch v.Kind() { + case reflect.Interface, reflect.Ptr: + for v.Kind() == reflect.Interface || v.Kind() == reflect.Ptr { + v = v.Elem() + } + } + + return v +} + +func emptyValue(v reflect.Value) bool { + switch v.Kind() { + case reflect.Array, reflect.Map, reflect.Slice, reflect.String: + return v.Len() == 0 + case reflect.Bool: + return !v.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Interface, reflect.Ptr: + return v.IsNil() + } + return false +} + +func tryMarshaler(av *dynamodb.AttributeValue, v reflect.Value) (bool, error) { + if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() { + v = v.Addr() + } + + if v.Type().NumMethod() == 0 { + return false, nil + } + + if m, ok := v.Interface().(Marshaler); ok { + return true, m.MarshalDynamoDBAttributeValue(av) + } + + return false, nil +} + +func keepOrOmitEmpty(omitEmpty bool, av *dynamodb.AttributeValue, err error) (bool, error) { + if err != nil { + if _, ok := err.(*unsupportedMarshalTypeError); ok { + return true, nil + } + return false, err + } + + if av.NULL != nil && omitEmpty { + return true, nil + } + + return false, nil +} + +// An InvalidMarshalError is an error type representing an error +// occurring when marshaling a Go value type to an AttributeValue. +type InvalidMarshalError struct { + emptyOrigError + msg string +} + +// Error returns the string representation of the error. +// satisfying the error interface +func (e *InvalidMarshalError) Error() string { + return fmt.Sprintf("%s: %s", e.Code(), e.Message()) +} + +// Code returns the code of the error, satisfying the awserr.Error +// interface. +func (e *InvalidMarshalError) Code() string { + return "InvalidMarshalError" +} + +// Message returns the detailed message of the error, satisfying +// the awserr.Error interface. +func (e *InvalidMarshalError) Message() string { + return e.msg +} + +// An unsupportedMarshalTypeError represents a Go value type +// which cannot be marshaled into an AttributeValue and should +// be skipped by the marshaler. +type unsupportedMarshalTypeError struct { + emptyOrigError + Type reflect.Type +} + +// Error returns the string representation of the error. +// satisfying the error interface +func (e *unsupportedMarshalTypeError) Error() string { + return fmt.Sprintf("%s: %s", e.Code(), e.Message()) +} + +// Code returns the code of the error, satisfying the awserr.Error +// interface. +func (e *unsupportedMarshalTypeError) Code() string { + return "unsupportedMarshalTypeError" +} + +// Message returns the detailed message of the error, satisfying +// the awserr.Error interface. 
+func (e *unsupportedMarshalTypeError) Message() string { + return "Go value type " + e.Type.String() + " is not supported" +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/field.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/field.go new file mode 100644 index 00000000..1fe0d350 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/field.go @@ -0,0 +1,269 @@ +package dynamodbattribute + +import ( + "reflect" + "sort" + "strings" +) + +type field struct { + tag + + Name string + NameFromTag bool + + Index []int + Type reflect.Type +} + +func fieldByName(fields []field, name string) (field, bool) { + foldExists := false + foldField := field{} + + for _, f := range fields { + if f.Name == name { + return f, true + } + if !foldExists && strings.EqualFold(f.Name, name) { + foldField = f + foldExists = true + } + } + + return foldField, foldExists +} + +func buildField(pIdx []int, i int, sf reflect.StructField, fieldTag tag) field { + f := field{ + Name: sf.Name, + Type: sf.Type, + tag: fieldTag, + } + if len(fieldTag.Name) != 0 { + f.NameFromTag = true + f.Name = fieldTag.Name + } + + f.Index = make([]int, len(pIdx)+1) + copy(f.Index, pIdx) + f.Index[len(pIdx)] = i + + return f +} + +func unionStructFields(t reflect.Type, opts MarshalOptions) []field { + fields := enumFields(t, opts) + + sort.Sort(fieldsByName(fields)) + + fields = visibleFields(fields) + + return fields +} + +// enumFields will recursively iterate through a structure and its nested +// anonymous fields. +// +// Based on the enoding/json struct field enumeration of the Go Stdlib +// https://golang.org/src/encoding/json/encode.go typeField func. +func enumFields(t reflect.Type, opts MarshalOptions) []field { + // Fields to explore + current := []field{} + next := []field{{Type: t}} + + // count of queued names + count := map[reflect.Type]int{} + nextCount := map[reflect.Type]int{} + + visited := map[reflect.Type]struct{}{} + fields := []field{} + + for len(next) > 0 { + current, next = next, current[:0] + count, nextCount = nextCount, map[reflect.Type]int{} + + for _, f := range current { + if _, ok := visited[f.Type]; ok { + continue + } + visited[f.Type] = struct{}{} + + for i := 0; i < f.Type.NumField(); i++ { + sf := f.Type.Field(i) + if sf.PkgPath != "" && !sf.Anonymous { + // Ignore unexported and non-anonymous fields + // unexported but anonymous field may still be used if + // the type has exported nested fields + continue + } + + fieldTag := tag{} + fieldTag.parseAVTag(sf.Tag) + if opts.SupportJSONTags && fieldTag == (tag{}) { + fieldTag.parseJSONTag(sf.Tag) + } + + if fieldTag.Ignore { + continue + } + + ft := sf.Type + if ft.Name() == "" && ft.Kind() == reflect.Ptr { + ft = ft.Elem() + } + + structField := buildField(f.Index, i, sf, fieldTag) + structField.Type = ft + + if !sf.Anonymous || ft.Kind() != reflect.Struct { + fields = append(fields, structField) + if count[f.Type] > 1 { + // If there were multiple instances, add a second, + // so that the annihilation code will see a duplicate. + // It only cares about the distinction between 1 or 2, + // so don't bother generating any more copies. 
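+ // (The annihilation happens in dominantField, which reports a conflict
+ // when two fields share a name at the same depth.)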
+ fields = append(fields, structField) + } + continue + } + + // Record new anon struct to explore next round + nextCount[ft]++ + if nextCount[ft] == 1 { + next = append(next, structField) + } + } + } + } + + return fields +} + +// visibleFields will return a slice of fields which are visible based on +// Go's standard visiblity rules with the exception of ties being broken +// by depth and struct tag naming. +// +// Based on the enoding/json field filtering of the Go Stdlib +// https://golang.org/src/encoding/json/encode.go typeField func. +func visibleFields(fields []field) []field { + // Delete all fields that are hidden by the Go rules for embedded fields, + // except that fields with JSON tags are promoted. + + // The fields are sorted in primary order of name, secondary order + // of field index length. Loop over names; for each name, delete + // hidden fields by choosing the one dominant field that survives. + out := fields[:0] + for advance, i := 0, 0; i < len(fields); i += advance { + // One iteration per name. + // Find the sequence of fields with the name of this first field. + fi := fields[i] + name := fi.Name + for advance = 1; i+advance < len(fields); advance++ { + fj := fields[i+advance] + if fj.Name != name { + break + } + } + if advance == 1 { // Only one field with this name + out = append(out, fi) + continue + } + dominant, ok := dominantField(fields[i : i+advance]) + if ok { + out = append(out, dominant) + } + } + + fields = out + sort.Sort(fieldsByIndex(fields)) + + return fields +} + +// dominantField looks through the fields, all of which are known to +// have the same name, to find the single field that dominates the +// others using Go's embedding rules, modified by the presence of +// JSON tags. If there are multiple top-level fields, the boolean +// will be false: This condition is an error in Go and we skip all +// the fields. +// +// Based on the enoding/json field filtering of the Go Stdlib +// https://golang.org/src/encoding/json/encode.go dominantField func. +func dominantField(fields []field) (field, bool) { + // The fields are sorted in increasing index-length order. The winner + // must therefore be one with the shortest index length. Drop all + // longer entries, which is easy: just truncate the slice. + length := len(fields[0].Index) + tagged := -1 // Index of first tagged field. + for i, f := range fields { + if len(f.Index) > length { + fields = fields[:i] + break + } + if f.NameFromTag { + if tagged >= 0 { + // Multiple tagged fields at the same level: conflict. + // Return no field. + return field{}, false + } + tagged = i + } + } + if tagged >= 0 { + return fields[tagged], true + } + // All remaining fields have the same length. If there's more than one, + // we have a conflict (two fields named "X" at the same level) and we + // return no field. + if len(fields) > 1 { + return field{}, false + } + return fields[0], true +} + +// fieldsByName sorts field by name, breaking ties with depth, +// then breaking ties with "name came from json tag", then +// breaking ties with index sequence. +// +// Based on the enoding/json field filtering of the Go Stdlib +// https://golang.org/src/encoding/json/encode.go fieldsByName type. 
+type fieldsByName []field + +func (x fieldsByName) Len() int { return len(x) } + +func (x fieldsByName) Swap(i, j int) { x[i], x[j] = x[j], x[i] } + +func (x fieldsByName) Less(i, j int) bool { + if x[i].Name != x[j].Name { + return x[i].Name < x[j].Name + } + if len(x[i].Index) != len(x[j].Index) { + return len(x[i].Index) < len(x[j].Index) + } + if x[i].NameFromTag != x[j].NameFromTag { + return x[i].NameFromTag + } + return fieldsByIndex(x).Less(i, j) +} + +// fieldsByIndex sorts field by index sequence. +// +// Based on the enoding/json field filtering of the Go Stdlib +// https://golang.org/src/encoding/json/encode.go fieldsByIndex type. +type fieldsByIndex []field + +func (x fieldsByIndex) Len() int { return len(x) } + +func (x fieldsByIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] } + +func (x fieldsByIndex) Less(i, j int) bool { + for k, xik := range x[i].Index { + if k >= len(x[j].Index) { + return false + } + if xik != x[j].Index[k] { + return xik < x[j].Index[k] + } + } + return len(x[i].Index) < len(x[j].Index) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/tag.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/tag.go new file mode 100644 index 00000000..60bd609b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute/tag.go @@ -0,0 +1,68 @@ +package dynamodbattribute + +import ( + "reflect" + "strings" +) + +type tag struct { + Name string + Ignore bool + OmitEmpty bool + OmitEmptyElem bool + AsString bool + AsBinSet, AsNumSet, AsStrSet bool + AsUnixTime bool +} + +func (t *tag) parseAVTag(structTag reflect.StructTag) { + tagStr := structTag.Get("dynamodbav") + if len(tagStr) == 0 { + return + } + + t.parseTagStr(tagStr) +} + +func (t *tag) parseJSONTag(structTag reflect.StructTag) { + tagStr := structTag.Get("json") + if len(tagStr) == 0 { + return + } + + t.parseTagStr(tagStr) +} + +func (t *tag) parseTagStr(tagStr string) { + parts := strings.Split(tagStr, ",") + if len(parts) == 0 { + return + } + + if name := parts[0]; name == "-" { + t.Name = "" + t.Ignore = true + } else { + t.Name = name + t.Ignore = false + } + + for _, opt := range parts[1:] { + switch opt { + case "omitempty": + t.OmitEmpty = true + case "omitemptyelem": + t.OmitEmptyElem = true + case "string": + t.AsString = true + case "binaryset": + t.AsBinSet = true + case "numberset": + t.AsNumSet = true + case "stringset": + t.AsStrSet = true + case "unixtime": + t.AsUnixTime = true + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go new file mode 100644 index 00000000..5f601652 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go @@ -0,0 +1,149 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dynamodb + +const ( + + // ErrCodeBackupInUseException for service response error code + // "BackupInUseException". + // + // There is another ongoing conflicting backup control plane operation on the + // table. The backups is either being created, deleted or restored to a table. + ErrCodeBackupInUseException = "BackupInUseException" + + // ErrCodeBackupNotFoundException for service response error code + // "BackupNotFoundException". + // + // Backup not found for the given BackupARN. + ErrCodeBackupNotFoundException = "BackupNotFoundException" + + // ErrCodeConditionalCheckFailedException for service response error code + // "ConditionalCheckFailedException". 
+ // + // A condition specified in the operation could not be evaluated. + ErrCodeConditionalCheckFailedException = "ConditionalCheckFailedException" + + // ErrCodeContinuousBackupsUnavailableException for service response error code + // "ContinuousBackupsUnavailableException". + // + // Backups have not yet been enabled for this table. + ErrCodeContinuousBackupsUnavailableException = "ContinuousBackupsUnavailableException" + + // ErrCodeGlobalTableAlreadyExistsException for service response error code + // "GlobalTableAlreadyExistsException". + // + // The specified global table already exists. + ErrCodeGlobalTableAlreadyExistsException = "GlobalTableAlreadyExistsException" + + // ErrCodeGlobalTableNotFoundException for service response error code + // "GlobalTableNotFoundException". + // + // The specified global table does not exist. + ErrCodeGlobalTableNotFoundException = "GlobalTableNotFoundException" + + // ErrCodeIndexNotFoundException for service response error code + // "IndexNotFoundException". + // + // The operation tried to access a nonexistent index. + ErrCodeIndexNotFoundException = "IndexNotFoundException" + + // ErrCodeInternalServerError for service response error code + // "InternalServerError". + // + // An error occurred on the server side. + ErrCodeInternalServerError = "InternalServerError" + + // ErrCodeInvalidRestoreTimeException for service response error code + // "InvalidRestoreTimeException". + // + // An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime + // and LatestRestorableDateTime. + ErrCodeInvalidRestoreTimeException = "InvalidRestoreTimeException" + + // ErrCodeItemCollectionSizeLimitExceededException for service response error code + // "ItemCollectionSizeLimitExceededException". + // + // An item collection is too large. This exception is only returned for tables + // that have one or more local secondary indexes. + ErrCodeItemCollectionSizeLimitExceededException = "ItemCollectionSizeLimitExceededException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // Up to 50 CreateBackup operations are allowed per second, per account. There + // is no limit to the number of daily on-demand backups that can be taken. + // + // Up to 10 simultaneous table operations are allowed per account. These operations + // include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, + // and RestoreTableToPointInTime. + // + // For tables with secondary indexes, only one of those tables can be in the + // CREATING state at any point in time. Do not attempt to create more than one + // such table simultaneously. + // + // The total limit of tables in the ACTIVE state is 250. + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodePointInTimeRecoveryUnavailableException for service response error code + // "PointInTimeRecoveryUnavailableException". + // + // Point in time recovery has not yet been enabled for this source table. + ErrCodePointInTimeRecoveryUnavailableException = "PointInTimeRecoveryUnavailableException" + + // ErrCodeProvisionedThroughputExceededException for service response error code + // "ProvisionedThroughputExceededException". + // + // Your request rate is too high. The AWS SDKs for DynamoDB automatically retry + // requests that receive this exception. Your request is eventually successful, + // unless your retry queue is too large to finish. 
Reduce the frequency of requests + // and use exponential backoff. For more information, go to Error Retries and + // Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) + // in the Amazon DynamoDB Developer Guide. + ErrCodeProvisionedThroughputExceededException = "ProvisionedThroughputExceededException" + + // ErrCodeReplicaAlreadyExistsException for service response error code + // "ReplicaAlreadyExistsException". + // + // The specified replica is already part of the global table. + ErrCodeReplicaAlreadyExistsException = "ReplicaAlreadyExistsException" + + // ErrCodeReplicaNotFoundException for service response error code + // "ReplicaNotFoundException". + // + // The specified replica is no longer part of the global table. + ErrCodeReplicaNotFoundException = "ReplicaNotFoundException" + + // ErrCodeResourceInUseException for service response error code + // "ResourceInUseException". + // + // The operation conflicts with the resource's availability. For example, you + // attempted to recreate an existing table, or tried to delete a table currently + // in the CREATING state. + ErrCodeResourceInUseException = "ResourceInUseException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // The operation tried to access a nonexistent table or index. The resource + // might not be specified correctly, or its status might not be ACTIVE. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" + + // ErrCodeTableAlreadyExistsException for service response error code + // "TableAlreadyExistsException". + // + // A target table with the specified name already exists. + ErrCodeTableAlreadyExistsException = "TableAlreadyExistsException" + + // ErrCodeTableInUseException for service response error code + // "TableInUseException". + // + // A target table with the specified name is either being created or deleted. + ErrCodeTableInUseException = "TableInUseException" + + // ErrCodeTableNotFoundException for service response error code + // "TableNotFoundException". + // + // A source table with the name TableName does not currently exist within the + // subscriber's account. + ErrCodeTableNotFoundException = "TableNotFoundException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go new file mode 100644 index 00000000..80dcd19f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go @@ -0,0 +1,95 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dynamodb + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// DynamoDB provides the API operation methods for making requests to +// Amazon DynamoDB. See this package's package overview docs +// for details on the service. +// +// DynamoDB methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. 
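+//
+// For illustration, a minimal sketch of creating one client and sharing it
+// across goroutines (the session import, region, and use of ListTables are
+// assumptions for the example):
+//
+//    sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
+//    svc := dynamodb.New(sess)
+//
+//    // Safe to call from multiple goroutines; do not mutate svc's
+//    // configuration after it is created.
+//    out, err := svc.ListTables(&dynamodb.ListTablesInput{})
+//    if err != nil {
+//        return err
+//    }
+//    fmt.Println(aws.StringValueSlice(out.TableNames))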
+type DynamoDB struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "dynamodb" // Service endpoint prefix API calls made to. + EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. +) + +// New creates a new instance of the DynamoDB client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a DynamoDB client from just a session. +// svc := dynamodb.New(mySession) +// +// // Create a DynamoDB client with additional configuration +// svc := dynamodb.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *DynamoDB { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *DynamoDB { + svc := &DynamoDB{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2012-08-10", + JSONVersion: "1.0", + TargetPrefix: "DynamoDB_20120810", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a DynamoDB operation and runs any +// custom request initialization. +func (c *DynamoDB) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/waiters.go new file mode 100644 index 00000000..ae515f7d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/waiters.go @@ -0,0 +1,107 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dynamodb + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilTableExists uses the DynamoDB API operation +// DescribeTable to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DynamoDB) WaitUntilTableExists(input *DescribeTableInput) error { + return c.WaitUntilTableExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilTableExistsWithContext is an extended version of WaitUntilTableExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) WaitUntilTableExistsWithContext(ctx aws.Context, input *DescribeTableInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilTableExists", + MaxAttempts: 25, + Delay: request.ConstantWaiterDelay(20 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathWaiterMatch, Argument: "Table.TableStatus", + Expected: "ACTIVE", + }, + { + State: request.RetryWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundException", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeTableInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeTableRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilTableNotExists uses the DynamoDB API operation +// DescribeTable to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DynamoDB) WaitUntilTableNotExists(input *DescribeTableInput) error { + return c.WaitUntilTableNotExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilTableNotExistsWithContext is an extended version of WaitUntilTableNotExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) WaitUntilTableNotExistsWithContext(ctx aws.Context, input *DescribeTableInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilTableNotExists", + MaxAttempts: 25, + Delay: request.ConstantWaiterDelay(20 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundException", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeTableInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeTableRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/api.go b/vendor/github.com/aws/aws-sdk-go/service/iam/api.go new file mode 100644 index 00000000..81bab06b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/api.go @@ -0,0 +1,28511 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package iam + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/query" +) + +const opAddClientIDToOpenIDConnectProvider = "AddClientIDToOpenIDConnectProvider" + +// AddClientIDToOpenIDConnectProviderRequest generates a "aws/request.Request" representing the +// client's request for the AddClientIDToOpenIDConnectProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddClientIDToOpenIDConnectProvider for more information on using the AddClientIDToOpenIDConnectProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddClientIDToOpenIDConnectProviderRequest method. +// req, resp := client.AddClientIDToOpenIDConnectProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddClientIDToOpenIDConnectProvider +func (c *IAM) AddClientIDToOpenIDConnectProviderRequest(input *AddClientIDToOpenIDConnectProviderInput) (req *request.Request, output *AddClientIDToOpenIDConnectProviderOutput) { + op := &request.Operation{ + Name: opAddClientIDToOpenIDConnectProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddClientIDToOpenIDConnectProviderInput{} + } + + output = &AddClientIDToOpenIDConnectProviderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AddClientIDToOpenIDConnectProvider API operation for AWS Identity and Access Management. +// +// Adds a new client ID (also known as audience) to the list of client IDs already +// registered for the specified IAM OpenID Connect (OIDC) provider resource. +// +// This operation is idempotent; it does not fail or return an error if you +// add an existing client ID to the provider. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation AddClientIDToOpenIDConnectProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddClientIDToOpenIDConnectProvider +func (c *IAM) AddClientIDToOpenIDConnectProvider(input *AddClientIDToOpenIDConnectProviderInput) (*AddClientIDToOpenIDConnectProviderOutput, error) { + req, out := c.AddClientIDToOpenIDConnectProviderRequest(input) + return out, req.Send() +} + +// AddClientIDToOpenIDConnectProviderWithContext is the same as AddClientIDToOpenIDConnectProvider with the addition of +// the ability to pass a context and additional request options. +// +// See AddClientIDToOpenIDConnectProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) AddClientIDToOpenIDConnectProviderWithContext(ctx aws.Context, input *AddClientIDToOpenIDConnectProviderInput, opts ...request.Option) (*AddClientIDToOpenIDConnectProviderOutput, error) { + req, out := c.AddClientIDToOpenIDConnectProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddRoleToInstanceProfile = "AddRoleToInstanceProfile" + +// AddRoleToInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the AddRoleToInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddRoleToInstanceProfile for more information on using the AddRoleToInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddRoleToInstanceProfileRequest method. +// req, resp := client.AddRoleToInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddRoleToInstanceProfile +func (c *IAM) AddRoleToInstanceProfileRequest(input *AddRoleToInstanceProfileInput) (req *request.Request, output *AddRoleToInstanceProfileOutput) { + op := &request.Operation{ + Name: opAddRoleToInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddRoleToInstanceProfileInput{} + } + + output = &AddRoleToInstanceProfileOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AddRoleToInstanceProfile API operation for AWS Identity and Access Management. +// +// Adds the specified IAM role to the specified instance profile. An instance +// profile can contain only one role, and this limit cannot be increased. You +// can remove the existing role and then add a different role to an instance +// profile. 
You must then wait for the change to appear across all of AWS because +// of eventual consistency (https://en.wikipedia.org/wiki/Eventual_consistency). +// To force the change, you must disassociate the instance profile (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DisassociateIamInstanceProfile.html) +// and then associate the instance profile (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateIamInstanceProfile.html), +// or you can stop your instance and then restart it. +// +// The caller of this API must be granted the PassRole permission on the IAM +// role by a permission policy. +// +// For more information about roles, go to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// For more information about instance profiles, go to About Instance Profiles +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation AddRoleToInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddRoleToInstanceProfile +func (c *IAM) AddRoleToInstanceProfile(input *AddRoleToInstanceProfileInput) (*AddRoleToInstanceProfileOutput, error) { + req, out := c.AddRoleToInstanceProfileRequest(input) + return out, req.Send() +} + +// AddRoleToInstanceProfileWithContext is the same as AddRoleToInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See AddRoleToInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) AddRoleToInstanceProfileWithContext(ctx aws.Context, input *AddRoleToInstanceProfileInput, opts ...request.Option) (*AddRoleToInstanceProfileOutput, error) { + req, out := c.AddRoleToInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opAddUserToGroup = "AddUserToGroup" + +// AddUserToGroupRequest generates a "aws/request.Request" representing the +// client's request for the AddUserToGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddUserToGroup for more information on using the AddUserToGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddUserToGroupRequest method. +// req, resp := client.AddUserToGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddUserToGroup +func (c *IAM) AddUserToGroupRequest(input *AddUserToGroupInput) (req *request.Request, output *AddUserToGroupOutput) { + op := &request.Operation{ + Name: opAddUserToGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddUserToGroupInput{} + } + + output = &AddUserToGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AddUserToGroup API operation for AWS Identity and Access Management. +// +// Adds the specified user to the specified group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation AddUserToGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddUserToGroup +func (c *IAM) AddUserToGroup(input *AddUserToGroupInput) (*AddUserToGroupOutput, error) { + req, out := c.AddUserToGroupRequest(input) + return out, req.Send() +} + +// AddUserToGroupWithContext is the same as AddUserToGroup with the addition of +// the ability to pass a context and additional request options. +// +// See AddUserToGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
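+//
+// Editor's note: the snippet below is an illustrative sketch added for this
+// document; it is not part of the generated SDK source. It assumes svc is an
+// *iam.IAM client (and that the awserr package is imported), and shows the
+// awserr.Error type assertion that the operation docs above refer to:
+//
+//    _, err := svc.AddUserToGroup(&iam.AddUserToGroupInput{
+//        GroupName: aws.String("Admins"),
+//        UserName:  aws.String("alice"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok {
+//        switch aerr.Code() {
+//        case iam.ErrCodeNoSuchEntityException:
+//            // the group or user does not exist
+//        default:
+//            fmt.Println(aerr.Code(), aerr.Message())
+//        }
+//    }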
+func (c *IAM) AddUserToGroupWithContext(ctx aws.Context, input *AddUserToGroupInput, opts ...request.Option) (*AddUserToGroupOutput, error) { + req, out := c.AddUserToGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAttachGroupPolicy = "AttachGroupPolicy" + +// AttachGroupPolicyRequest generates a "aws/request.Request" representing the +// client's request for the AttachGroupPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AttachGroupPolicy for more information on using the AttachGroupPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AttachGroupPolicyRequest method. +// req, resp := client.AttachGroupPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachGroupPolicy +func (c *IAM) AttachGroupPolicyRequest(input *AttachGroupPolicyInput) (req *request.Request, output *AttachGroupPolicyOutput) { + op := &request.Operation{ + Name: opAttachGroupPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AttachGroupPolicyInput{} + } + + output = &AttachGroupPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AttachGroupPolicy API operation for AWS Identity and Access Management. +// +// Attaches the specified managed policy to the specified IAM group. +// +// You use this API to attach a managed policy to a group. To embed an inline +// policy in a group, use PutGroupPolicy. +// +// For more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation AttachGroupPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodePolicyNotAttachableException "PolicyNotAttachable" +// The request failed because AWS service role policies can only be attached +// to the service-linked role for that service. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachGroupPolicy +func (c *IAM) AttachGroupPolicy(input *AttachGroupPolicyInput) (*AttachGroupPolicyOutput, error) { + req, out := c.AttachGroupPolicyRequest(input) + return out, req.Send() +} + +// AttachGroupPolicyWithContext is the same as AttachGroupPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See AttachGroupPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) AttachGroupPolicyWithContext(ctx aws.Context, input *AttachGroupPolicyInput, opts ...request.Option) (*AttachGroupPolicyOutput, error) { + req, out := c.AttachGroupPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAttachRolePolicy = "AttachRolePolicy" + +// AttachRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the AttachRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AttachRolePolicy for more information on using the AttachRolePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AttachRolePolicyRequest method. +// req, resp := client.AttachRolePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachRolePolicy +func (c *IAM) AttachRolePolicyRequest(input *AttachRolePolicyInput) (req *request.Request, output *AttachRolePolicyOutput) { + op := &request.Operation{ + Name: opAttachRolePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AttachRolePolicyInput{} + } + + output = &AttachRolePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AttachRolePolicy API operation for AWS Identity and Access Management. +// +// Attaches the specified managed policy to the specified IAM role. When you +// attach a managed policy to a role, the managed policy becomes part of the +// role's permission (access) policy. +// +// You cannot use a managed policy as the role's trust policy. The role's trust +// policy is created at the same time as the role, using CreateRole. You can +// update a role's trust policy using UpdateAssumeRolePolicy. +// +// Use this API to attach a managed policy to a role. To embed an inline policy +// in a role, use PutRolePolicy. 
For more information about policies, see Managed +// Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation AttachRolePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodePolicyNotAttachableException "PolicyNotAttachable" +// The request failed because AWS service role policies can only be attached +// to the service-linked role for that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachRolePolicy +func (c *IAM) AttachRolePolicy(input *AttachRolePolicyInput) (*AttachRolePolicyOutput, error) { + req, out := c.AttachRolePolicyRequest(input) + return out, req.Send() +} + +// AttachRolePolicyWithContext is the same as AttachRolePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See AttachRolePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) AttachRolePolicyWithContext(ctx aws.Context, input *AttachRolePolicyInput, opts ...request.Option) (*AttachRolePolicyOutput, error) { + req, out := c.AttachRolePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAttachUserPolicy = "AttachUserPolicy" + +// AttachUserPolicyRequest generates a "aws/request.Request" representing the +// client's request for the AttachUserPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AttachUserPolicy for more information on using the AttachUserPolicy +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AttachUserPolicyRequest method. +// req, resp := client.AttachUserPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachUserPolicy +func (c *IAM) AttachUserPolicyRequest(input *AttachUserPolicyInput) (req *request.Request, output *AttachUserPolicyOutput) { + op := &request.Operation{ + Name: opAttachUserPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AttachUserPolicyInput{} + } + + output = &AttachUserPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AttachUserPolicy API operation for AWS Identity and Access Management. +// +// Attaches the specified managed policy to the specified user. +// +// You use this API to attach a managed policy to a user. To embed an inline +// policy in a user, use PutUserPolicy. +// +// For more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation AttachUserPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodePolicyNotAttachableException "PolicyNotAttachable" +// The request failed because AWS service role policies can only be attached +// to the service-linked role for that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachUserPolicy +func (c *IAM) AttachUserPolicy(input *AttachUserPolicyInput) (*AttachUserPolicyOutput, error) { + req, out := c.AttachUserPolicyRequest(input) + return out, req.Send() +} + +// AttachUserPolicyWithContext is the same as AttachUserPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See AttachUserPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
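+//
+// Editor's note: the snippet below is an illustrative sketch added for this
+// document; it is not part of the generated SDK source. It assumes svc is an
+// *iam.IAM client and shows passing a context with a timeout; the same pattern
+// applies to AttachRolePolicyWithContext and AttachGroupPolicyWithContext above:
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//
+//    _, err := svc.AttachUserPolicyWithContext(ctx, &iam.AttachUserPolicyInput{
+//        UserName:  aws.String("alice"),
+//        PolicyArn: aws.String("arn:aws:iam::aws:policy/ReadOnlyAccess"),
+//    })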
+func (c *IAM) AttachUserPolicyWithContext(ctx aws.Context, input *AttachUserPolicyInput, opts ...request.Option) (*AttachUserPolicyOutput, error) { + req, out := c.AttachUserPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opChangePassword = "ChangePassword" + +// ChangePasswordRequest generates a "aws/request.Request" representing the +// client's request for the ChangePassword operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ChangePassword for more information on using the ChangePassword +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ChangePasswordRequest method. +// req, resp := client.ChangePasswordRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ChangePassword +func (c *IAM) ChangePasswordRequest(input *ChangePasswordInput) (req *request.Request, output *ChangePasswordOutput) { + op := &request.Operation{ + Name: opChangePassword, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ChangePasswordInput{} + } + + output = &ChangePasswordOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// ChangePassword API operation for AWS Identity and Access Management. +// +// Changes the password of the IAM user who is calling this operation. The AWS +// account root user password is not affected by this operation. +// +// To change the password for a different user, see UpdateLoginProfile. For +// more information about modifying passwords, see Managing Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ChangePassword for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidUserTypeException "InvalidUserType" +// The request was rejected because the type of user for the transaction was +// incorrect. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. 
The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodePasswordPolicyViolationException "PasswordPolicyViolation" +// The request was rejected because the provided password did not meet the requirements +// imposed by the account password policy. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ChangePassword +func (c *IAM) ChangePassword(input *ChangePasswordInput) (*ChangePasswordOutput, error) { + req, out := c.ChangePasswordRequest(input) + return out, req.Send() +} + +// ChangePasswordWithContext is the same as ChangePassword with the addition of +// the ability to pass a context and additional request options. +// +// See ChangePassword for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ChangePasswordWithContext(ctx aws.Context, input *ChangePasswordInput, opts ...request.Option) (*ChangePasswordOutput, error) { + req, out := c.ChangePasswordRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateAccessKey = "CreateAccessKey" + +// CreateAccessKeyRequest generates a "aws/request.Request" representing the +// client's request for the CreateAccessKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateAccessKey for more information on using the CreateAccessKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAccessKeyRequest method. +// req, resp := client.CreateAccessKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateAccessKey +func (c *IAM) CreateAccessKeyRequest(input *CreateAccessKeyInput) (req *request.Request, output *CreateAccessKeyOutput) { + op := &request.Operation{ + Name: opCreateAccessKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateAccessKeyInput{} + } + + output = &CreateAccessKeyOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateAccessKey API operation for AWS Identity and Access Management. +// +// Creates a new AWS secret access key and corresponding AWS access key ID for +// the specified user. The default status for new keys is Active. +// +// If you do not specify a user name, IAM determines the user name implicitly +// based on the AWS access key ID signing the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials. This is true even if the AWS account +// has no associated users. 
+// +// For information about limits on the number of keys you can create, see Limitations +// on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// To ensure the security of your AWS account, the secret access key is accessible +// only during key and user creation. You must save the key (for example, in +// a text file) if you want to be able to access it again. If a secret key is +// lost, you can delete the access keys for the associated user and then create +// new keys. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateAccessKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateAccessKey +func (c *IAM) CreateAccessKey(input *CreateAccessKeyInput) (*CreateAccessKeyOutput, error) { + req, out := c.CreateAccessKeyRequest(input) + return out, req.Send() +} + +// CreateAccessKeyWithContext is the same as CreateAccessKey with the addition of +// the ability to pass a context and additional request options. +// +// See CreateAccessKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateAccessKeyWithContext(ctx aws.Context, input *CreateAccessKeyInput, opts ...request.Option) (*CreateAccessKeyOutput, error) { + req, out := c.CreateAccessKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateAccountAlias = "CreateAccountAlias" + +// CreateAccountAliasRequest generates a "aws/request.Request" representing the +// client's request for the CreateAccountAlias operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateAccountAlias for more information on using the CreateAccountAlias +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAccountAliasRequest method. 
+// req, resp := client.CreateAccountAliasRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateAccountAlias +func (c *IAM) CreateAccountAliasRequest(input *CreateAccountAliasInput) (req *request.Request, output *CreateAccountAliasOutput) { + op := &request.Operation{ + Name: opCreateAccountAlias, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateAccountAliasInput{} + } + + output = &CreateAccountAliasOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// CreateAccountAlias API operation for AWS Identity and Access Management. +// +// Creates an alias for your AWS account. For information about using an AWS +// account alias, see Using an Alias for Your AWS Account ID (http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateAccountAlias for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateAccountAlias +func (c *IAM) CreateAccountAlias(input *CreateAccountAliasInput) (*CreateAccountAliasOutput, error) { + req, out := c.CreateAccountAliasRequest(input) + return out, req.Send() +} + +// CreateAccountAliasWithContext is the same as CreateAccountAlias with the addition of +// the ability to pass a context and additional request options. +// +// See CreateAccountAlias for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateAccountAliasWithContext(ctx aws.Context, input *CreateAccountAliasInput, opts ...request.Option) (*CreateAccountAliasOutput, error) { + req, out := c.CreateAccountAliasRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateGroup = "CreateGroup" + +// CreateGroupRequest generates a "aws/request.Request" representing the +// client's request for the CreateGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See CreateGroup for more information on using the CreateGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateGroupRequest method. +// req, resp := client.CreateGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateGroup +func (c *IAM) CreateGroupRequest(input *CreateGroupInput) (req *request.Request, output *CreateGroupOutput) { + op := &request.Operation{ + Name: opCreateGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateGroupInput{} + } + + output = &CreateGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateGroup API operation for AWS Identity and Access Management. +// +// Creates a new group. +// +// For information about the number of groups you can create, see Limitations +// on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateGroup +func (c *IAM) CreateGroup(input *CreateGroupInput) (*CreateGroupOutput, error) { + req, out := c.CreateGroupRequest(input) + return out, req.Send() +} + +// CreateGroupWithContext is the same as CreateGroup with the addition of +// the ability to pass a context and additional request options. +// +// See CreateGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateGroupWithContext(ctx aws.Context, input *CreateGroupInput, opts ...request.Option) (*CreateGroupOutput, error) { + req, out := c.CreateGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateInstanceProfile = "CreateInstanceProfile" + +// CreateInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the CreateInstanceProfile operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateInstanceProfile for more information on using the CreateInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateInstanceProfileRequest method. +// req, resp := client.CreateInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateInstanceProfile +func (c *IAM) CreateInstanceProfileRequest(input *CreateInstanceProfileInput) (req *request.Request, output *CreateInstanceProfileOutput) { + op := &request.Operation{ + Name: opCreateInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateInstanceProfileInput{} + } + + output = &CreateInstanceProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateInstanceProfile API operation for AWS Identity and Access Management. +// +// Creates a new instance profile. For information about instance profiles, +// go to About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// +// For information about the number of instance profiles you can create, see +// Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateInstanceProfile +func (c *IAM) CreateInstanceProfile(input *CreateInstanceProfileInput) (*CreateInstanceProfileOutput, error) { + req, out := c.CreateInstanceProfileRequest(input) + return out, req.Send() +} + +// CreateInstanceProfileWithContext is the same as CreateInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See CreateInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *IAM) CreateInstanceProfileWithContext(ctx aws.Context, input *CreateInstanceProfileInput, opts ...request.Option) (*CreateInstanceProfileOutput, error) { + req, out := c.CreateInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateLoginProfile = "CreateLoginProfile" + +// CreateLoginProfileRequest generates a "aws/request.Request" representing the +// client's request for the CreateLoginProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLoginProfile for more information on using the CreateLoginProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLoginProfileRequest method. +// req, resp := client.CreateLoginProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateLoginProfile +func (c *IAM) CreateLoginProfileRequest(input *CreateLoginProfileInput) (req *request.Request, output *CreateLoginProfileOutput) { + op := &request.Operation{ + Name: opCreateLoginProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLoginProfileInput{} + } + + output = &CreateLoginProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateLoginProfile API operation for AWS Identity and Access Management. +// +// Creates a password for the specified user, giving the user the ability to +// access AWS services through the AWS Management Console. For more information +// about managing passwords, see Managing Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateLoginProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodePasswordPolicyViolationException "PasswordPolicyViolation" +// The request was rejected because the provided password did not meet the requirements +// imposed by the account password policy. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateLoginProfile +func (c *IAM) CreateLoginProfile(input *CreateLoginProfileInput) (*CreateLoginProfileOutput, error) { + req, out := c.CreateLoginProfileRequest(input) + return out, req.Send() +} + +// CreateLoginProfileWithContext is the same as CreateLoginProfile with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLoginProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateLoginProfileWithContext(ctx aws.Context, input *CreateLoginProfileInput, opts ...request.Option) (*CreateLoginProfileOutput, error) { + req, out := c.CreateLoginProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateOpenIDConnectProvider = "CreateOpenIDConnectProvider" + +// CreateOpenIDConnectProviderRequest generates a "aws/request.Request" representing the +// client's request for the CreateOpenIDConnectProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateOpenIDConnectProvider for more information on using the CreateOpenIDConnectProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateOpenIDConnectProviderRequest method. +// req, resp := client.CreateOpenIDConnectProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateOpenIDConnectProvider +func (c *IAM) CreateOpenIDConnectProviderRequest(input *CreateOpenIDConnectProviderInput) (req *request.Request, output *CreateOpenIDConnectProviderOutput) { + op := &request.Operation{ + Name: opCreateOpenIDConnectProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateOpenIDConnectProviderInput{} + } + + output = &CreateOpenIDConnectProviderOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateOpenIDConnectProvider API operation for AWS Identity and Access Management. +// +// Creates an IAM entity to describe an identity provider (IdP) that supports +// OpenID Connect (OIDC) (http://openid.net/connect/). +// +// The OIDC provider that you create with this operation can be used as a principal +// in a role's trust policy. Such a policy establishes a trust relationship +// between AWS and the OIDC provider. +// +// When you create the IAM OIDC provider, you specify the following: +// +// * The URL of the OIDC identity provider (IdP) to trust +// +// * A list of client IDs (also known as audiences) that identify the application +// or applications that are allowed to authenticate using the OIDC provider +// +// * A list of thumbprints of the server certificate(s) that the IdP uses. 
+// +// You get all of this information from the OIDC IdP that you want to use to +// access AWS. +// +// Because trust for the OIDC provider is derived from the IAM provider that +// this operation creates, it is best to limit access to the CreateOpenIDConnectProvider +// operation to highly privileged users. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateOpenIDConnectProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateOpenIDConnectProvider +func (c *IAM) CreateOpenIDConnectProvider(input *CreateOpenIDConnectProviderInput) (*CreateOpenIDConnectProviderOutput, error) { + req, out := c.CreateOpenIDConnectProviderRequest(input) + return out, req.Send() +} + +// CreateOpenIDConnectProviderWithContext is the same as CreateOpenIDConnectProvider with the addition of +// the ability to pass a context and additional request options. +// +// See CreateOpenIDConnectProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateOpenIDConnectProviderWithContext(ctx aws.Context, input *CreateOpenIDConnectProviderInput, opts ...request.Option) (*CreateOpenIDConnectProviderOutput, error) { + req, out := c.CreateOpenIDConnectProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreatePolicy = "CreatePolicy" + +// CreatePolicyRequest generates a "aws/request.Request" representing the +// client's request for the CreatePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreatePolicy for more information on using the CreatePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreatePolicyRequest method. 
+// req, resp := client.CreatePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreatePolicy +func (c *IAM) CreatePolicyRequest(input *CreatePolicyInput) (req *request.Request, output *CreatePolicyOutput) { + op := &request.Operation{ + Name: opCreatePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreatePolicyInput{} + } + + output = &CreatePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreatePolicy API operation for AWS Identity and Access Management. +// +// Creates a new managed policy for your AWS account. +// +// This operation creates a policy version with a version identifier of v1 and +// sets v1 as the policy's default version. For more information about policy +// versions, see Versioning for Managed Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) +// in the IAM User Guide. +// +// For more information about managed policies in general, see Managed Policies +// and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreatePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreatePolicy +func (c *IAM) CreatePolicy(input *CreatePolicyInput) (*CreatePolicyOutput, error) { + req, out := c.CreatePolicyRequest(input) + return out, req.Send() +} + +// CreatePolicyWithContext is the same as CreatePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See CreatePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreatePolicyWithContext(ctx aws.Context, input *CreatePolicyInput, opts ...request.Option) (*CreatePolicyOutput, error) { + req, out := c.CreatePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opCreatePolicyVersion = "CreatePolicyVersion" + +// CreatePolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the CreatePolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreatePolicyVersion for more information on using the CreatePolicyVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreatePolicyVersionRequest method. +// req, resp := client.CreatePolicyVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreatePolicyVersion +func (c *IAM) CreatePolicyVersionRequest(input *CreatePolicyVersionInput) (req *request.Request, output *CreatePolicyVersionOutput) { + op := &request.Operation{ + Name: opCreatePolicyVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreatePolicyVersionInput{} + } + + output = &CreatePolicyVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreatePolicyVersion API operation for AWS Identity and Access Management. +// +// Creates a new version of the specified managed policy. To update a managed +// policy, you create a new policy version. A managed policy can have up to +// five versions. If the policy has five versions, you must delete an existing +// version using DeletePolicyVersion before you create a new version. +// +// Optionally, you can set the new version as the policy's default version. +// The default version is the version that is in effect for the IAM users, groups, +// and roles to which the policy is attached. +// +// For more information about managed policy versions, see Versioning for Managed +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreatePolicyVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreatePolicyVersion +func (c *IAM) CreatePolicyVersion(input *CreatePolicyVersionInput) (*CreatePolicyVersionOutput, error) { + req, out := c.CreatePolicyVersionRequest(input) + return out, req.Send() +} + +// CreatePolicyVersionWithContext is the same as CreatePolicyVersion with the addition of +// the ability to pass a context and additional request options. +// +// See CreatePolicyVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreatePolicyVersionWithContext(ctx aws.Context, input *CreatePolicyVersionInput, opts ...request.Option) (*CreatePolicyVersionOutput, error) { + req, out := c.CreatePolicyVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateRole = "CreateRole" + +// CreateRoleRequest generates a "aws/request.Request" representing the +// client's request for the CreateRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateRole for more information on using the CreateRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateRoleRequest method. +// req, resp := client.CreateRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateRole +func (c *IAM) CreateRoleRequest(input *CreateRoleInput) (req *request.Request, output *CreateRoleOutput) { + op := &request.Operation{ + Name: opCreateRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateRoleInput{} + } + + output = &CreateRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateRole API operation for AWS Identity and Access Management. +// +// Creates a new role for your AWS account. For more information about roles, +// go to IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// For information about limitations on role names and the number of roles you +// can create, go to Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateRole for usage and error information. 
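+//
+// A minimal usage sketch, assuming an existing *IAM client ("svc") and an
+// illustrative role name; the trust policy below simply allows EC2 to assume
+// the role:
+//
+//    trustPolicy := `{
+//      "Version": "2012-10-17",
+//      "Statement": [{
+//        "Effect": "Allow",
+//        "Principal": {"Service": "ec2.amazonaws.com"},
+//        "Action": "sts:AssumeRole"
+//      }]
+//    }`
+//    role, err := svc.CreateRole(&iam.CreateRoleInput{
+//        RoleName:                 aws.String("example-app-role"),
+//        AssumeRolePolicyDocument: aws.String(trustPolicy),
+//        Description:              aws.String("Example role created via the IAM API"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(role.Role.Arn))
+//    }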
+// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateRole +func (c *IAM) CreateRole(input *CreateRoleInput) (*CreateRoleOutput, error) { + req, out := c.CreateRoleRequest(input) + return out, req.Send() +} + +// CreateRoleWithContext is the same as CreateRole with the addition of +// the ability to pass a context and additional request options. +// +// See CreateRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateRoleWithContext(ctx aws.Context, input *CreateRoleInput, opts ...request.Option) (*CreateRoleOutput, error) { + req, out := c.CreateRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateSAMLProvider = "CreateSAMLProvider" + +// CreateSAMLProviderRequest generates a "aws/request.Request" representing the +// client's request for the CreateSAMLProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSAMLProvider for more information on using the CreateSAMLProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSAMLProviderRequest method. +// req, resp := client.CreateSAMLProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateSAMLProvider +func (c *IAM) CreateSAMLProviderRequest(input *CreateSAMLProviderInput) (req *request.Request, output *CreateSAMLProviderOutput) { + op := &request.Operation{ + Name: opCreateSAMLProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSAMLProviderInput{} + } + + output = &CreateSAMLProviderOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSAMLProvider API operation for AWS Identity and Access Management. +// +// Creates an IAM resource that describes an identity provider (IdP) that supports +// SAML 2.0. 
+// +// The SAML provider resource that you create with this operation can be used +// as a principal in an IAM role's trust policy. Such a policy can enable federated +// users who sign-in using the SAML IdP to assume the role. You can create an +// IAM role that supports Web-based single sign-on (SSO) to the AWS Management +// Console or one that supports API access to AWS. +// +// When you create the SAML provider resource, you upload a SAML metadata document +// that you get from your IdP. That document includes the issuer's name, expiration +// information, and keys that can be used to validate the SAML authentication +// response (assertions) that the IdP sends. You must generate the metadata +// document using the identity management software that is used as your organization's +// IdP. +// +// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// +// For more information, see Enabling SAML 2.0 Federated Users to Access the +// AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html) +// and About SAML 2.0-based Federation (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateSAMLProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateSAMLProvider +func (c *IAM) CreateSAMLProvider(input *CreateSAMLProviderInput) (*CreateSAMLProviderOutput, error) { + req, out := c.CreateSAMLProviderRequest(input) + return out, req.Send() +} + +// CreateSAMLProviderWithContext is the same as CreateSAMLProvider with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSAMLProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateSAMLProviderWithContext(ctx aws.Context, input *CreateSAMLProviderInput, opts ...request.Option) (*CreateSAMLProviderOutput, error) { + req, out := c.CreateSAMLProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opCreateServiceLinkedRole = "CreateServiceLinkedRole" + +// CreateServiceLinkedRoleRequest generates a "aws/request.Request" representing the +// client's request for the CreateServiceLinkedRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateServiceLinkedRole for more information on using the CreateServiceLinkedRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateServiceLinkedRoleRequest method. +// req, resp := client.CreateServiceLinkedRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateServiceLinkedRole +func (c *IAM) CreateServiceLinkedRoleRequest(input *CreateServiceLinkedRoleInput) (req *request.Request, output *CreateServiceLinkedRoleOutput) { + op := &request.Operation{ + Name: opCreateServiceLinkedRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateServiceLinkedRoleInput{} + } + + output = &CreateServiceLinkedRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateServiceLinkedRole API operation for AWS Identity and Access Management. +// +// Creates an IAM role that is linked to a specific AWS service. The service +// controls the attached policies and when the role can be deleted. This helps +// ensure that the service is not broken by an unexpectedly changed or deleted +// role, which could put your AWS resources into an unknown state. Allowing +// the service to control the role helps improve service stability and proper +// cleanup when a service and its role are no longer needed. +// +// The name of the role is generated by combining the string that you specify +// for the AWSServiceName parameter with the string that you specify for the +// CustomSuffix parameter. The resulting name must be unique in your account +// or the request fails. +// +// To attach a policy to this service-linked role, you must make the request +// using the AWS service that depends on this role. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateServiceLinkedRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateServiceLinkedRole +func (c *IAM) CreateServiceLinkedRole(input *CreateServiceLinkedRoleInput) (*CreateServiceLinkedRoleOutput, error) { + req, out := c.CreateServiceLinkedRoleRequest(input) + return out, req.Send() +} + +// CreateServiceLinkedRoleWithContext is the same as CreateServiceLinkedRole with the addition of +// the ability to pass a context and additional request options. +// +// See CreateServiceLinkedRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateServiceLinkedRoleWithContext(ctx aws.Context, input *CreateServiceLinkedRoleInput, opts ...request.Option) (*CreateServiceLinkedRoleOutput, error) { + req, out := c.CreateServiceLinkedRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateServiceSpecificCredential = "CreateServiceSpecificCredential" + +// CreateServiceSpecificCredentialRequest generates a "aws/request.Request" representing the +// client's request for the CreateServiceSpecificCredential operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateServiceSpecificCredential for more information on using the CreateServiceSpecificCredential +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateServiceSpecificCredentialRequest method. +// req, resp := client.CreateServiceSpecificCredentialRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateServiceSpecificCredential +func (c *IAM) CreateServiceSpecificCredentialRequest(input *CreateServiceSpecificCredentialInput) (req *request.Request, output *CreateServiceSpecificCredentialOutput) { + op := &request.Operation{ + Name: opCreateServiceSpecificCredential, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateServiceSpecificCredentialInput{} + } + + output = &CreateServiceSpecificCredentialOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateServiceSpecificCredential API operation for AWS Identity and Access Management. +// +// Generates a set of credentials consisting of a user name and password that +// can be used to access the service specified in the request. These credentials +// are generated by IAM, and can be used only for the specified service. +// +// You can have a maximum of two sets of service-specific credentials for each +// supported service per user. +// +// The only supported service at this time is AWS CodeCommit. 
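+//
+// A minimal usage sketch, assuming an existing *IAM client ("svc") and a user
+// named "example-user"; it requests Git credentials for CodeCommit and prints
+// the generated service user name:
+//
+//    out, err := svc.CreateServiceSpecificCredential(&iam.CreateServiceSpecificCredentialInput{
+//        UserName:    aws.String("example-user"),
+//        ServiceName: aws.String("codecommit.amazonaws.com"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.ServiceSpecificCredential.ServiceUserName))
+//        // out.ServiceSpecificCredential.ServicePassword is only returned in this response
+//    }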
+// +// You can reset the password to a new service-generated value by calling ResetServiceSpecificCredential. +// +// For more information about service-specific credentials, see Using IAM with +// AWS CodeCommit: Git Credentials, SSH Keys, and AWS Access Keys (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_ssh-keys.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateServiceSpecificCredential for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceNotSupportedException "NotSupportedService" +// The specified service does not support service-specific credentials. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateServiceSpecificCredential +func (c *IAM) CreateServiceSpecificCredential(input *CreateServiceSpecificCredentialInput) (*CreateServiceSpecificCredentialOutput, error) { + req, out := c.CreateServiceSpecificCredentialRequest(input) + return out, req.Send() +} + +// CreateServiceSpecificCredentialWithContext is the same as CreateServiceSpecificCredential with the addition of +// the ability to pass a context and additional request options. +// +// See CreateServiceSpecificCredential for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateServiceSpecificCredentialWithContext(ctx aws.Context, input *CreateServiceSpecificCredentialInput, opts ...request.Option) (*CreateServiceSpecificCredentialOutput, error) { + req, out := c.CreateServiceSpecificCredentialRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateUser = "CreateUser" + +// CreateUserRequest generates a "aws/request.Request" representing the +// client's request for the CreateUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateUser for more information on using the CreateUser +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateUserRequest method. 
+// req, resp := client.CreateUserRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateUser +func (c *IAM) CreateUserRequest(input *CreateUserInput) (req *request.Request, output *CreateUserOutput) { + op := &request.Operation{ + Name: opCreateUser, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateUserInput{} + } + + output = &CreateUserOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateUser API operation for AWS Identity and Access Management. +// +// Creates a new IAM user for your AWS account. +// +// For information about limitations on the number of IAM users you can create, +// see Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateUser for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateUser +func (c *IAM) CreateUser(input *CreateUserInput) (*CreateUserOutput, error) { + req, out := c.CreateUserRequest(input) + return out, req.Send() +} + +// CreateUserWithContext is the same as CreateUser with the addition of +// the ability to pass a context and additional request options. +// +// See CreateUser for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateUserWithContext(ctx aws.Context, input *CreateUserInput, opts ...request.Option) (*CreateUserOutput, error) { + req, out := c.CreateUserRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateVirtualMFADevice = "CreateVirtualMFADevice" + +// CreateVirtualMFADeviceRequest generates a "aws/request.Request" representing the +// client's request for the CreateVirtualMFADevice operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See CreateVirtualMFADevice for more information on using the CreateVirtualMFADevice +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateVirtualMFADeviceRequest method. +// req, resp := client.CreateVirtualMFADeviceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateVirtualMFADevice +func (c *IAM) CreateVirtualMFADeviceRequest(input *CreateVirtualMFADeviceInput) (req *request.Request, output *CreateVirtualMFADeviceOutput) { + op := &request.Operation{ + Name: opCreateVirtualMFADevice, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateVirtualMFADeviceInput{} + } + + output = &CreateVirtualMFADeviceOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateVirtualMFADevice API operation for AWS Identity and Access Management. +// +// Creates a new virtual MFA device for the AWS account. After creating the +// virtual MFA, use EnableMFADevice to attach the MFA device to an IAM user. +// For more information about creating and working with virtual MFA devices, +// go to Using a Virtual MFA Device (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html) +// in the IAM User Guide. +// +// For information about limits on the number of MFA devices you can create, +// see Limitations on Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// The seed information contained in the QR code and the Base32 string should +// be treated like any other secret access information, such as your AWS access +// keys or your passwords. After you provision your virtual device, you should +// ensure that the information is destroyed following secure procedures. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation CreateVirtualMFADevice for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateVirtualMFADevice +func (c *IAM) CreateVirtualMFADevice(input *CreateVirtualMFADeviceInput) (*CreateVirtualMFADeviceOutput, error) { + req, out := c.CreateVirtualMFADeviceRequest(input) + return out, req.Send() +} + +// CreateVirtualMFADeviceWithContext is the same as CreateVirtualMFADevice with the addition of +// the ability to pass a context and additional request options. +// +// See CreateVirtualMFADevice for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) CreateVirtualMFADeviceWithContext(ctx aws.Context, input *CreateVirtualMFADeviceInput, opts ...request.Option) (*CreateVirtualMFADeviceOutput, error) { + req, out := c.CreateVirtualMFADeviceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeactivateMFADevice = "DeactivateMFADevice" + +// DeactivateMFADeviceRequest generates a "aws/request.Request" representing the +// client's request for the DeactivateMFADevice operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeactivateMFADevice for more information on using the DeactivateMFADevice +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeactivateMFADeviceRequest method. +// req, resp := client.DeactivateMFADeviceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeactivateMFADevice +func (c *IAM) DeactivateMFADeviceRequest(input *DeactivateMFADeviceInput) (req *request.Request, output *DeactivateMFADeviceOutput) { + op := &request.Operation{ + Name: opDeactivateMFADevice, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeactivateMFADeviceInput{} + } + + output = &DeactivateMFADeviceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeactivateMFADevice API operation for AWS Identity and Access Management. +// +// Deactivates the specified MFA device and removes it from association with +// the user name for which it was originally enabled. +// +// For more information about creating and working with virtual MFA devices, +// go to Using a Virtual MFA Device (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeactivateMFADevice for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeactivateMFADevice +func (c *IAM) DeactivateMFADevice(input *DeactivateMFADeviceInput) (*DeactivateMFADeviceOutput, error) { + req, out := c.DeactivateMFADeviceRequest(input) + return out, req.Send() +} + +// DeactivateMFADeviceWithContext is the same as DeactivateMFADevice with the addition of +// the ability to pass a context and additional request options. +// +// See DeactivateMFADevice for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeactivateMFADeviceWithContext(ctx aws.Context, input *DeactivateMFADeviceInput, opts ...request.Option) (*DeactivateMFADeviceOutput, error) { + req, out := c.DeactivateMFADeviceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteAccessKey = "DeleteAccessKey" + +// DeleteAccessKeyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAccessKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAccessKey for more information on using the DeleteAccessKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAccessKeyRequest method. +// req, resp := client.DeleteAccessKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccessKey +func (c *IAM) DeleteAccessKeyRequest(input *DeleteAccessKeyInput) (req *request.Request, output *DeleteAccessKeyOutput) { + op := &request.Operation{ + Name: opDeleteAccessKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteAccessKeyInput{} + } + + output = &DeleteAccessKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteAccessKey API operation for AWS Identity and Access Management. +// +// Deletes the access key pair associated with the specified IAM user. +// +// If you do not specify a user name, IAM determines the user name implicitly +// based on the AWS access key ID signing the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. 
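+//
+// A minimal usage sketch, assuming an existing *IAM client ("svc"), a user
+// named "example-user", and an illustrative access key ID:
+//
+//    _, err := svc.DeleteAccessKey(&iam.DeleteAccessKeyInput{
+//        UserName:    aws.String("example-user"),
+//        AccessKeyId: aws.String("AKIAIOSFODNN7EXAMPLE"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == iam.ErrCodeNoSuchEntityException {
+//        // the key was already deleted or never existed
+//    }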
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteAccessKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccessKey +func (c *IAM) DeleteAccessKey(input *DeleteAccessKeyInput) (*DeleteAccessKeyOutput, error) { + req, out := c.DeleteAccessKeyRequest(input) + return out, req.Send() +} + +// DeleteAccessKeyWithContext is the same as DeleteAccessKey with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAccessKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteAccessKeyWithContext(ctx aws.Context, input *DeleteAccessKeyInput, opts ...request.Option) (*DeleteAccessKeyOutput, error) { + req, out := c.DeleteAccessKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteAccountAlias = "DeleteAccountAlias" + +// DeleteAccountAliasRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAccountAlias operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAccountAlias for more information on using the DeleteAccountAlias +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAccountAliasRequest method. 
+// req, resp := client.DeleteAccountAliasRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccountAlias +func (c *IAM) DeleteAccountAliasRequest(input *DeleteAccountAliasInput) (req *request.Request, output *DeleteAccountAliasOutput) { + op := &request.Operation{ + Name: opDeleteAccountAlias, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteAccountAliasInput{} + } + + output = &DeleteAccountAliasOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteAccountAlias API operation for AWS Identity and Access Management. +// +// Deletes the specified AWS account alias. For information about using an AWS +// account alias, see Using an Alias for Your AWS Account ID (http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteAccountAlias for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccountAlias +func (c *IAM) DeleteAccountAlias(input *DeleteAccountAliasInput) (*DeleteAccountAliasOutput, error) { + req, out := c.DeleteAccountAliasRequest(input) + return out, req.Send() +} + +// DeleteAccountAliasWithContext is the same as DeleteAccountAlias with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAccountAlias for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteAccountAliasWithContext(ctx aws.Context, input *DeleteAccountAliasInput, opts ...request.Option) (*DeleteAccountAliasOutput, error) { + req, out := c.DeleteAccountAliasRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteAccountPasswordPolicy = "DeleteAccountPasswordPolicy" + +// DeleteAccountPasswordPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAccountPasswordPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAccountPasswordPolicy for more information on using the DeleteAccountPasswordPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAccountPasswordPolicyRequest method. +// req, resp := client.DeleteAccountPasswordPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccountPasswordPolicy +func (c *IAM) DeleteAccountPasswordPolicyRequest(input *DeleteAccountPasswordPolicyInput) (req *request.Request, output *DeleteAccountPasswordPolicyOutput) { + op := &request.Operation{ + Name: opDeleteAccountPasswordPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteAccountPasswordPolicyInput{} + } + + output = &DeleteAccountPasswordPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteAccountPasswordPolicy API operation for AWS Identity and Access Management. +// +// Deletes the password policy for the AWS account. There are no parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteAccountPasswordPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccountPasswordPolicy +func (c *IAM) DeleteAccountPasswordPolicy(input *DeleteAccountPasswordPolicyInput) (*DeleteAccountPasswordPolicyOutput, error) { + req, out := c.DeleteAccountPasswordPolicyRequest(input) + return out, req.Send() +} + +// DeleteAccountPasswordPolicyWithContext is the same as DeleteAccountPasswordPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAccountPasswordPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteAccountPasswordPolicyWithContext(ctx aws.Context, input *DeleteAccountPasswordPolicyInput, opts ...request.Option) (*DeleteAccountPasswordPolicyOutput, error) { + req, out := c.DeleteAccountPasswordPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteGroup = "DeleteGroup" + +// DeleteGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteGroup for more information on using the DeleteGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteGroupRequest method. +// req, resp := client.DeleteGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteGroup +func (c *IAM) DeleteGroupRequest(input *DeleteGroupInput) (req *request.Request, output *DeleteGroupOutput) { + op := &request.Operation{ + Name: opDeleteGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteGroupInput{} + } + + output = &DeleteGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteGroup API operation for AWS Identity and Access Management. +// +// Deletes the specified IAM group. The group must not contain any users or +// have any attached policies. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteGroup +func (c *IAM) DeleteGroup(input *DeleteGroupInput) (*DeleteGroupOutput, error) { + req, out := c.DeleteGroupRequest(input) + return out, req.Send() +} + +// DeleteGroupWithContext is the same as DeleteGroup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteGroupWithContext(ctx aws.Context, input *DeleteGroupInput, opts ...request.Option) (*DeleteGroupOutput, error) { + req, out := c.DeleteGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteGroupPolicy = "DeleteGroupPolicy" + +// DeleteGroupPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteGroupPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteGroupPolicy for more information on using the DeleteGroupPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteGroupPolicyRequest method. +// req, resp := client.DeleteGroupPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteGroupPolicy +func (c *IAM) DeleteGroupPolicyRequest(input *DeleteGroupPolicyInput) (req *request.Request, output *DeleteGroupPolicyOutput) { + op := &request.Operation{ + Name: opDeleteGroupPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteGroupPolicyInput{} + } + + output = &DeleteGroupPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteGroupPolicy API operation for AWS Identity and Access Management. +// +// Deletes the specified inline policy that is embedded in the specified IAM +// group. +// +// A group can also have managed policies attached to it. To detach a managed +// policy from a group, use DetachGroupPolicy. For more information about policies, +// refer to Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteGroupPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
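+//
+// A minimal sketch of a DeleteGroupPolicy call, assuming "svc" is an
+// *iam.IAM client and the group and policy names are placeholders:
+//
+//    _, err := svc.DeleteGroupPolicy(&iam.DeleteGroupPolicyInput{
+//        GroupName:  aws.String("example-group"),         // hypothetical group name
+//        PolicyName: aws.String("example-inline-policy"), // hypothetical inline policy name
+//    })
+//    if err != nil {
+//        // Inspect err as an awserr.Error for codes such as "NoSuchEntity".
+//    }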
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteGroupPolicy +func (c *IAM) DeleteGroupPolicy(input *DeleteGroupPolicyInput) (*DeleteGroupPolicyOutput, error) { + req, out := c.DeleteGroupPolicyRequest(input) + return out, req.Send() +} + +// DeleteGroupPolicyWithContext is the same as DeleteGroupPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteGroupPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteGroupPolicyWithContext(ctx aws.Context, input *DeleteGroupPolicyInput, opts ...request.Option) (*DeleteGroupPolicyOutput, error) { + req, out := c.DeleteGroupPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteInstanceProfile = "DeleteInstanceProfile" + +// DeleteInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the DeleteInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteInstanceProfile for more information on using the DeleteInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteInstanceProfileRequest method. +// req, resp := client.DeleteInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteInstanceProfile +func (c *IAM) DeleteInstanceProfileRequest(input *DeleteInstanceProfileInput) (req *request.Request, output *DeleteInstanceProfileOutput) { + op := &request.Operation{ + Name: opDeleteInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteInstanceProfileInput{} + } + + output = &DeleteInstanceProfileOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteInstanceProfile API operation for AWS Identity and Access Management. +// +// Deletes the specified instance profile. The instance profile must not have +// an associated role. +// +// Make sure that you do not have any Amazon EC2 instances running with the +// instance profile you are about to delete. Deleting a role or instance profile +// that is associated with a running instance will break any applications running +// on the instance. +// +// For more information about instance profiles, go to About Instance Profiles +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
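+//
+// A minimal sketch of the order of operations, assuming "svc" is an *iam.IAM
+// client and the profile and role names are placeholders (error handling
+// omitted): remove the associated role first, then delete the empty profile.
+//
+//    svc.RemoveRoleFromInstanceProfile(&iam.RemoveRoleFromInstanceProfileInput{
+//        InstanceProfileName: aws.String("example-profile"),
+//        RoleName:            aws.String("example-role"),
+//    })
+//    svc.DeleteInstanceProfile(&iam.DeleteInstanceProfileInput{
+//        InstanceProfileName: aws.String("example-profile"),
+//    })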
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteInstanceProfile +func (c *IAM) DeleteInstanceProfile(input *DeleteInstanceProfileInput) (*DeleteInstanceProfileOutput, error) { + req, out := c.DeleteInstanceProfileRequest(input) + return out, req.Send() +} + +// DeleteInstanceProfileWithContext is the same as DeleteInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteInstanceProfileWithContext(ctx aws.Context, input *DeleteInstanceProfileInput, opts ...request.Option) (*DeleteInstanceProfileOutput, error) { + req, out := c.DeleteInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteLoginProfile = "DeleteLoginProfile" + +// DeleteLoginProfileRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLoginProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLoginProfile for more information on using the DeleteLoginProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLoginProfileRequest method. 
+// req, resp := client.DeleteLoginProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteLoginProfile +func (c *IAM) DeleteLoginProfileRequest(input *DeleteLoginProfileInput) (req *request.Request, output *DeleteLoginProfileOutput) { + op := &request.Operation{ + Name: opDeleteLoginProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLoginProfileInput{} + } + + output = &DeleteLoginProfileOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteLoginProfile API operation for AWS Identity and Access Management. +// +// Deletes the password for the specified IAM user, which terminates the user's +// ability to access AWS services through the AWS Management Console. +// +// Deleting a user's password does not prevent a user from accessing AWS through +// the command line interface or the API. To prevent all user access you must +// also either make any access keys inactive or delete them. For more information +// about making keys inactive or deleting them, see UpdateAccessKey and DeleteAccessKey. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteLoginProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteLoginProfile +func (c *IAM) DeleteLoginProfile(input *DeleteLoginProfileInput) (*DeleteLoginProfileOutput, error) { + req, out := c.DeleteLoginProfileRequest(input) + return out, req.Send() +} + +// DeleteLoginProfileWithContext is the same as DeleteLoginProfile with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLoginProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
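+//
+// A minimal sketch of passing a cancellation context, assuming "svc" is an
+// *iam.IAM client and the user name is a placeholder:
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    _, err := svc.DeleteLoginProfileWithContext(ctx, &iam.DeleteLoginProfileInput{
+//        UserName: aws.String("example-user"),
+//    })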
+func (c *IAM) DeleteLoginProfileWithContext(ctx aws.Context, input *DeleteLoginProfileInput, opts ...request.Option) (*DeleteLoginProfileOutput, error) { + req, out := c.DeleteLoginProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteOpenIDConnectProvider = "DeleteOpenIDConnectProvider" + +// DeleteOpenIDConnectProviderRequest generates a "aws/request.Request" representing the +// client's request for the DeleteOpenIDConnectProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteOpenIDConnectProvider for more information on using the DeleteOpenIDConnectProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteOpenIDConnectProviderRequest method. +// req, resp := client.DeleteOpenIDConnectProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteOpenIDConnectProvider +func (c *IAM) DeleteOpenIDConnectProviderRequest(input *DeleteOpenIDConnectProviderInput) (req *request.Request, output *DeleteOpenIDConnectProviderOutput) { + op := &request.Operation{ + Name: opDeleteOpenIDConnectProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteOpenIDConnectProviderInput{} + } + + output = &DeleteOpenIDConnectProviderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteOpenIDConnectProvider API operation for AWS Identity and Access Management. +// +// Deletes an OpenID Connect identity provider (IdP) resource object in IAM. +// +// Deleting an IAM OIDC provider resource does not update any roles that reference +// the provider as a principal in their trust policies. Any attempt to assume +// a role that references a deleted provider fails. +// +// This operation is idempotent; it does not fail or return an error if you +// call the operation for a provider that does not exist. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteOpenIDConnectProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
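+//
+// A minimal sketch, assuming "svc" is an *iam.IAM client and the provider ARN
+// is a placeholder; per the idempotency note above, the call does not fail if
+// the provider has already been deleted:
+//
+//    _, err := svc.DeleteOpenIDConnectProvider(&iam.DeleteOpenIDConnectProviderInput{
+//        OpenIDConnectProviderArn: aws.String("arn:aws:iam::123456789012:oidc-provider/example.com"),
+//    })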
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteOpenIDConnectProvider +func (c *IAM) DeleteOpenIDConnectProvider(input *DeleteOpenIDConnectProviderInput) (*DeleteOpenIDConnectProviderOutput, error) { + req, out := c.DeleteOpenIDConnectProviderRequest(input) + return out, req.Send() +} + +// DeleteOpenIDConnectProviderWithContext is the same as DeleteOpenIDConnectProvider with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteOpenIDConnectProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteOpenIDConnectProviderWithContext(ctx aws.Context, input *DeleteOpenIDConnectProviderInput, opts ...request.Option) (*DeleteOpenIDConnectProviderOutput, error) { + req, out := c.DeleteOpenIDConnectProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeletePolicy = "DeletePolicy" + +// DeletePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeletePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeletePolicy for more information on using the DeletePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeletePolicyRequest method. +// req, resp := client.DeletePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeletePolicy +func (c *IAM) DeletePolicyRequest(input *DeletePolicyInput) (req *request.Request, output *DeletePolicyOutput) { + op := &request.Operation{ + Name: opDeletePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeletePolicyInput{} + } + + output = &DeletePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeletePolicy API operation for AWS Identity and Access Management. +// +// Deletes the specified managed policy. +// +// Before you can delete a managed policy, you must first detach the policy +// from all users, groups, and roles that it is attached to. In addition you +// must delete all the policy's versions. The following steps describe the process +// for deleting a managed policy: +// +// * Detach the policy from all users, groups, and roles that the policy +// is attached to, using the DetachUserPolicy, DetachGroupPolicy, or DetachRolePolicy +// API operations. To list all the users, groups, and roles that a policy +// is attached to, use ListEntitiesForPolicy. +// +// * Delete all versions of the policy using DeletePolicyVersion. To list +// the policy's versions, use ListPolicyVersions. 
You cannot use DeletePolicyVersion +// to delete the version that is marked as the default version. You delete +// the policy's default version in the next step of the process. +// +// * Delete the policy (this automatically deletes the policy's default version) +// using this API. +// +// For information about managed policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeletePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeletePolicy +func (c *IAM) DeletePolicy(input *DeletePolicyInput) (*DeletePolicyOutput, error) { + req, out := c.DeletePolicyRequest(input) + return out, req.Send() +} + +// DeletePolicyWithContext is the same as DeletePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeletePolicyWithContext(ctx aws.Context, input *DeletePolicyInput, opts ...request.Option) (*DeletePolicyOutput, error) { + req, out := c.DeletePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeletePolicyVersion = "DeletePolicyVersion" + +// DeletePolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the DeletePolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeletePolicyVersion for more information on using the DeletePolicyVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DeletePolicyVersionRequest method. +// req, resp := client.DeletePolicyVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeletePolicyVersion +func (c *IAM) DeletePolicyVersionRequest(input *DeletePolicyVersionInput) (req *request.Request, output *DeletePolicyVersionOutput) { + op := &request.Operation{ + Name: opDeletePolicyVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeletePolicyVersionInput{} + } + + output = &DeletePolicyVersionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeletePolicyVersion API operation for AWS Identity and Access Management. +// +// Deletes the specified version from the specified managed policy. +// +// You cannot delete the default version from a policy using this API. To delete +// the default version from a policy, use DeletePolicy. To find out which version +// of a policy is marked as the default version, use ListPolicyVersions. +// +// For information about versions for managed policies, see Versioning for Managed +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeletePolicyVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeletePolicyVersion +func (c *IAM) DeletePolicyVersion(input *DeletePolicyVersionInput) (*DeletePolicyVersionOutput, error) { + req, out := c.DeletePolicyVersionRequest(input) + return out, req.Send() +} + +// DeletePolicyVersionWithContext is the same as DeletePolicyVersion with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePolicyVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
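+//
+// A minimal sketch of the version cleanup that precedes deleting a managed
+// policy, assuming "svc" is an *iam.IAM client, the ARN is a placeholder, and
+// the policy is already detached from all users, groups, and roles:
+//
+//    arn := aws.String("arn:aws:iam::123456789012:policy/example-policy")
+//    vers, err := svc.ListPolicyVersions(&iam.ListPolicyVersionsInput{PolicyArn: arn})
+//    if err == nil {
+//        for _, v := range vers.Versions {
+//            if !aws.BoolValue(v.IsDefaultVersion) {
+//                // Delete each non-default version; the default version is
+//                // removed by the DeletePolicy call below.
+//                svc.DeletePolicyVersion(&iam.DeletePolicyVersionInput{PolicyArn: arn, VersionId: v.VersionId})
+//            }
+//        }
+//        svc.DeletePolicy(&iam.DeletePolicyInput{PolicyArn: arn})
+//    }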
+func (c *IAM) DeletePolicyVersionWithContext(ctx aws.Context, input *DeletePolicyVersionInput, opts ...request.Option) (*DeletePolicyVersionOutput, error) { + req, out := c.DeletePolicyVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteRole = "DeleteRole" + +// DeleteRoleRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteRole for more information on using the DeleteRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteRoleRequest method. +// req, resp := client.DeleteRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRole +func (c *IAM) DeleteRoleRequest(input *DeleteRoleInput) (req *request.Request, output *DeleteRoleOutput) { + op := &request.Operation{ + Name: opDeleteRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteRoleInput{} + } + + output = &DeleteRoleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteRole API operation for AWS Identity and Access Management. +// +// Deletes the specified role. The role must not have any policies attached. +// For more information about roles, go to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// +// Make sure that you do not have any Amazon EC2 instances running with the +// role you are about to delete. Deleting a role or instance profile that is +// associated with a running instance will break any applications running on +// the instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. 
The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRole +func (c *IAM) DeleteRole(input *DeleteRoleInput) (*DeleteRoleOutput, error) { + req, out := c.DeleteRoleRequest(input) + return out, req.Send() +} + +// DeleteRoleWithContext is the same as DeleteRole with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteRoleWithContext(ctx aws.Context, input *DeleteRoleInput, opts ...request.Option) (*DeleteRoleOutput, error) { + req, out := c.DeleteRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteRolePolicy = "DeleteRolePolicy" + +// DeleteRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteRolePolicy for more information on using the DeleteRolePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteRolePolicyRequest method. +// req, resp := client.DeleteRolePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRolePolicy +func (c *IAM) DeleteRolePolicyRequest(input *DeleteRolePolicyInput) (req *request.Request, output *DeleteRolePolicyOutput) { + op := &request.Operation{ + Name: opDeleteRolePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteRolePolicyInput{} + } + + output = &DeleteRolePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteRolePolicy API operation for AWS Identity and Access Management. +// +// Deletes the specified inline policy that is embedded in the specified IAM +// role. +// +// A role can also have managed policies attached to it. To detach a managed +// policy from a role, use DetachRolePolicy. For more information about policies, +// refer to Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
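+//
+// A minimal sketch of emptying a role before deleting it, assuming "svc" is
+// an *iam.IAM client, "example-role" is a placeholder, and error handling and
+// pagination are omitted: delete inline policies, detach managed policies,
+// then delete the role.
+//
+//    role := aws.String("example-role")
+//    inline, _ := svc.ListRolePolicies(&iam.ListRolePoliciesInput{RoleName: role})
+//    for _, name := range inline.PolicyNames {
+//        svc.DeleteRolePolicy(&iam.DeleteRolePolicyInput{RoleName: role, PolicyName: name})
+//    }
+//    attached, _ := svc.ListAttachedRolePolicies(&iam.ListAttachedRolePoliciesInput{RoleName: role})
+//    for _, p := range attached.AttachedPolicies {
+//        svc.DetachRolePolicy(&iam.DetachRolePolicyInput{RoleName: role, PolicyArn: p.PolicyArn})
+//    }
+//    svc.DeleteRole(&iam.DeleteRoleInput{RoleName: role})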
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteRolePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRolePolicy +func (c *IAM) DeleteRolePolicy(input *DeleteRolePolicyInput) (*DeleteRolePolicyOutput, error) { + req, out := c.DeleteRolePolicyRequest(input) + return out, req.Send() +} + +// DeleteRolePolicyWithContext is the same as DeleteRolePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteRolePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteRolePolicyWithContext(ctx aws.Context, input *DeleteRolePolicyInput, opts ...request.Option) (*DeleteRolePolicyOutput, error) { + req, out := c.DeleteRolePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSAMLProvider = "DeleteSAMLProvider" + +// DeleteSAMLProviderRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSAMLProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSAMLProvider for more information on using the DeleteSAMLProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSAMLProviderRequest method. 
+// req, resp := client.DeleteSAMLProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSAMLProvider +func (c *IAM) DeleteSAMLProviderRequest(input *DeleteSAMLProviderInput) (req *request.Request, output *DeleteSAMLProviderOutput) { + op := &request.Operation{ + Name: opDeleteSAMLProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSAMLProviderInput{} + } + + output = &DeleteSAMLProviderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteSAMLProvider API operation for AWS Identity and Access Management. +// +// Deletes a SAML provider resource in IAM. +// +// Deleting the provider resource from IAM does not update any roles that reference +// the SAML provider resource's ARN as a principal in their trust policies. +// Any attempt to assume a role that references a non-existent provider resource +// ARN fails. +// +// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteSAMLProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSAMLProvider +func (c *IAM) DeleteSAMLProvider(input *DeleteSAMLProviderInput) (*DeleteSAMLProviderOutput, error) { + req, out := c.DeleteSAMLProviderRequest(input) + return out, req.Send() +} + +// DeleteSAMLProviderWithContext is the same as DeleteSAMLProvider with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSAMLProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteSAMLProviderWithContext(ctx aws.Context, input *DeleteSAMLProviderInput, opts ...request.Option) (*DeleteSAMLProviderOutput, error) { + req, out := c.DeleteSAMLProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteSSHPublicKey = "DeleteSSHPublicKey" + +// DeleteSSHPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSSHPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSSHPublicKey for more information on using the DeleteSSHPublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSSHPublicKeyRequest method. +// req, resp := client.DeleteSSHPublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSSHPublicKey +func (c *IAM) DeleteSSHPublicKeyRequest(input *DeleteSSHPublicKeyInput) (req *request.Request, output *DeleteSSHPublicKeyOutput) { + op := &request.Operation{ + Name: opDeleteSSHPublicKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSSHPublicKeyInput{} + } + + output = &DeleteSSHPublicKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteSSHPublicKey API operation for AWS Identity and Access Management. +// +// Deletes the specified SSH public key. +// +// The SSH public key deleted by this operation is used only for authenticating +// the associated IAM user to an AWS CodeCommit repository. For more information +// about using SSH keys to authenticate to an AWS CodeCommit repository, see +// Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) +// in the AWS CodeCommit User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteSSHPublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSSHPublicKey +func (c *IAM) DeleteSSHPublicKey(input *DeleteSSHPublicKeyInput) (*DeleteSSHPublicKeyOutput, error) { + req, out := c.DeleteSSHPublicKeyRequest(input) + return out, req.Send() +} + +// DeleteSSHPublicKeyWithContext is the same as DeleteSSHPublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSSHPublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *IAM) DeleteSSHPublicKeyWithContext(ctx aws.Context, input *DeleteSSHPublicKeyInput, opts ...request.Option) (*DeleteSSHPublicKeyOutput, error) { + req, out := c.DeleteSSHPublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteServerCertificate = "DeleteServerCertificate" + +// DeleteServerCertificateRequest generates a "aws/request.Request" representing the +// client's request for the DeleteServerCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteServerCertificate for more information on using the DeleteServerCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteServerCertificateRequest method. +// req, resp := client.DeleteServerCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServerCertificate +func (c *IAM) DeleteServerCertificateRequest(input *DeleteServerCertificateInput) (req *request.Request, output *DeleteServerCertificateOutput) { + op := &request.Operation{ + Name: opDeleteServerCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteServerCertificateInput{} + } + + output = &DeleteServerCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteServerCertificate API operation for AWS Identity and Access Management. +// +// Deletes the specified server certificate. +// +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic also includes a list of AWS services that +// can use the server certificates that you manage with IAM. +// +// If you are using a server certificate with Elastic Load Balancing, deleting +// the certificate could have implications for your application. If Elastic +// Load Balancing doesn't detect the deletion of bound certificates, it may +// continue to use the certificates. This could cause Elastic Load Balancing +// to stop accepting traffic. We recommend that you remove the reference to +// the certificate from Elastic Load Balancing before using this command to +// delete the certificate. For more information, go to DeleteLoadBalancerListeners +// (http://docs.aws.amazon.com/ElasticLoadBalancing/latest/APIReference/API_DeleteLoadBalancerListeners.html) +// in the Elastic Load Balancing API Reference. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteServerCertificate for usage and error information. 
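+//
+// A minimal sketch, assuming "svc" is an *iam.IAM client, the certificate name
+// is a placeholder, and any load balancer listeners that reference the
+// certificate have already been removed:
+//
+//    _, err := svc.DeleteServerCertificate(&iam.DeleteServerCertificateInput{
+//        ServerCertificateName: aws.String("example-cert"),
+//    })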
+// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServerCertificate +func (c *IAM) DeleteServerCertificate(input *DeleteServerCertificateInput) (*DeleteServerCertificateOutput, error) { + req, out := c.DeleteServerCertificateRequest(input) + return out, req.Send() +} + +// DeleteServerCertificateWithContext is the same as DeleteServerCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteServerCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteServerCertificateWithContext(ctx aws.Context, input *DeleteServerCertificateInput, opts ...request.Option) (*DeleteServerCertificateOutput, error) { + req, out := c.DeleteServerCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteServiceLinkedRole = "DeleteServiceLinkedRole" + +// DeleteServiceLinkedRoleRequest generates a "aws/request.Request" representing the +// client's request for the DeleteServiceLinkedRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteServiceLinkedRole for more information on using the DeleteServiceLinkedRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteServiceLinkedRoleRequest method. 
+// req, resp := client.DeleteServiceLinkedRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServiceLinkedRole +func (c *IAM) DeleteServiceLinkedRoleRequest(input *DeleteServiceLinkedRoleInput) (req *request.Request, output *DeleteServiceLinkedRoleOutput) { + op := &request.Operation{ + Name: opDeleteServiceLinkedRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteServiceLinkedRoleInput{} + } + + output = &DeleteServiceLinkedRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteServiceLinkedRole API operation for AWS Identity and Access Management. +// +// Submits a service-linked role deletion request and returns a DeletionTaskId, +// which you can use to check the status of the deletion. Before you call this +// operation, confirm that the role has no active sessions and that any resources +// used by the role in the linked service are deleted. If you call this operation +// more than once for the same service-linked role and an earlier deletion task +// is not complete, then the DeletionTaskId of the earlier request is returned. +// +// If you submit a deletion request for a service-linked role whose linked service +// is still accessing a resource, then the deletion task fails. If it fails, +// the GetServiceLinkedRoleDeletionStatus API operation returns the reason for +// the failure, usually including the resources that must be deleted. To delete +// the service-linked role, you must first remove those resources from the linked +// service and then submit the deletion request again. Resources are specific +// to the service that is linked to the role. For more information about removing +// resources from a service, see the AWS documentation (http://docs.aws.amazon.com/) +// for your service. +// +// For more information about service-linked roles, see Roles Terms and Concepts: +// AWS Service-Linked Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteServiceLinkedRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
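+//
+// A minimal sketch of submitting the deletion and checking the returned task,
+// assuming "svc" is an *iam.IAM client and the role name is a placeholder:
+//
+//    out, err := svc.DeleteServiceLinkedRole(&iam.DeleteServiceLinkedRoleInput{
+//        RoleName: aws.String("AWSServiceRoleForExample"), // hypothetical service-linked role
+//    })
+//    if err == nil {
+//        status, _ := svc.GetServiceLinkedRoleDeletionStatus(&iam.GetServiceLinkedRoleDeletionStatusInput{
+//            DeletionTaskId: out.DeletionTaskId,
+//        })
+//        fmt.Println(aws.StringValue(status.Status)) // e.g. IN_PROGRESS or SUCCEEDED
+//    }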
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServiceLinkedRole +func (c *IAM) DeleteServiceLinkedRole(input *DeleteServiceLinkedRoleInput) (*DeleteServiceLinkedRoleOutput, error) { + req, out := c.DeleteServiceLinkedRoleRequest(input) + return out, req.Send() +} + +// DeleteServiceLinkedRoleWithContext is the same as DeleteServiceLinkedRole with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteServiceLinkedRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteServiceLinkedRoleWithContext(ctx aws.Context, input *DeleteServiceLinkedRoleInput, opts ...request.Option) (*DeleteServiceLinkedRoleOutput, error) { + req, out := c.DeleteServiceLinkedRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteServiceSpecificCredential = "DeleteServiceSpecificCredential" + +// DeleteServiceSpecificCredentialRequest generates a "aws/request.Request" representing the +// client's request for the DeleteServiceSpecificCredential operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteServiceSpecificCredential for more information on using the DeleteServiceSpecificCredential +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteServiceSpecificCredentialRequest method. +// req, resp := client.DeleteServiceSpecificCredentialRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServiceSpecificCredential +func (c *IAM) DeleteServiceSpecificCredentialRequest(input *DeleteServiceSpecificCredentialInput) (req *request.Request, output *DeleteServiceSpecificCredentialOutput) { + op := &request.Operation{ + Name: opDeleteServiceSpecificCredential, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteServiceSpecificCredentialInput{} + } + + output = &DeleteServiceSpecificCredentialOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteServiceSpecificCredential API operation for AWS Identity and Access Management. +// +// Deletes the specified service-specific credential. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteServiceSpecificCredential for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServiceSpecificCredential +func (c *IAM) DeleteServiceSpecificCredential(input *DeleteServiceSpecificCredentialInput) (*DeleteServiceSpecificCredentialOutput, error) { + req, out := c.DeleteServiceSpecificCredentialRequest(input) + return out, req.Send() +} + +// DeleteServiceSpecificCredentialWithContext is the same as DeleteServiceSpecificCredential with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteServiceSpecificCredential for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteServiceSpecificCredentialWithContext(ctx aws.Context, input *DeleteServiceSpecificCredentialInput, opts ...request.Option) (*DeleteServiceSpecificCredentialOutput, error) { + req, out := c.DeleteServiceSpecificCredentialRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSigningCertificate = "DeleteSigningCertificate" + +// DeleteSigningCertificateRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSigningCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSigningCertificate for more information on using the DeleteSigningCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSigningCertificateRequest method. +// req, resp := client.DeleteSigningCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSigningCertificate +func (c *IAM) DeleteSigningCertificateRequest(input *DeleteSigningCertificateInput) (req *request.Request, output *DeleteSigningCertificateOutput) { + op := &request.Operation{ + Name: opDeleteSigningCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSigningCertificateInput{} + } + + output = &DeleteSigningCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteSigningCertificate API operation for AWS Identity and Access Management. +// +// Deletes a signing certificate associated with the specified IAM user. +// +// If you do not specify a user name, IAM determines the user name implicitly +// based on the AWS access key ID signing the request. 
Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// IAM users. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteSigningCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSigningCertificate +func (c *IAM) DeleteSigningCertificate(input *DeleteSigningCertificateInput) (*DeleteSigningCertificateOutput, error) { + req, out := c.DeleteSigningCertificateRequest(input) + return out, req.Send() +} + +// DeleteSigningCertificateWithContext is the same as DeleteSigningCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSigningCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteSigningCertificateWithContext(ctx aws.Context, input *DeleteSigningCertificateInput, opts ...request.Option) (*DeleteSigningCertificateOutput, error) { + req, out := c.DeleteSigningCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteUser = "DeleteUser" + +// DeleteUserRequest generates a "aws/request.Request" representing the +// client's request for the DeleteUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteUser for more information on using the DeleteUser +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteUserRequest method. 
+// req, resp := client.DeleteUserRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUser +func (c *IAM) DeleteUserRequest(input *DeleteUserInput) (req *request.Request, output *DeleteUserOutput) { + op := &request.Operation{ + Name: opDeleteUser, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteUserInput{} + } + + output = &DeleteUserOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteUser API operation for AWS Identity and Access Management. +// +// Deletes the specified IAM user. The user must not belong to any groups or +// have any access keys, signing certificates, or attached policies. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteUser for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUser +func (c *IAM) DeleteUser(input *DeleteUserInput) (*DeleteUserOutput, error) { + req, out := c.DeleteUserRequest(input) + return out, req.Send() +} + +// DeleteUserWithContext is the same as DeleteUser with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteUser for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteUserWithContext(ctx aws.Context, input *DeleteUserInput, opts ...request.Option) (*DeleteUserOutput, error) { + req, out := c.DeleteUserRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteUserPolicy = "DeleteUserPolicy" + +// DeleteUserPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteUserPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
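+//
+// Illustrative sketch only (not generated documentation): deleting an inline
+// user policy and inspecting the service error code with a runtime type
+// assertion on awserr.Error. Names are placeholders; aws, awserr, session and
+// iam imports are assumed.
+//
+// svc := iam.New(session.Must(session.NewSession()))
+// _, err := svc.DeleteUserPolicy(&iam.DeleteUserPolicyInput{
+//     UserName:   aws.String("example-user"),   // placeholder
+//     PolicyName: aws.String("example-policy"), // placeholder
+// })
+// if aerr, ok := err.(awserr.Error); ok && aerr.Code() == iam.ErrCodeNoSuchEntityException {
+//     fmt.Println("user or policy not found:", aerr.Message())
+// }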
+// +// See DeleteUserPolicy for more information on using the DeleteUserPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteUserPolicyRequest method. +// req, resp := client.DeleteUserPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUserPolicy +func (c *IAM) DeleteUserPolicyRequest(input *DeleteUserPolicyInput) (req *request.Request, output *DeleteUserPolicyOutput) { + op := &request.Operation{ + Name: opDeleteUserPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteUserPolicyInput{} + } + + output = &DeleteUserPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteUserPolicy API operation for AWS Identity and Access Management. +// +// Deletes the specified inline policy that is embedded in the specified IAM +// user. +// +// A user can also have managed policies attached to it. To detach a managed +// policy from a user, use DetachUserPolicy. For more information about policies, +// refer to Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteUserPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUserPolicy +func (c *IAM) DeleteUserPolicy(input *DeleteUserPolicyInput) (*DeleteUserPolicyOutput, error) { + req, out := c.DeleteUserPolicyRequest(input) + return out, req.Send() +} + +// DeleteUserPolicyWithContext is the same as DeleteUserPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteUserPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteUserPolicyWithContext(ctx aws.Context, input *DeleteUserPolicyInput, opts ...request.Option) (*DeleteUserPolicyOutput, error) { + req, out := c.DeleteUserPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteVirtualMFADevice = "DeleteVirtualMFADevice" + +// DeleteVirtualMFADeviceRequest generates a "aws/request.Request" representing the +// client's request for the DeleteVirtualMFADevice operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteVirtualMFADevice for more information on using the DeleteVirtualMFADevice +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteVirtualMFADeviceRequest method. +// req, resp := client.DeleteVirtualMFADeviceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteVirtualMFADevice +func (c *IAM) DeleteVirtualMFADeviceRequest(input *DeleteVirtualMFADeviceInput) (req *request.Request, output *DeleteVirtualMFADeviceOutput) { + op := &request.Operation{ + Name: opDeleteVirtualMFADevice, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteVirtualMFADeviceInput{} + } + + output = &DeleteVirtualMFADeviceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteVirtualMFADevice API operation for AWS Identity and Access Management. +// +// Deletes a virtual MFA device. +// +// You must deactivate a user's virtual MFA device before you can delete it. +// For information about deactivating MFA devices, see DeactivateMFADevice. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteVirtualMFADevice for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeDeleteConflictException "DeleteConflict" +// The request was rejected because it attempted to delete a resource that has +// attached subordinate entities. The error message describes these entities. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
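+//
+// Illustrative sketch only (not generated documentation): deactivating a
+// user's virtual MFA device and then deleting it, as the description above
+// requires. The user name and serial number are placeholders; aws, session
+// and iam imports are assumed.
+//
+// svc := iam.New(session.Must(session.NewSession()))
+// serial := aws.String("arn:aws:iam::111122223333:mfa/example-user") // placeholder
+// _, err := svc.DeactivateMFADevice(&iam.DeactivateMFADeviceInput{
+//     UserName:     aws.String("example-user"), // placeholder
+//     SerialNumber: serial,
+// })
+// if err == nil {
+//     _, err = svc.DeleteVirtualMFADevice(&iam.DeleteVirtualMFADeviceInput{SerialNumber: serial})
+// }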
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteVirtualMFADevice +func (c *IAM) DeleteVirtualMFADevice(input *DeleteVirtualMFADeviceInput) (*DeleteVirtualMFADeviceOutput, error) { + req, out := c.DeleteVirtualMFADeviceRequest(input) + return out, req.Send() +} + +// DeleteVirtualMFADeviceWithContext is the same as DeleteVirtualMFADevice with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteVirtualMFADevice for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteVirtualMFADeviceWithContext(ctx aws.Context, input *DeleteVirtualMFADeviceInput, opts ...request.Option) (*DeleteVirtualMFADeviceOutput, error) { + req, out := c.DeleteVirtualMFADeviceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetachGroupPolicy = "DetachGroupPolicy" + +// DetachGroupPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DetachGroupPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DetachGroupPolicy for more information on using the DetachGroupPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DetachGroupPolicyRequest method. +// req, resp := client.DetachGroupPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachGroupPolicy +func (c *IAM) DetachGroupPolicyRequest(input *DetachGroupPolicyInput) (req *request.Request, output *DetachGroupPolicyOutput) { + op := &request.Operation{ + Name: opDetachGroupPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DetachGroupPolicyInput{} + } + + output = &DetachGroupPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DetachGroupPolicy API operation for AWS Identity and Access Management. +// +// Removes the specified managed policy from the specified IAM group. +// +// A group can also have inline policies embedded with it. To delete an inline +// policy, use the DeleteGroupPolicy API. For information about policies, see +// Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DetachGroupPolicy for usage and error information. 
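+//
+// Illustrative sketch only (not generated documentation): detaching a managed
+// policy from a group by ARN. The group name and policy ARN are placeholders;
+// aws, session and iam imports are assumed.
+//
+// svc := iam.New(session.Must(session.NewSession()))
+// _, err := svc.DetachGroupPolicy(&iam.DetachGroupPolicyInput{
+//     GroupName: aws.String("example-group"),                          // placeholder
+//     PolicyArn: aws.String("arn:aws:iam::aws:policy/ReadOnlyAccess"), // placeholder
+// })
+// if err != nil {
+//     fmt.Println("detach failed:", err)
+// }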
+// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachGroupPolicy +func (c *IAM) DetachGroupPolicy(input *DetachGroupPolicyInput) (*DetachGroupPolicyOutput, error) { + req, out := c.DetachGroupPolicyRequest(input) + return out, req.Send() +} + +// DetachGroupPolicyWithContext is the same as DetachGroupPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DetachGroupPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DetachGroupPolicyWithContext(ctx aws.Context, input *DetachGroupPolicyInput, opts ...request.Option) (*DetachGroupPolicyOutput, error) { + req, out := c.DetachGroupPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetachRolePolicy = "DetachRolePolicy" + +// DetachRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DetachRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DetachRolePolicy for more information on using the DetachRolePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DetachRolePolicyRequest method. +// req, resp := client.DetachRolePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachRolePolicy +func (c *IAM) DetachRolePolicyRequest(input *DetachRolePolicyInput) (req *request.Request, output *DetachRolePolicyOutput) { + op := &request.Operation{ + Name: opDetachRolePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DetachRolePolicyInput{} + } + + output = &DetachRolePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DetachRolePolicy API operation for AWS Identity and Access Management. +// +// Removes the specified managed policy from the specified role. 
+// +// A role can also have inline policies embedded with it. To delete an inline +// policy, use the DeleteRolePolicy API. For information about policies, see +// Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DetachRolePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachRolePolicy +func (c *IAM) DetachRolePolicy(input *DetachRolePolicyInput) (*DetachRolePolicyOutput, error) { + req, out := c.DetachRolePolicyRequest(input) + return out, req.Send() +} + +// DetachRolePolicyWithContext is the same as DetachRolePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DetachRolePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DetachRolePolicyWithContext(ctx aws.Context, input *DetachRolePolicyInput, opts ...request.Option) (*DetachRolePolicyOutput, error) { + req, out := c.DetachRolePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetachUserPolicy = "DetachUserPolicy" + +// DetachUserPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DetachUserPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DetachUserPolicy for more information on using the DetachUserPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
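+//
+// Illustrative sketch only (not generated documentation): one way to inject a
+// custom header before sending, using the request handle returned by this
+// method. The header name and value are placeholders.
+//
+// req, resp := client.DetachUserPolicyRequest(params)
+// req.HTTPRequest.Header.Set("X-Example-Trace-Id", "demo") // placeholder header
+// if err := req.Send(); err == nil {
+//     fmt.Println(resp)
+// }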
+// +// +// // Example sending a request using the DetachUserPolicyRequest method. +// req, resp := client.DetachUserPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachUserPolicy +func (c *IAM) DetachUserPolicyRequest(input *DetachUserPolicyInput) (req *request.Request, output *DetachUserPolicyOutput) { + op := &request.Operation{ + Name: opDetachUserPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DetachUserPolicyInput{} + } + + output = &DetachUserPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DetachUserPolicy API operation for AWS Identity and Access Management. +// +// Removes the specified managed policy from the specified user. +// +// A user can also have inline policies embedded with it. To delete an inline +// policy, use the DeleteUserPolicy API. For information about policies, see +// Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DetachUserPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachUserPolicy +func (c *IAM) DetachUserPolicy(input *DetachUserPolicyInput) (*DetachUserPolicyOutput, error) { + req, out := c.DetachUserPolicyRequest(input) + return out, req.Send() +} + +// DetachUserPolicyWithContext is the same as DetachUserPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DetachUserPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DetachUserPolicyWithContext(ctx aws.Context, input *DetachUserPolicyInput, opts ...request.Option) (*DetachUserPolicyOutput, error) { + req, out := c.DetachUserPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opEnableMFADevice = "EnableMFADevice" + +// EnableMFADeviceRequest generates a "aws/request.Request" representing the +// client's request for the EnableMFADevice operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See EnableMFADevice for more information on using the EnableMFADevice +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the EnableMFADeviceRequest method. +// req, resp := client.EnableMFADeviceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/EnableMFADevice +func (c *IAM) EnableMFADeviceRequest(input *EnableMFADeviceInput) (req *request.Request, output *EnableMFADeviceOutput) { + op := &request.Operation{ + Name: opEnableMFADevice, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &EnableMFADeviceInput{} + } + + output = &EnableMFADeviceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// EnableMFADevice API operation for AWS Identity and Access Management. +// +// Enables the specified MFA device and associates it with the specified IAM +// user. When enabled, the MFA device is required for every subsequent login +// by the IAM user associated with the device. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation EnableMFADevice for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodeInvalidAuthenticationCodeException "InvalidAuthenticationCode" +// The request was rejected because the authentication code was not recognized. +// The error message describes the specific error. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/EnableMFADevice +func (c *IAM) EnableMFADevice(input *EnableMFADeviceInput) (*EnableMFADeviceOutput, error) { + req, out := c.EnableMFADeviceRequest(input) + return out, req.Send() +} + +// EnableMFADeviceWithContext is the same as EnableMFADevice with the addition of +// the ability to pass a context and additional request options. +// +// See EnableMFADevice for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) EnableMFADeviceWithContext(ctx aws.Context, input *EnableMFADeviceInput, opts ...request.Option) (*EnableMFADeviceOutput, error) { + req, out := c.EnableMFADeviceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGenerateCredentialReport = "GenerateCredentialReport" + +// GenerateCredentialReportRequest generates a "aws/request.Request" representing the +// client's request for the GenerateCredentialReport operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GenerateCredentialReport for more information on using the GenerateCredentialReport +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GenerateCredentialReportRequest method. +// req, resp := client.GenerateCredentialReportRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GenerateCredentialReport +func (c *IAM) GenerateCredentialReportRequest(input *GenerateCredentialReportInput) (req *request.Request, output *GenerateCredentialReportOutput) { + op := &request.Operation{ + Name: opGenerateCredentialReport, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GenerateCredentialReportInput{} + } + + output = &GenerateCredentialReportOutput{} + req = c.newRequest(op, input, output) + return +} + +// GenerateCredentialReport API operation for AWS Identity and Access Management. +// +// Generates a credential report for the AWS account. For more information about +// the credential report, see Getting Credential Reports (http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GenerateCredentialReport for usage and error information. 
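+//
+// Illustrative sketch only (not generated documentation): kicking off report
+// generation and, once the returned State is COMPLETE, fetching the report
+// with GetCredentialReport. Polling and backoff are omitted for brevity; aws,
+// session and iam imports are assumed.
+//
+// svc := iam.New(session.Must(session.NewSession()))
+// gen, err := svc.GenerateCredentialReport(&iam.GenerateCredentialReportInput{})
+// if err == nil && aws.StringValue(gen.State) == "COMPLETE" {
+//     report, _ := svc.GetCredentialReport(&iam.GetCredentialReportInput{})
+//     fmt.Println(string(report.Content)) // CSV content of the credential report
+// }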
+// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GenerateCredentialReport +func (c *IAM) GenerateCredentialReport(input *GenerateCredentialReportInput) (*GenerateCredentialReportOutput, error) { + req, out := c.GenerateCredentialReportRequest(input) + return out, req.Send() +} + +// GenerateCredentialReportWithContext is the same as GenerateCredentialReport with the addition of +// the ability to pass a context and additional request options. +// +// See GenerateCredentialReport for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GenerateCredentialReportWithContext(ctx aws.Context, input *GenerateCredentialReportInput, opts ...request.Option) (*GenerateCredentialReportOutput, error) { + req, out := c.GenerateCredentialReportRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetAccessKeyLastUsed = "GetAccessKeyLastUsed" + +// GetAccessKeyLastUsedRequest generates a "aws/request.Request" representing the +// client's request for the GetAccessKeyLastUsed operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAccessKeyLastUsed for more information on using the GetAccessKeyLastUsed +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAccessKeyLastUsedRequest method. +// req, resp := client.GetAccessKeyLastUsedRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccessKeyLastUsed +func (c *IAM) GetAccessKeyLastUsedRequest(input *GetAccessKeyLastUsedInput) (req *request.Request, output *GetAccessKeyLastUsedOutput) { + op := &request.Operation{ + Name: opGetAccessKeyLastUsed, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetAccessKeyLastUsedInput{} + } + + output = &GetAccessKeyLastUsedOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAccessKeyLastUsed API operation for AWS Identity and Access Management. +// +// Retrieves information about when the specified access key was last used. +// The information includes the date and time of last use, along with the AWS +// service and region that were specified in the last request made with that +// key. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetAccessKeyLastUsed for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccessKeyLastUsed +func (c *IAM) GetAccessKeyLastUsed(input *GetAccessKeyLastUsedInput) (*GetAccessKeyLastUsedOutput, error) { + req, out := c.GetAccessKeyLastUsedRequest(input) + return out, req.Send() +} + +// GetAccessKeyLastUsedWithContext is the same as GetAccessKeyLastUsed with the addition of +// the ability to pass a context and additional request options. +// +// See GetAccessKeyLastUsed for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetAccessKeyLastUsedWithContext(ctx aws.Context, input *GetAccessKeyLastUsedInput, opts ...request.Option) (*GetAccessKeyLastUsedOutput, error) { + req, out := c.GetAccessKeyLastUsedRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetAccountAuthorizationDetails = "GetAccountAuthorizationDetails" + +// GetAccountAuthorizationDetailsRequest generates a "aws/request.Request" representing the +// client's request for the GetAccountAuthorizationDetails operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAccountAuthorizationDetails for more information on using the GetAccountAuthorizationDetails +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAccountAuthorizationDetailsRequest method. +// req, resp := client.GetAccountAuthorizationDetailsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccountAuthorizationDetails +func (c *IAM) GetAccountAuthorizationDetailsRequest(input *GetAccountAuthorizationDetailsInput) (req *request.Request, output *GetAccountAuthorizationDetailsOutput) { + op := &request.Operation{ + Name: opGetAccountAuthorizationDetails, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &GetAccountAuthorizationDetailsInput{} + } + + output = &GetAccountAuthorizationDetailsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAccountAuthorizationDetails API operation for AWS Identity and Access Management. 
+// +// Retrieves information about all IAM users, groups, roles, and policies in +// your AWS account, including their relationships to one another. Use this +// API to obtain a snapshot of the configuration of IAM permissions (users, +// groups, roles, and policies) in your account. +// +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). +// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// +// You can optionally filter the results using the Filter parameter. You can +// paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetAccountAuthorizationDetails for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccountAuthorizationDetails +func (c *IAM) GetAccountAuthorizationDetails(input *GetAccountAuthorizationDetailsInput) (*GetAccountAuthorizationDetailsOutput, error) { + req, out := c.GetAccountAuthorizationDetailsRequest(input) + return out, req.Send() +} + +// GetAccountAuthorizationDetailsWithContext is the same as GetAccountAuthorizationDetails with the addition of +// the ability to pass a context and additional request options. +// +// See GetAccountAuthorizationDetails for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetAccountAuthorizationDetailsWithContext(ctx aws.Context, input *GetAccountAuthorizationDetailsInput, opts ...request.Option) (*GetAccountAuthorizationDetailsOutput, error) { + req, out := c.GetAccountAuthorizationDetailsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetAccountAuthorizationDetailsPages iterates over the pages of a GetAccountAuthorizationDetails operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetAccountAuthorizationDetails method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetAccountAuthorizationDetails operation. 
+// pageNum := 0 +// err := client.GetAccountAuthorizationDetailsPages(params, +// func(page *GetAccountAuthorizationDetailsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) GetAccountAuthorizationDetailsPages(input *GetAccountAuthorizationDetailsInput, fn func(*GetAccountAuthorizationDetailsOutput, bool) bool) error { + return c.GetAccountAuthorizationDetailsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetAccountAuthorizationDetailsPagesWithContext same as GetAccountAuthorizationDetailsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetAccountAuthorizationDetailsPagesWithContext(ctx aws.Context, input *GetAccountAuthorizationDetailsInput, fn func(*GetAccountAuthorizationDetailsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetAccountAuthorizationDetailsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetAccountAuthorizationDetailsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetAccountAuthorizationDetailsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetAccountPasswordPolicy = "GetAccountPasswordPolicy" + +// GetAccountPasswordPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetAccountPasswordPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAccountPasswordPolicy for more information on using the GetAccountPasswordPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAccountPasswordPolicyRequest method. +// req, resp := client.GetAccountPasswordPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccountPasswordPolicy +func (c *IAM) GetAccountPasswordPolicyRequest(input *GetAccountPasswordPolicyInput) (req *request.Request, output *GetAccountPasswordPolicyOutput) { + op := &request.Operation{ + Name: opGetAccountPasswordPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetAccountPasswordPolicyInput{} + } + + output = &GetAccountPasswordPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAccountPasswordPolicy API operation for AWS Identity and Access Management. +// +// Retrieves the password policy for the AWS account. For more information about +// using a password policy, go to Managing an IAM Password Policy (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html). 
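+//
+// Illustrative sketch only (not generated documentation): reading a couple of
+// fields from the returned password policy. aws, session and iam imports are
+// assumed.
+//
+// svc := iam.New(session.Must(session.NewSession()))
+// out, err := svc.GetAccountPasswordPolicy(&iam.GetAccountPasswordPolicyInput{})
+// if err == nil {
+//     pp := out.PasswordPolicy
+//     fmt.Println(aws.Int64Value(pp.MinimumPasswordLength), aws.BoolValue(pp.RequireSymbols))
+// }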
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetAccountPasswordPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccountPasswordPolicy +func (c *IAM) GetAccountPasswordPolicy(input *GetAccountPasswordPolicyInput) (*GetAccountPasswordPolicyOutput, error) { + req, out := c.GetAccountPasswordPolicyRequest(input) + return out, req.Send() +} + +// GetAccountPasswordPolicyWithContext is the same as GetAccountPasswordPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetAccountPasswordPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetAccountPasswordPolicyWithContext(ctx aws.Context, input *GetAccountPasswordPolicyInput, opts ...request.Option) (*GetAccountPasswordPolicyOutput, error) { + req, out := c.GetAccountPasswordPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetAccountSummary = "GetAccountSummary" + +// GetAccountSummaryRequest generates a "aws/request.Request" representing the +// client's request for the GetAccountSummary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAccountSummary for more information on using the GetAccountSummary +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAccountSummaryRequest method. +// req, resp := client.GetAccountSummaryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccountSummary +func (c *IAM) GetAccountSummaryRequest(input *GetAccountSummaryInput) (req *request.Request, output *GetAccountSummaryOutput) { + op := &request.Operation{ + Name: opGetAccountSummary, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetAccountSummaryInput{} + } + + output = &GetAccountSummaryOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAccountSummary API operation for AWS Identity and Access Management. +// +// Retrieves information about IAM entity usage and IAM quotas in the AWS account. 
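+//
+// Illustrative sketch only (not generated documentation): reading entries from
+// the returned SummaryMap. The keys shown ("Users", "UsersQuota") are IAM
+// summary key names; aws, session and iam imports are assumed.
+//
+// svc := iam.New(session.Must(session.NewSession()))
+// out, err := svc.GetAccountSummary(&iam.GetAccountSummaryInput{})
+// if err == nil {
+//     fmt.Println("users:", aws.Int64Value(out.SummaryMap["Users"]),
+//         "quota:", aws.Int64Value(out.SummaryMap["UsersQuota"]))
+// }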
+// +// For information about limitations on IAM entities, see Limitations on IAM +// Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetAccountSummary for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetAccountSummary +func (c *IAM) GetAccountSummary(input *GetAccountSummaryInput) (*GetAccountSummaryOutput, error) { + req, out := c.GetAccountSummaryRequest(input) + return out, req.Send() +} + +// GetAccountSummaryWithContext is the same as GetAccountSummary with the addition of +// the ability to pass a context and additional request options. +// +// See GetAccountSummary for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetAccountSummaryWithContext(ctx aws.Context, input *GetAccountSummaryInput, opts ...request.Option) (*GetAccountSummaryOutput, error) { + req, out := c.GetAccountSummaryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetContextKeysForCustomPolicy = "GetContextKeysForCustomPolicy" + +// GetContextKeysForCustomPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetContextKeysForCustomPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetContextKeysForCustomPolicy for more information on using the GetContextKeysForCustomPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetContextKeysForCustomPolicyRequest method. +// req, resp := client.GetContextKeysForCustomPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetContextKeysForCustomPolicy +func (c *IAM) GetContextKeysForCustomPolicyRequest(input *GetContextKeysForCustomPolicyInput) (req *request.Request, output *GetContextKeysForPolicyResponse) { + op := &request.Operation{ + Name: opGetContextKeysForCustomPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetContextKeysForCustomPolicyInput{} + } + + output = &GetContextKeysForPolicyResponse{} + req = c.newRequest(op, input, output) + return +} + +// GetContextKeysForCustomPolicy API operation for AWS Identity and Access Management. 
+// +// Gets a list of all of the context keys referenced in the input policies. +// The policies are supplied as a list of one or more strings. To get the context +// keys from policies associated with an IAM user, group, or role, use GetContextKeysForPrincipalPolicy. +// +// Context keys are variables maintained by AWS and its services that provide +// details about the context of an API query request. Context keys can be evaluated +// by testing against a value specified in an IAM policy. Use GetContextKeysForCustomPolicy +// to understand what key names and values you must supply when you call SimulateCustomPolicy. +// Note that all parameters are shown in unencoded form here for clarity but +// must be URL encoded to be included as a part of a real HTML request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetContextKeysForCustomPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetContextKeysForCustomPolicy +func (c *IAM) GetContextKeysForCustomPolicy(input *GetContextKeysForCustomPolicyInput) (*GetContextKeysForPolicyResponse, error) { + req, out := c.GetContextKeysForCustomPolicyRequest(input) + return out, req.Send() +} + +// GetContextKeysForCustomPolicyWithContext is the same as GetContextKeysForCustomPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetContextKeysForCustomPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetContextKeysForCustomPolicyWithContext(ctx aws.Context, input *GetContextKeysForCustomPolicyInput, opts ...request.Option) (*GetContextKeysForPolicyResponse, error) { + req, out := c.GetContextKeysForCustomPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetContextKeysForPrincipalPolicy = "GetContextKeysForPrincipalPolicy" + +// GetContextKeysForPrincipalPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetContextKeysForPrincipalPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetContextKeysForPrincipalPolicy for more information on using the GetContextKeysForPrincipalPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetContextKeysForPrincipalPolicyRequest method. 
+// req, resp := client.GetContextKeysForPrincipalPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetContextKeysForPrincipalPolicy +func (c *IAM) GetContextKeysForPrincipalPolicyRequest(input *GetContextKeysForPrincipalPolicyInput) (req *request.Request, output *GetContextKeysForPolicyResponse) { + op := &request.Operation{ + Name: opGetContextKeysForPrincipalPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetContextKeysForPrincipalPolicyInput{} + } + + output = &GetContextKeysForPolicyResponse{} + req = c.newRequest(op, input, output) + return +} + +// GetContextKeysForPrincipalPolicy API operation for AWS Identity and Access Management. +// +// Gets a list of all of the context keys referenced in all the IAM policies +// that are attached to the specified IAM entity. The entity can be an IAM user, +// group, or role. If you specify a user, then the request also includes all +// of the policies attached to groups that the user is a member of. +// +// You can optionally include a list of one or more additional policies, specified +// as strings. If you want to include only a list of policies by string, use +// GetContextKeysForCustomPolicy instead. +// +// Note: This API discloses information about the permissions granted to other +// users. If you do not want users to see other user's permissions, then consider +// allowing them to use GetContextKeysForCustomPolicy instead. +// +// Context keys are variables maintained by AWS and its services that provide +// details about the context of an API query request. Context keys can be evaluated +// by testing against a value in an IAM policy. Use GetContextKeysForPrincipalPolicy +// to understand what key names and values you must supply when you call SimulatePrincipalPolicy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetContextKeysForPrincipalPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetContextKeysForPrincipalPolicy +func (c *IAM) GetContextKeysForPrincipalPolicy(input *GetContextKeysForPrincipalPolicyInput) (*GetContextKeysForPolicyResponse, error) { + req, out := c.GetContextKeysForPrincipalPolicyRequest(input) + return out, req.Send() +} + +// GetContextKeysForPrincipalPolicyWithContext is the same as GetContextKeysForPrincipalPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetContextKeysForPrincipalPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
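+//
+// A minimal usage sketch with a request timeout (assumes an existing,
+// pre-configured *IAM client named svc, a hypothetical principal ARN, and
+// that the context, time, fmt, aws, and iam packages are imported):
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    resp, err := svc.GetContextKeysForPrincipalPolicyWithContext(ctx, &iam.GetContextKeysForPrincipalPolicyInput{
+//        PolicySourceArn: aws.String("arn:aws:iam::123456789012:user/example-user"), // hypothetical ARN
+//    })
+//    if err == nil {
+//        fmt.Println(resp.ContextKeyNames)
+//    }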
+func (c *IAM) GetContextKeysForPrincipalPolicyWithContext(ctx aws.Context, input *GetContextKeysForPrincipalPolicyInput, opts ...request.Option) (*GetContextKeysForPolicyResponse, error) { + req, out := c.GetContextKeysForPrincipalPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCredentialReport = "GetCredentialReport" + +// GetCredentialReportRequest generates a "aws/request.Request" representing the +// client's request for the GetCredentialReport operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCredentialReport for more information on using the GetCredentialReport +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCredentialReportRequest method. +// req, resp := client.GetCredentialReportRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetCredentialReport +func (c *IAM) GetCredentialReportRequest(input *GetCredentialReportInput) (req *request.Request, output *GetCredentialReportOutput) { + op := &request.Operation{ + Name: opGetCredentialReport, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetCredentialReportInput{} + } + + output = &GetCredentialReportOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCredentialReport API operation for AWS Identity and Access Management. +// +// Retrieves a credential report for the AWS account. For more information about +// the credential report, see Getting Credential Reports (http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetCredentialReport for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCredentialReportNotPresentException "ReportNotPresent" +// The request was rejected because the credential report does not exist. To +// generate a credential report, use GenerateCredentialReport. +// +// * ErrCodeCredentialReportExpiredException "ReportExpired" +// The request was rejected because the most recent credential report has expired. +// To generate a new credential report, use GenerateCredentialReport. For more +// information about credential report expiration, see Getting Credential Reports +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html) +// in the IAM User Guide. +// +// * ErrCodeCredentialReportNotReadyException "ReportInProgress" +// The request was rejected because the credential report is still being generated. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetCredentialReport +func (c *IAM) GetCredentialReport(input *GetCredentialReportInput) (*GetCredentialReportOutput, error) { + req, out := c.GetCredentialReportRequest(input) + return out, req.Send() +} + +// GetCredentialReportWithContext is the same as GetCredentialReport with the addition of +// the ability to pass a context and additional request options. +// +// See GetCredentialReport for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetCredentialReportWithContext(ctx aws.Context, input *GetCredentialReportInput, opts ...request.Option) (*GetCredentialReportOutput, error) { + req, out := c.GetCredentialReportRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetGroup = "GetGroup" + +// GetGroupRequest generates a "aws/request.Request" representing the +// client's request for the GetGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetGroup for more information on using the GetGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetGroupRequest method. +// req, resp := client.GetGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetGroup +func (c *IAM) GetGroupRequest(input *GetGroupInput) (req *request.Request, output *GetGroupOutput) { + op := &request.Operation{ + Name: opGetGroup, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &GetGroupInput{} + } + + output = &GetGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetGroup API operation for AWS Identity and Access Management. +// +// Returns a list of IAM users that are in the specified IAM group. You can +// paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetGroup +func (c *IAM) GetGroup(input *GetGroupInput) (*GetGroupOutput, error) { + req, out := c.GetGroupRequest(input) + return out, req.Send() +} + +// GetGroupWithContext is the same as GetGroup with the addition of +// the ability to pass a context and additional request options. +// +// See GetGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetGroupWithContext(ctx aws.Context, input *GetGroupInput, opts ...request.Option) (*GetGroupOutput, error) { + req, out := c.GetGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetGroupPages iterates over the pages of a GetGroup operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetGroup method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetGroup operation. +// pageNum := 0 +// err := client.GetGroupPages(params, +// func(page *GetGroupOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) GetGroupPages(input *GetGroupInput, fn func(*GetGroupOutput, bool) bool) error { + return c.GetGroupPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetGroupPagesWithContext same as GetGroupPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetGroupPagesWithContext(ctx aws.Context, input *GetGroupInput, fn func(*GetGroupOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetGroupInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetGroupRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetGroupOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetGroupPolicy = "GetGroupPolicy" + +// GetGroupPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetGroupPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetGroupPolicy for more information on using the GetGroupPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetGroupPolicyRequest method. 
+// req, resp := client.GetGroupPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetGroupPolicy +func (c *IAM) GetGroupPolicyRequest(input *GetGroupPolicyInput) (req *request.Request, output *GetGroupPolicyOutput) { + op := &request.Operation{ + Name: opGetGroupPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetGroupPolicyInput{} + } + + output = &GetGroupPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetGroupPolicy API operation for AWS Identity and Access Management. +// +// Retrieves the specified inline policy document that is embedded in the specified +// IAM group. +// +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). +// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// +// An IAM group can also have managed policies attached to it. To retrieve a +// managed policy document that is attached to a group, use GetPolicy to determine +// the policy's default version, then use GetPolicyVersion to retrieve the policy +// document. +// +// For more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetGroupPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetGroupPolicy +func (c *IAM) GetGroupPolicy(input *GetGroupPolicyInput) (*GetGroupPolicyOutput, error) { + req, out := c.GetGroupPolicyRequest(input) + return out, req.Send() +} + +// GetGroupPolicyWithContext is the same as GetGroupPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetGroupPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetGroupPolicyWithContext(ctx aws.Context, input *GetGroupPolicyInput, opts ...request.Option) (*GetGroupPolicyOutput, error) { + req, out := c.GetGroupPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetInstanceProfile = "GetInstanceProfile" + +// GetInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the GetInstanceProfile operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetInstanceProfile for more information on using the GetInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetInstanceProfileRequest method. +// req, resp := client.GetInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetInstanceProfile +func (c *IAM) GetInstanceProfileRequest(input *GetInstanceProfileInput) (req *request.Request, output *GetInstanceProfileOutput) { + op := &request.Operation{ + Name: opGetInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetInstanceProfileInput{} + } + + output = &GetInstanceProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetInstanceProfile API operation for AWS Identity and Access Management. +// +// Retrieves information about the specified instance profile, including the +// instance profile's path, GUID, ARN, and role. For more information about +// instance profiles, see About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetInstanceProfile +func (c *IAM) GetInstanceProfile(input *GetInstanceProfileInput) (*GetInstanceProfileOutput, error) { + req, out := c.GetInstanceProfileRequest(input) + return out, req.Send() +} + +// GetInstanceProfileWithContext is the same as GetInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See GetInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetInstanceProfileWithContext(ctx aws.Context, input *GetInstanceProfileInput, opts ...request.Option) (*GetInstanceProfileOutput, error) { + req, out := c.GetInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetLoginProfile = "GetLoginProfile" + +// GetLoginProfileRequest generates a "aws/request.Request" representing the +// client's request for the GetLoginProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLoginProfile for more information on using the GetLoginProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLoginProfileRequest method. +// req, resp := client.GetLoginProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetLoginProfile +func (c *IAM) GetLoginProfileRequest(input *GetLoginProfileInput) (req *request.Request, output *GetLoginProfileOutput) { + op := &request.Operation{ + Name: opGetLoginProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetLoginProfileInput{} + } + + output = &GetLoginProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLoginProfile API operation for AWS Identity and Access Management. +// +// Retrieves the user name and password-creation date for the specified IAM +// user. If the user has not been assigned a password, the operation returns +// a 404 (NoSuchEntity) error. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetLoginProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetLoginProfile +func (c *IAM) GetLoginProfile(input *GetLoginProfileInput) (*GetLoginProfileOutput, error) { + req, out := c.GetLoginProfileRequest(input) + return out, req.Send() +} + +// GetLoginProfileWithContext is the same as GetLoginProfile with the addition of +// the ability to pass a context and additional request options. +// +// See GetLoginProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetLoginProfileWithContext(ctx aws.Context, input *GetLoginProfileInput, opts ...request.Option) (*GetLoginProfileOutput, error) { + req, out := c.GetLoginProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetOpenIDConnectProvider = "GetOpenIDConnectProvider" + +// GetOpenIDConnectProviderRequest generates a "aws/request.Request" representing the +// client's request for the GetOpenIDConnectProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetOpenIDConnectProvider for more information on using the GetOpenIDConnectProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetOpenIDConnectProviderRequest method. +// req, resp := client.GetOpenIDConnectProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetOpenIDConnectProvider +func (c *IAM) GetOpenIDConnectProviderRequest(input *GetOpenIDConnectProviderInput) (req *request.Request, output *GetOpenIDConnectProviderOutput) { + op := &request.Operation{ + Name: opGetOpenIDConnectProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetOpenIDConnectProviderInput{} + } + + output = &GetOpenIDConnectProviderOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetOpenIDConnectProvider API operation for AWS Identity and Access Management. +// +// Returns information about the specified OpenID Connect (OIDC) provider resource +// object in IAM. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetOpenIDConnectProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetOpenIDConnectProvider +func (c *IAM) GetOpenIDConnectProvider(input *GetOpenIDConnectProviderInput) (*GetOpenIDConnectProviderOutput, error) { + req, out := c.GetOpenIDConnectProviderRequest(input) + return out, req.Send() +} + +// GetOpenIDConnectProviderWithContext is the same as GetOpenIDConnectProvider with the addition of +// the ability to pass a context and additional request options. +// +// See GetOpenIDConnectProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *IAM) GetOpenIDConnectProviderWithContext(ctx aws.Context, input *GetOpenIDConnectProviderInput, opts ...request.Option) (*GetOpenIDConnectProviderOutput, error) { + req, out := c.GetOpenIDConnectProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetPolicy = "GetPolicy" + +// GetPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPolicy for more information on using the GetPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPolicyRequest method. +// req, resp := client.GetPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicy +func (c *IAM) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, output *GetPolicyOutput) { + op := &request.Operation{ + Name: opGetPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPolicyInput{} + } + + output = &GetPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPolicy API operation for AWS Identity and Access Management. +// +// Retrieves information about the specified managed policy, including the policy's +// default version and the total number of IAM users, groups, and roles to which +// the policy is attached. To retrieve the list of the specific users, groups, +// and roles that the policy is attached to, use the ListEntitiesForPolicy API. +// This API returns metadata about the policy. To retrieve the actual policy +// document for a specific version of the policy, use GetPolicyVersion. +// +// This API retrieves information about managed policies. To retrieve information +// about an inline policy that is embedded with an IAM user, group, or role, +// use the GetUserPolicy, GetGroupPolicy, or GetRolePolicy API. +// +// For more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicy +func (c *IAM) GetPolicy(input *GetPolicyInput) (*GetPolicyOutput, error) { + req, out := c.GetPolicyRequest(input) + return out, req.Send() +} + +// GetPolicyWithContext is the same as GetPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetPolicyWithContext(ctx aws.Context, input *GetPolicyInput, opts ...request.Option) (*GetPolicyOutput, error) { + req, out := c.GetPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetPolicyVersion = "GetPolicyVersion" + +// GetPolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the GetPolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPolicyVersion for more information on using the GetPolicyVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPolicyVersionRequest method. +// req, resp := client.GetPolicyVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicyVersion +func (c *IAM) GetPolicyVersionRequest(input *GetPolicyVersionInput) (req *request.Request, output *GetPolicyVersionOutput) { + op := &request.Operation{ + Name: opGetPolicyVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPolicyVersionInput{} + } + + output = &GetPolicyVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPolicyVersion API operation for AWS Identity and Access Management. +// +// Retrieves information about the specified version of the specified managed +// policy, including the policy document. +// +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). +// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// +// To list the available versions for a policy, use ListPolicyVersions. +// +// This API retrieves information about managed policies. To retrieve information +// about an inline policy that is embedded in a user, group, or role, use the +// GetUserPolicy, GetGroupPolicy, or GetRolePolicy API. +// +// For more information about the types of policies, see Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. 
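+//
+// In Go, the URL-encoded policy document can be decoded with net/url. A
+// minimal sketch (assumes an existing, pre-configured *IAM client named svc,
+// a hypothetical policy ARN, and that the aws, fmt, iam, and net/url packages
+// are imported):
+//
+//    out, err := svc.GetPolicyVersion(&iam.GetPolicyVersionInput{
+//        PolicyArn: aws.String("arn:aws:iam::123456789012:policy/example-policy"), // hypothetical ARN
+//        VersionId: aws.String("v1"),
+//    })
+//    if err == nil {
+//        doc, _ := url.QueryUnescape(aws.StringValue(out.PolicyVersion.Document))
+//        fmt.Println(doc) // plain JSON policy document
+//    }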
+// +// For more information about managed policy versions, see Versioning for Managed +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetPolicyVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicyVersion +func (c *IAM) GetPolicyVersion(input *GetPolicyVersionInput) (*GetPolicyVersionOutput, error) { + req, out := c.GetPolicyVersionRequest(input) + return out, req.Send() +} + +// GetPolicyVersionWithContext is the same as GetPolicyVersion with the addition of +// the ability to pass a context and additional request options. +// +// See GetPolicyVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetPolicyVersionWithContext(ctx aws.Context, input *GetPolicyVersionInput, opts ...request.Option) (*GetPolicyVersionOutput, error) { + req, out := c.GetPolicyVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetRole = "GetRole" + +// GetRoleRequest generates a "aws/request.Request" representing the +// client's request for the GetRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetRole for more information on using the GetRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetRoleRequest method. +// req, resp := client.GetRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRole +func (c *IAM) GetRoleRequest(input *GetRoleInput) (req *request.Request, output *GetRoleOutput) { + op := &request.Operation{ + Name: opGetRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetRoleInput{} + } + + output = &GetRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetRole API operation for AWS Identity and Access Management. 
+// +// Retrieves information about the specified role, including the role's path, +// GUID, ARN, and the role's trust policy that grants permission to assume the +// role. For more information about roles, see Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). +// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRole +func (c *IAM) GetRole(input *GetRoleInput) (*GetRoleOutput, error) { + req, out := c.GetRoleRequest(input) + return out, req.Send() +} + +// GetRoleWithContext is the same as GetRole with the addition of +// the ability to pass a context and additional request options. +// +// See GetRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetRoleWithContext(ctx aws.Context, input *GetRoleInput, opts ...request.Option) (*GetRoleOutput, error) { + req, out := c.GetRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetRolePolicy = "GetRolePolicy" + +// GetRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetRolePolicy for more information on using the GetRolePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetRolePolicyRequest method. 
+// req, resp := client.GetRolePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRolePolicy +func (c *IAM) GetRolePolicyRequest(input *GetRolePolicyInput) (req *request.Request, output *GetRolePolicyOutput) { + op := &request.Operation{ + Name: opGetRolePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetRolePolicyInput{} + } + + output = &GetRolePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetRolePolicy API operation for AWS Identity and Access Management. +// +// Retrieves the specified inline policy document that is embedded with the +// specified IAM role. +// +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). +// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// +// An IAM role can also have managed policies attached to it. To retrieve a +// managed policy document that is attached to a role, use GetPolicy to determine +// the policy's default version, then use GetPolicyVersion to retrieve the policy +// document. +// +// For more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// For more information about roles, see Using Roles to Delegate Permissions +// and Federate Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetRolePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRolePolicy +func (c *IAM) GetRolePolicy(input *GetRolePolicyInput) (*GetRolePolicyOutput, error) { + req, out := c.GetRolePolicyRequest(input) + return out, req.Send() +} + +// GetRolePolicyWithContext is the same as GetRolePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetRolePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetRolePolicyWithContext(ctx aws.Context, input *GetRolePolicyInput, opts ...request.Option) (*GetRolePolicyOutput, error) { + req, out := c.GetRolePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetSAMLProvider = "GetSAMLProvider" + +// GetSAMLProviderRequest generates a "aws/request.Request" representing the +// client's request for the GetSAMLProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSAMLProvider for more information on using the GetSAMLProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSAMLProviderRequest method. +// req, resp := client.GetSAMLProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetSAMLProvider +func (c *IAM) GetSAMLProviderRequest(input *GetSAMLProviderInput) (req *request.Request, output *GetSAMLProviderOutput) { + op := &request.Operation{ + Name: opGetSAMLProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetSAMLProviderInput{} + } + + output = &GetSAMLProviderOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSAMLProvider API operation for AWS Identity and Access Management. +// +// Returns the SAML provider metadocument that was uploaded when the IAM SAML +// provider resource object was created or updated. +// +// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetSAMLProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetSAMLProvider +func (c *IAM) GetSAMLProvider(input *GetSAMLProviderInput) (*GetSAMLProviderOutput, error) { + req, out := c.GetSAMLProviderRequest(input) + return out, req.Send() +} + +// GetSAMLProviderWithContext is the same as GetSAMLProvider with the addition of +// the ability to pass a context and additional request options. +// +// See GetSAMLProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *IAM) GetSAMLProviderWithContext(ctx aws.Context, input *GetSAMLProviderInput, opts ...request.Option) (*GetSAMLProviderOutput, error) { + req, out := c.GetSAMLProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSSHPublicKey = "GetSSHPublicKey" + +// GetSSHPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the GetSSHPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSSHPublicKey for more information on using the GetSSHPublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSSHPublicKeyRequest method. +// req, resp := client.GetSSHPublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetSSHPublicKey +func (c *IAM) GetSSHPublicKeyRequest(input *GetSSHPublicKeyInput) (req *request.Request, output *GetSSHPublicKeyOutput) { + op := &request.Operation{ + Name: opGetSSHPublicKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetSSHPublicKeyInput{} + } + + output = &GetSSHPublicKeyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSSHPublicKey API operation for AWS Identity and Access Management. +// +// Retrieves the specified SSH public key, including metadata about the key. +// +// The SSH public key retrieved by this operation is used only for authenticating +// the associated IAM user to an AWS CodeCommit repository. For more information +// about using SSH keys to authenticate to an AWS CodeCommit repository, see +// Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) +// in the AWS CodeCommit User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetSSHPublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeUnrecognizedPublicKeyEncodingException "UnrecognizedPublicKeyEncoding" +// The request was rejected because the public key encoding format is unsupported +// or unrecognized. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetSSHPublicKey +func (c *IAM) GetSSHPublicKey(input *GetSSHPublicKeyInput) (*GetSSHPublicKeyOutput, error) { + req, out := c.GetSSHPublicKeyRequest(input) + return out, req.Send() +} + +// GetSSHPublicKeyWithContext is the same as GetSSHPublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See GetSSHPublicKey for details on how to use this API operation. 
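+//
+// A minimal usage sketch (assumes an existing, pre-configured *IAM client
+// named svc, a hypothetical user name and key ID, an existing ctx value, and
+// that the aws, fmt, and iam packages are imported):
+//
+//    out, err := svc.GetSSHPublicKeyWithContext(ctx, &iam.GetSSHPublicKeyInput{
+//        UserName:       aws.String("example-user"),    // hypothetical user
+//        SSHPublicKeyId: aws.String("APKAEXAMPLEKEYID"), // hypothetical key ID
+//        Encoding:       aws.String("SSH"),              // or "PEM"
+//    })
+//    if err == nil {
+//        fmt.Println(out)
+//    }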
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetSSHPublicKeyWithContext(ctx aws.Context, input *GetSSHPublicKeyInput, opts ...request.Option) (*GetSSHPublicKeyOutput, error) { + req, out := c.GetSSHPublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetServerCertificate = "GetServerCertificate" + +// GetServerCertificateRequest generates a "aws/request.Request" representing the +// client's request for the GetServerCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetServerCertificate for more information on using the GetServerCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetServerCertificateRequest method. +// req, resp := client.GetServerCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetServerCertificate +func (c *IAM) GetServerCertificateRequest(input *GetServerCertificateInput) (req *request.Request, output *GetServerCertificateOutput) { + op := &request.Operation{ + Name: opGetServerCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetServerCertificateInput{} + } + + output = &GetServerCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetServerCertificate API operation for AWS Identity and Access Management. +// +// Retrieves information about the specified server certificate stored in IAM. +// +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic includes a list of AWS services that can +// use the server certificates that you manage with IAM. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetServerCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetServerCertificate +func (c *IAM) GetServerCertificate(input *GetServerCertificateInput) (*GetServerCertificateOutput, error) { + req, out := c.GetServerCertificateRequest(input) + return out, req.Send() +} + +// GetServerCertificateWithContext is the same as GetServerCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See GetServerCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetServerCertificateWithContext(ctx aws.Context, input *GetServerCertificateInput, opts ...request.Option) (*GetServerCertificateOutput, error) { + req, out := c.GetServerCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetServiceLinkedRoleDeletionStatus = "GetServiceLinkedRoleDeletionStatus" + +// GetServiceLinkedRoleDeletionStatusRequest generates a "aws/request.Request" representing the +// client's request for the GetServiceLinkedRoleDeletionStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetServiceLinkedRoleDeletionStatus for more information on using the GetServiceLinkedRoleDeletionStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetServiceLinkedRoleDeletionStatusRequest method. +// req, resp := client.GetServiceLinkedRoleDeletionStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetServiceLinkedRoleDeletionStatus +func (c *IAM) GetServiceLinkedRoleDeletionStatusRequest(input *GetServiceLinkedRoleDeletionStatusInput) (req *request.Request, output *GetServiceLinkedRoleDeletionStatusOutput) { + op := &request.Operation{ + Name: opGetServiceLinkedRoleDeletionStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetServiceLinkedRoleDeletionStatusInput{} + } + + output = &GetServiceLinkedRoleDeletionStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetServiceLinkedRoleDeletionStatus API operation for AWS Identity and Access Management. +// +// Retrieves the status of your service-linked role deletion. After you use +// the DeleteServiceLinkedRole API operation to submit a service-linked role +// for deletion, you can use the DeletionTaskId parameter in GetServiceLinkedRoleDeletionStatus +// to check the status of the deletion. If the deletion fails, this operation +// returns the reason that it failed, if that information is returned by the +// service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
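+//
+// Illustrative sketch (not part of the generated SDK) of the runtime type
+// assertion described above. "svc" and "params" are assumed placeholders.
+//
+//    out, err := svc.GetServiceLinkedRoleDeletionStatus(params)
+//    if aerr, ok := err.(awserr.Error); ok {
+//        fmt.Println(aerr.Code(), aerr.Message())
+//    } else if err == nil {
+//        fmt.Println(aws.StringValue(out.Status))
+//    }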
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetServiceLinkedRoleDeletionStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetServiceLinkedRoleDeletionStatus +func (c *IAM) GetServiceLinkedRoleDeletionStatus(input *GetServiceLinkedRoleDeletionStatusInput) (*GetServiceLinkedRoleDeletionStatusOutput, error) { + req, out := c.GetServiceLinkedRoleDeletionStatusRequest(input) + return out, req.Send() +} + +// GetServiceLinkedRoleDeletionStatusWithContext is the same as GetServiceLinkedRoleDeletionStatus with the addition of +// the ability to pass a context and additional request options. +// +// See GetServiceLinkedRoleDeletionStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetServiceLinkedRoleDeletionStatusWithContext(ctx aws.Context, input *GetServiceLinkedRoleDeletionStatusInput, opts ...request.Option) (*GetServiceLinkedRoleDeletionStatusOutput, error) { + req, out := c.GetServiceLinkedRoleDeletionStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetUser = "GetUser" + +// GetUserRequest generates a "aws/request.Request" representing the +// client's request for the GetUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetUser for more information on using the GetUser +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetUserRequest method. +// req, resp := client.GetUserRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetUser +func (c *IAM) GetUserRequest(input *GetUserInput) (req *request.Request, output *GetUserOutput) { + op := &request.Operation{ + Name: opGetUser, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetUserInput{} + } + + output = &GetUserOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetUser API operation for AWS Identity and Access Management. +// +// Retrieves information about the specified IAM user, including the user's +// creation date, path, unique ID, and ARN. 
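+//
+// Illustrative sketch (not part of the generated SDK): looking up a user and
+// printing its ARN. "svc" and the user name are assumed placeholders.
+//
+//    out, err := svc.GetUser(&iam.GetUserInput{UserName: aws.String("Bob")})
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.User.Arn))
+//    }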
+// +// If you do not specify a user name, IAM determines the user name implicitly +// based on the AWS access key ID used to sign the request to this API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetUser for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetUser +func (c *IAM) GetUser(input *GetUserInput) (*GetUserOutput, error) { + req, out := c.GetUserRequest(input) + return out, req.Send() +} + +// GetUserWithContext is the same as GetUser with the addition of +// the ability to pass a context and additional request options. +// +// See GetUser for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetUserWithContext(ctx aws.Context, input *GetUserInput, opts ...request.Option) (*GetUserOutput, error) { + req, out := c.GetUserRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetUserPolicy = "GetUserPolicy" + +// GetUserPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetUserPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetUserPolicy for more information on using the GetUserPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetUserPolicyRequest method. +// req, resp := client.GetUserPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetUserPolicy +func (c *IAM) GetUserPolicyRequest(input *GetUserPolicyInput) (req *request.Request, output *GetUserPolicyOutput) { + op := &request.Operation{ + Name: opGetUserPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetUserPolicyInput{} + } + + output = &GetUserPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetUserPolicy API operation for AWS Identity and Access Management. +// +// Retrieves the specified inline policy document that is embedded in the specified +// IAM user. +// +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). 
+// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// +// An IAM user can also have managed policies attached to it. To retrieve a +// managed policy document that is attached to a user, use GetPolicy to determine +// the policy's default version, then use GetPolicyVersion to retrieve the policy +// document. +// +// For more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation GetUserPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetUserPolicy +func (c *IAM) GetUserPolicy(input *GetUserPolicyInput) (*GetUserPolicyOutput, error) { + req, out := c.GetUserPolicyRequest(input) + return out, req.Send() +} + +// GetUserPolicyWithContext is the same as GetUserPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetUserPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) GetUserPolicyWithContext(ctx aws.Context, input *GetUserPolicyInput, opts ...request.Option) (*GetUserPolicyOutput, error) { + req, out := c.GetUserPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListAccessKeys = "ListAccessKeys" + +// ListAccessKeysRequest generates a "aws/request.Request" representing the +// client's request for the ListAccessKeys operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAccessKeys for more information on using the ListAccessKeys +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAccessKeysRequest method. 
+// req, resp := client.ListAccessKeysRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAccessKeys +func (c *IAM) ListAccessKeysRequest(input *ListAccessKeysInput) (req *request.Request, output *ListAccessKeysOutput) { + op := &request.Operation{ + Name: opListAccessKeys, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListAccessKeysInput{} + } + + output = &ListAccessKeysOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAccessKeys API operation for AWS Identity and Access Management. +// +// Returns information about the access key IDs associated with the specified +// IAM user. If there are none, the operation returns an empty list. +// +// Although each user is limited to a small number of keys, you can still paginate +// the results using the MaxItems and Marker parameters. +// +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// To ensure the security of your AWS account, the secret access key is accessible +// only during key and user creation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListAccessKeys for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAccessKeys +func (c *IAM) ListAccessKeys(input *ListAccessKeysInput) (*ListAccessKeysOutput, error) { + req, out := c.ListAccessKeysRequest(input) + return out, req.Send() +} + +// ListAccessKeysWithContext is the same as ListAccessKeys with the addition of +// the ability to pass a context and additional request options. +// +// See ListAccessKeys for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAccessKeysWithContext(ctx aws.Context, input *ListAccessKeysInput, opts ...request.Option) (*ListAccessKeysOutput, error) { + req, out := c.ListAccessKeysRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListAccessKeysPages iterates over the pages of a ListAccessKeys operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. 
+// +// See ListAccessKeys method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListAccessKeys operation. +// pageNum := 0 +// err := client.ListAccessKeysPages(params, +// func(page *ListAccessKeysOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListAccessKeysPages(input *ListAccessKeysInput, fn func(*ListAccessKeysOutput, bool) bool) error { + return c.ListAccessKeysPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListAccessKeysPagesWithContext same as ListAccessKeysPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAccessKeysPagesWithContext(ctx aws.Context, input *ListAccessKeysInput, fn func(*ListAccessKeysOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListAccessKeysInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListAccessKeysRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListAccessKeysOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListAccountAliases = "ListAccountAliases" + +// ListAccountAliasesRequest generates a "aws/request.Request" representing the +// client's request for the ListAccountAliases operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAccountAliases for more information on using the ListAccountAliases +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAccountAliasesRequest method. +// req, resp := client.ListAccountAliasesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAccountAliases +func (c *IAM) ListAccountAliasesRequest(input *ListAccountAliasesInput) (req *request.Request, output *ListAccountAliasesOutput) { + op := &request.Operation{ + Name: opListAccountAliases, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListAccountAliasesInput{} + } + + output = &ListAccountAliasesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAccountAliases API operation for AWS Identity and Access Management. +// +// Lists the account alias associated with the AWS account (Note: you can have +// only one). 
For information about using an AWS account alias, see Using an +// Alias for Your AWS Account ID (http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListAccountAliases for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAccountAliases +func (c *IAM) ListAccountAliases(input *ListAccountAliasesInput) (*ListAccountAliasesOutput, error) { + req, out := c.ListAccountAliasesRequest(input) + return out, req.Send() +} + +// ListAccountAliasesWithContext is the same as ListAccountAliases with the addition of +// the ability to pass a context and additional request options. +// +// See ListAccountAliases for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAccountAliasesWithContext(ctx aws.Context, input *ListAccountAliasesInput, opts ...request.Option) (*ListAccountAliasesOutput, error) { + req, out := c.ListAccountAliasesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListAccountAliasesPages iterates over the pages of a ListAccountAliases operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListAccountAliases method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListAccountAliases operation. +// pageNum := 0 +// err := client.ListAccountAliasesPages(params, +// func(page *ListAccountAliasesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListAccountAliasesPages(input *ListAccountAliasesInput, fn func(*ListAccountAliasesOutput, bool) bool) error { + return c.ListAccountAliasesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListAccountAliasesPagesWithContext same as ListAccountAliasesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAccountAliasesPagesWithContext(ctx aws.Context, input *ListAccountAliasesInput, fn func(*ListAccountAliasesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListAccountAliasesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListAccountAliasesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListAccountAliasesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListAttachedGroupPolicies = "ListAttachedGroupPolicies" + +// ListAttachedGroupPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListAttachedGroupPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAttachedGroupPolicies for more information on using the ListAttachedGroupPolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAttachedGroupPoliciesRequest method. +// req, resp := client.ListAttachedGroupPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAttachedGroupPolicies +func (c *IAM) ListAttachedGroupPoliciesRequest(input *ListAttachedGroupPoliciesInput) (req *request.Request, output *ListAttachedGroupPoliciesOutput) { + op := &request.Operation{ + Name: opListAttachedGroupPolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListAttachedGroupPoliciesInput{} + } + + output = &ListAttachedGroupPoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAttachedGroupPolicies API operation for AWS Identity and Access Management. +// +// Lists all managed policies that are attached to the specified IAM group. +// +// An IAM group can also have inline policies embedded with it. To list the +// inline policies for a group, use the ListGroupPolicies API. For information +// about policies, see Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// You can paginate the results using the MaxItems and Marker parameters. You +// can use the PathPrefix parameter to limit the list of policies to only those +// matching the specified path prefix. If there are no policies attached to +// the specified group (or none that match the specified path prefix), the operation +// returns an empty list. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListAttachedGroupPolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAttachedGroupPolicies +func (c *IAM) ListAttachedGroupPolicies(input *ListAttachedGroupPoliciesInput) (*ListAttachedGroupPoliciesOutput, error) { + req, out := c.ListAttachedGroupPoliciesRequest(input) + return out, req.Send() +} + +// ListAttachedGroupPoliciesWithContext is the same as ListAttachedGroupPolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListAttachedGroupPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAttachedGroupPoliciesWithContext(ctx aws.Context, input *ListAttachedGroupPoliciesInput, opts ...request.Option) (*ListAttachedGroupPoliciesOutput, error) { + req, out := c.ListAttachedGroupPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListAttachedGroupPoliciesPages iterates over the pages of a ListAttachedGroupPolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListAttachedGroupPolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListAttachedGroupPolicies operation. +// pageNum := 0 +// err := client.ListAttachedGroupPoliciesPages(params, +// func(page *ListAttachedGroupPoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListAttachedGroupPoliciesPages(input *ListAttachedGroupPoliciesInput, fn func(*ListAttachedGroupPoliciesOutput, bool) bool) error { + return c.ListAttachedGroupPoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListAttachedGroupPoliciesPagesWithContext same as ListAttachedGroupPoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAttachedGroupPoliciesPagesWithContext(ctx aws.Context, input *ListAttachedGroupPoliciesInput, fn func(*ListAttachedGroupPoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListAttachedGroupPoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListAttachedGroupPoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListAttachedGroupPoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListAttachedRolePolicies = "ListAttachedRolePolicies" + +// ListAttachedRolePoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListAttachedRolePolicies operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAttachedRolePolicies for more information on using the ListAttachedRolePolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAttachedRolePoliciesRequest method. +// req, resp := client.ListAttachedRolePoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAttachedRolePolicies +func (c *IAM) ListAttachedRolePoliciesRequest(input *ListAttachedRolePoliciesInput) (req *request.Request, output *ListAttachedRolePoliciesOutput) { + op := &request.Operation{ + Name: opListAttachedRolePolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListAttachedRolePoliciesInput{} + } + + output = &ListAttachedRolePoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAttachedRolePolicies API operation for AWS Identity and Access Management. +// +// Lists all managed policies that are attached to the specified IAM role. +// +// An IAM role can also have inline policies embedded with it. To list the inline +// policies for a role, use the ListRolePolicies API. For information about +// policies, see Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// You can paginate the results using the MaxItems and Marker parameters. You +// can use the PathPrefix parameter to limit the list of policies to only those +// matching the specified path prefix. If there are no policies attached to +// the specified role (or none that match the specified path prefix), the operation +// returns an empty list. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListAttachedRolePolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAttachedRolePolicies +func (c *IAM) ListAttachedRolePolicies(input *ListAttachedRolePoliciesInput) (*ListAttachedRolePoliciesOutput, error) { + req, out := c.ListAttachedRolePoliciesRequest(input) + return out, req.Send() +} + +// ListAttachedRolePoliciesWithContext is the same as ListAttachedRolePolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListAttachedRolePolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAttachedRolePoliciesWithContext(ctx aws.Context, input *ListAttachedRolePoliciesInput, opts ...request.Option) (*ListAttachedRolePoliciesOutput, error) { + req, out := c.ListAttachedRolePoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListAttachedRolePoliciesPages iterates over the pages of a ListAttachedRolePolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListAttachedRolePolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListAttachedRolePolicies operation. +// pageNum := 0 +// err := client.ListAttachedRolePoliciesPages(params, +// func(page *ListAttachedRolePoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListAttachedRolePoliciesPages(input *ListAttachedRolePoliciesInput, fn func(*ListAttachedRolePoliciesOutput, bool) bool) error { + return c.ListAttachedRolePoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListAttachedRolePoliciesPagesWithContext same as ListAttachedRolePoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAttachedRolePoliciesPagesWithContext(ctx aws.Context, input *ListAttachedRolePoliciesInput, fn func(*ListAttachedRolePoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListAttachedRolePoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListAttachedRolePoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListAttachedRolePoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListAttachedUserPolicies = "ListAttachedUserPolicies" + +// ListAttachedUserPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListAttachedUserPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAttachedUserPolicies for more information on using the ListAttachedUserPolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAttachedUserPoliciesRequest method. +// req, resp := client.ListAttachedUserPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAttachedUserPolicies +func (c *IAM) ListAttachedUserPoliciesRequest(input *ListAttachedUserPoliciesInput) (req *request.Request, output *ListAttachedUserPoliciesOutput) { + op := &request.Operation{ + Name: opListAttachedUserPolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListAttachedUserPoliciesInput{} + } + + output = &ListAttachedUserPoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAttachedUserPolicies API operation for AWS Identity and Access Management. +// +// Lists all managed policies that are attached to the specified IAM user. +// +// An IAM user can also have inline policies embedded with it. To list the inline +// policies for a user, use the ListUserPolicies API. For information about +// policies, see Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// You can paginate the results using the MaxItems and Marker parameters. You +// can use the PathPrefix parameter to limit the list of policies to only those +// matching the specified path prefix. If there are no policies attached to +// the specified group (or none that match the specified path prefix), the operation +// returns an empty list. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListAttachedUserPolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
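+//
+// Illustrative sketch (not part of the generated SDK): limiting the results
+// with the PathPrefix parameter described above. "svc", the user name, and the
+// path are assumed placeholders.
+//
+//    out, err := svc.ListAttachedUserPolicies(&iam.ListAttachedUserPoliciesInput{
+//        UserName:   aws.String("Bob"),
+//        PathPrefix: aws.String("/division_abc/"),
+//    })
+//    if err == nil {
+//        for _, p := range out.AttachedPolicies {
+//            fmt.Println(aws.StringValue(p.PolicyArn))
+//        }
+//    }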
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListAttachedUserPolicies +func (c *IAM) ListAttachedUserPolicies(input *ListAttachedUserPoliciesInput) (*ListAttachedUserPoliciesOutput, error) { + req, out := c.ListAttachedUserPoliciesRequest(input) + return out, req.Send() +} + +// ListAttachedUserPoliciesWithContext is the same as ListAttachedUserPolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListAttachedUserPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAttachedUserPoliciesWithContext(ctx aws.Context, input *ListAttachedUserPoliciesInput, opts ...request.Option) (*ListAttachedUserPoliciesOutput, error) { + req, out := c.ListAttachedUserPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListAttachedUserPoliciesPages iterates over the pages of a ListAttachedUserPolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListAttachedUserPolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListAttachedUserPolicies operation. +// pageNum := 0 +// err := client.ListAttachedUserPoliciesPages(params, +// func(page *ListAttachedUserPoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListAttachedUserPoliciesPages(input *ListAttachedUserPoliciesInput, fn func(*ListAttachedUserPoliciesOutput, bool) bool) error { + return c.ListAttachedUserPoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListAttachedUserPoliciesPagesWithContext same as ListAttachedUserPoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListAttachedUserPoliciesPagesWithContext(ctx aws.Context, input *ListAttachedUserPoliciesInput, fn func(*ListAttachedUserPoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListAttachedUserPoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListAttachedUserPoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListAttachedUserPoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListEntitiesForPolicy = "ListEntitiesForPolicy" + +// ListEntitiesForPolicyRequest generates a "aws/request.Request" representing the +// client's request for the ListEntitiesForPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListEntitiesForPolicy for more information on using the ListEntitiesForPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListEntitiesForPolicyRequest method. +// req, resp := client.ListEntitiesForPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListEntitiesForPolicy +func (c *IAM) ListEntitiesForPolicyRequest(input *ListEntitiesForPolicyInput) (req *request.Request, output *ListEntitiesForPolicyOutput) { + op := &request.Operation{ + Name: opListEntitiesForPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListEntitiesForPolicyInput{} + } + + output = &ListEntitiesForPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListEntitiesForPolicy API operation for AWS Identity and Access Management. +// +// Lists all IAM users, groups, and roles that the specified managed policy +// is attached to. +// +// You can use the optional EntityFilter parameter to limit the results to a +// particular type of entity (users, groups, or roles). For example, to list +// only the roles that are attached to the specified policy, set EntityFilter +// to Role. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListEntitiesForPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListEntitiesForPolicy +func (c *IAM) ListEntitiesForPolicy(input *ListEntitiesForPolicyInput) (*ListEntitiesForPolicyOutput, error) { + req, out := c.ListEntitiesForPolicyRequest(input) + return out, req.Send() +} + +// ListEntitiesForPolicyWithContext is the same as ListEntitiesForPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See ListEntitiesForPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
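+//
+// Illustrative sketch (not part of the generated SDK): listing only the roles
+// a managed policy is attached to, under a cancellable context. "svc" and the
+// policy ARN are assumed placeholders.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := svc.ListEntitiesForPolicyWithContext(ctx, &iam.ListEntitiesForPolicyInput{
+//        PolicyArn:    aws.String("arn:aws:iam::123456789012:policy/MyPolicy"),
+//        EntityFilter: aws.String(iam.EntityTypeRole),
+//    })
+//    if err == nil {
+//        for _, r := range out.PolicyRoles {
+//            fmt.Println(aws.StringValue(r.RoleName))
+//        }
+//    }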
+func (c *IAM) ListEntitiesForPolicyWithContext(ctx aws.Context, input *ListEntitiesForPolicyInput, opts ...request.Option) (*ListEntitiesForPolicyOutput, error) { + req, out := c.ListEntitiesForPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListEntitiesForPolicyPages iterates over the pages of a ListEntitiesForPolicy operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListEntitiesForPolicy method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListEntitiesForPolicy operation. +// pageNum := 0 +// err := client.ListEntitiesForPolicyPages(params, +// func(page *ListEntitiesForPolicyOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListEntitiesForPolicyPages(input *ListEntitiesForPolicyInput, fn func(*ListEntitiesForPolicyOutput, bool) bool) error { + return c.ListEntitiesForPolicyPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListEntitiesForPolicyPagesWithContext same as ListEntitiesForPolicyPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListEntitiesForPolicyPagesWithContext(ctx aws.Context, input *ListEntitiesForPolicyInput, fn func(*ListEntitiesForPolicyOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListEntitiesForPolicyInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListEntitiesForPolicyRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListEntitiesForPolicyOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListGroupPolicies = "ListGroupPolicies" + +// ListGroupPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListGroupPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGroupPolicies for more information on using the ListGroupPolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGroupPoliciesRequest method. 
+// req, resp := client.ListGroupPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroupPolicies +func (c *IAM) ListGroupPoliciesRequest(input *ListGroupPoliciesInput) (req *request.Request, output *ListGroupPoliciesOutput) { + op := &request.Operation{ + Name: opListGroupPolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListGroupPoliciesInput{} + } + + output = &ListGroupPoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGroupPolicies API operation for AWS Identity and Access Management. +// +// Lists the names of the inline policies that are embedded in the specified +// IAM group. +// +// An IAM group can also have managed policies attached to it. To list the managed +// policies that are attached to a group, use ListAttachedGroupPolicies. For +// more information about policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// You can paginate the results using the MaxItems and Marker parameters. If +// there are no inline policies embedded with the specified group, the operation +// returns an empty list. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListGroupPolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroupPolicies +func (c *IAM) ListGroupPolicies(input *ListGroupPoliciesInput) (*ListGroupPoliciesOutput, error) { + req, out := c.ListGroupPoliciesRequest(input) + return out, req.Send() +} + +// ListGroupPoliciesWithContext is the same as ListGroupPolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListGroupPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListGroupPoliciesWithContext(ctx aws.Context, input *ListGroupPoliciesInput, opts ...request.Option) (*ListGroupPoliciesOutput, error) { + req, out := c.ListGroupPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListGroupPoliciesPages iterates over the pages of a ListGroupPolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. 
+// +// See ListGroupPolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListGroupPolicies operation. +// pageNum := 0 +// err := client.ListGroupPoliciesPages(params, +// func(page *ListGroupPoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListGroupPoliciesPages(input *ListGroupPoliciesInput, fn func(*ListGroupPoliciesOutput, bool) bool) error { + return c.ListGroupPoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListGroupPoliciesPagesWithContext same as ListGroupPoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListGroupPoliciesPagesWithContext(ctx aws.Context, input *ListGroupPoliciesInput, fn func(*ListGroupPoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListGroupPoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListGroupPoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListGroupPoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListGroups = "ListGroups" + +// ListGroupsRequest generates a "aws/request.Request" representing the +// client's request for the ListGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGroups for more information on using the ListGroups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGroupsRequest method. +// req, resp := client.ListGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroups +func (c *IAM) ListGroupsRequest(input *ListGroupsInput) (req *request.Request, output *ListGroupsOutput) { + op := &request.Operation{ + Name: opListGroups, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListGroupsInput{} + } + + output = &ListGroupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGroups API operation for AWS Identity and Access Management. +// +// Lists the IAM groups that have the specified path prefix. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListGroups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroups +func (c *IAM) ListGroups(input *ListGroupsInput) (*ListGroupsOutput, error) { + req, out := c.ListGroupsRequest(input) + return out, req.Send() +} + +// ListGroupsWithContext is the same as ListGroups with the addition of +// the ability to pass a context and additional request options. +// +// See ListGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListGroupsWithContext(ctx aws.Context, input *ListGroupsInput, opts ...request.Option) (*ListGroupsOutput, error) { + req, out := c.ListGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListGroupsPages iterates over the pages of a ListGroups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListGroups method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListGroups operation. +// pageNum := 0 +// err := client.ListGroupsPages(params, +// func(page *ListGroupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListGroupsPages(input *ListGroupsInput, fn func(*ListGroupsOutput, bool) bool) error { + return c.ListGroupsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListGroupsPagesWithContext same as ListGroupsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListGroupsPagesWithContext(ctx aws.Context, input *ListGroupsInput, fn func(*ListGroupsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListGroupsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListGroupsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListGroupsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListGroupsForUser = "ListGroupsForUser" + +// ListGroupsForUserRequest generates a "aws/request.Request" representing the +// client's request for the ListGroupsForUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGroupsForUser for more information on using the ListGroupsForUser +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGroupsForUserRequest method. +// req, resp := client.ListGroupsForUserRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroupsForUser +func (c *IAM) ListGroupsForUserRequest(input *ListGroupsForUserInput) (req *request.Request, output *ListGroupsForUserOutput) { + op := &request.Operation{ + Name: opListGroupsForUser, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListGroupsForUserInput{} + } + + output = &ListGroupsForUserOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGroupsForUser API operation for AWS Identity and Access Management. +// +// Lists the IAM groups that the specified IAM user belongs to. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListGroupsForUser for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroupsForUser +func (c *IAM) ListGroupsForUser(input *ListGroupsForUserInput) (*ListGroupsForUserOutput, error) { + req, out := c.ListGroupsForUserRequest(input) + return out, req.Send() +} + +// ListGroupsForUserWithContext is the same as ListGroupsForUser with the addition of +// the ability to pass a context and additional request options. +// +// See ListGroupsForUser for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListGroupsForUserWithContext(ctx aws.Context, input *ListGroupsForUserInput, opts ...request.Option) (*ListGroupsForUserOutput, error) { + req, out := c.ListGroupsForUserRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListGroupsForUserPages iterates over the pages of a ListGroupsForUser operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. 
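+//
+//    // Rough, hand-written sketch (not generated): gathers the names of every
+//    // group that a hypothetical user "jane" belongs to; assumes "client" is an
+//    // *IAM client and the aws helper package is imported.
+//    var groupNames []string
+//    err := client.ListGroupsForUserPages(&ListGroupsForUserInput{UserName: aws.String("jane")},
+//        func(page *ListGroupsForUserOutput, lastPage bool) bool {
+//            for _, group := range page.Groups {
+//                groupNames = append(groupNames, aws.StringValue(group.GroupName))
+//            }
+//            return true
+//        })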
+// +// See ListGroupsForUser method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListGroupsForUser operation. +// pageNum := 0 +// err := client.ListGroupsForUserPages(params, +// func(page *ListGroupsForUserOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListGroupsForUserPages(input *ListGroupsForUserInput, fn func(*ListGroupsForUserOutput, bool) bool) error { + return c.ListGroupsForUserPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListGroupsForUserPagesWithContext same as ListGroupsForUserPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListGroupsForUserPagesWithContext(ctx aws.Context, input *ListGroupsForUserInput, fn func(*ListGroupsForUserOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListGroupsForUserInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListGroupsForUserRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListGroupsForUserOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListInstanceProfiles = "ListInstanceProfiles" + +// ListInstanceProfilesRequest generates a "aws/request.Request" representing the +// client's request for the ListInstanceProfiles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListInstanceProfiles for more information on using the ListInstanceProfiles +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListInstanceProfilesRequest method. +// req, resp := client.ListInstanceProfilesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfiles +func (c *IAM) ListInstanceProfilesRequest(input *ListInstanceProfilesInput) (req *request.Request, output *ListInstanceProfilesOutput) { + op := &request.Operation{ + Name: opListInstanceProfiles, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListInstanceProfilesInput{} + } + + output = &ListInstanceProfilesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListInstanceProfiles API operation for AWS Identity and Access Management. +// +// Lists the instance profiles that have the specified path prefix. 
If there +// are none, the operation returns an empty list. For more information about +// instance profiles, go to About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListInstanceProfiles for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfiles +func (c *IAM) ListInstanceProfiles(input *ListInstanceProfilesInput) (*ListInstanceProfilesOutput, error) { + req, out := c.ListInstanceProfilesRequest(input) + return out, req.Send() +} + +// ListInstanceProfilesWithContext is the same as ListInstanceProfiles with the addition of +// the ability to pass a context and additional request options. +// +// See ListInstanceProfiles for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListInstanceProfilesWithContext(ctx aws.Context, input *ListInstanceProfilesInput, opts ...request.Option) (*ListInstanceProfilesOutput, error) { + req, out := c.ListInstanceProfilesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListInstanceProfilesPages iterates over the pages of a ListInstanceProfiles operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListInstanceProfiles method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListInstanceProfiles operation. +// pageNum := 0 +// err := client.ListInstanceProfilesPages(params, +// func(page *ListInstanceProfilesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListInstanceProfilesPages(input *ListInstanceProfilesInput, fn func(*ListInstanceProfilesOutput, bool) bool) error { + return c.ListInstanceProfilesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListInstanceProfilesPagesWithContext same as ListInstanceProfilesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
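+//
+//    // Rough, hand-written sketch (not generated): bounds the whole pagination
+//    // loop with a timeout; assumes the standard "context", "time" and "fmt"
+//    // packages and an *IAM client named "client".
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    err := client.ListInstanceProfilesPagesWithContext(ctx, &ListInstanceProfilesInput{},
+//        func(page *ListInstanceProfilesOutput, lastPage bool) bool {
+//            for _, profile := range page.InstanceProfiles {
+//                fmt.Println(aws.StringValue(profile.InstanceProfileName))
+//            }
+//            return true
+//        })
+//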
+func (c *IAM) ListInstanceProfilesPagesWithContext(ctx aws.Context, input *ListInstanceProfilesInput, fn func(*ListInstanceProfilesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListInstanceProfilesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListInstanceProfilesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListInstanceProfilesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListInstanceProfilesForRole = "ListInstanceProfilesForRole" + +// ListInstanceProfilesForRoleRequest generates a "aws/request.Request" representing the +// client's request for the ListInstanceProfilesForRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListInstanceProfilesForRole for more information on using the ListInstanceProfilesForRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListInstanceProfilesForRoleRequest method. +// req, resp := client.ListInstanceProfilesForRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfilesForRole +func (c *IAM) ListInstanceProfilesForRoleRequest(input *ListInstanceProfilesForRoleInput) (req *request.Request, output *ListInstanceProfilesForRoleOutput) { + op := &request.Operation{ + Name: opListInstanceProfilesForRole, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListInstanceProfilesForRoleInput{} + } + + output = &ListInstanceProfilesForRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListInstanceProfilesForRole API operation for AWS Identity and Access Management. +// +// Lists the instance profiles that have the specified associated IAM role. +// If there are none, the operation returns an empty list. For more information +// about instance profiles, go to About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListInstanceProfilesForRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfilesForRole +func (c *IAM) ListInstanceProfilesForRole(input *ListInstanceProfilesForRoleInput) (*ListInstanceProfilesForRoleOutput, error) { + req, out := c.ListInstanceProfilesForRoleRequest(input) + return out, req.Send() +} + +// ListInstanceProfilesForRoleWithContext is the same as ListInstanceProfilesForRole with the addition of +// the ability to pass a context and additional request options. +// +// See ListInstanceProfilesForRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListInstanceProfilesForRoleWithContext(ctx aws.Context, input *ListInstanceProfilesForRoleInput, opts ...request.Option) (*ListInstanceProfilesForRoleOutput, error) { + req, out := c.ListInstanceProfilesForRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListInstanceProfilesForRolePages iterates over the pages of a ListInstanceProfilesForRole operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListInstanceProfilesForRole method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListInstanceProfilesForRole operation. +// pageNum := 0 +// err := client.ListInstanceProfilesForRolePages(params, +// func(page *ListInstanceProfilesForRoleOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListInstanceProfilesForRolePages(input *ListInstanceProfilesForRoleInput, fn func(*ListInstanceProfilesForRoleOutput, bool) bool) error { + return c.ListInstanceProfilesForRolePagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListInstanceProfilesForRolePagesWithContext same as ListInstanceProfilesForRolePages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListInstanceProfilesForRolePagesWithContext(ctx aws.Context, input *ListInstanceProfilesForRoleInput, fn func(*ListInstanceProfilesForRoleOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListInstanceProfilesForRoleInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListInstanceProfilesForRoleRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
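+ // Each page is fetched with a request built from a copy of the input; the
+ // caller's context and request options are applied here so that cancelling
+ // ctx also stops any remaining page requests.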
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListInstanceProfilesForRoleOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListMFADevices = "ListMFADevices" + +// ListMFADevicesRequest generates a "aws/request.Request" representing the +// client's request for the ListMFADevices operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListMFADevices for more information on using the ListMFADevices +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListMFADevicesRequest method. +// req, resp := client.ListMFADevicesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListMFADevices +func (c *IAM) ListMFADevicesRequest(input *ListMFADevicesInput) (req *request.Request, output *ListMFADevicesOutput) { + op := &request.Operation{ + Name: opListMFADevices, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListMFADevicesInput{} + } + + output = &ListMFADevicesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListMFADevices API operation for AWS Identity and Access Management. +// +// Lists the MFA devices for an IAM user. If the request includes a IAM user +// name, then this operation lists all the MFA devices associated with the specified +// user. If you do not specify a user name, IAM determines the user name implicitly +// based on the AWS access key ID signing the request for this API. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListMFADevices for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListMFADevices +func (c *IAM) ListMFADevices(input *ListMFADevicesInput) (*ListMFADevicesOutput, error) { + req, out := c.ListMFADevicesRequest(input) + return out, req.Send() +} + +// ListMFADevicesWithContext is the same as ListMFADevices with the addition of +// the ability to pass a context and additional request options. +// +// See ListMFADevices for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListMFADevicesWithContext(ctx aws.Context, input *ListMFADevicesInput, opts ...request.Option) (*ListMFADevicesOutput, error) { + req, out := c.ListMFADevicesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListMFADevicesPages iterates over the pages of a ListMFADevices operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListMFADevices method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListMFADevices operation. +// pageNum := 0 +// err := client.ListMFADevicesPages(params, +// func(page *ListMFADevicesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListMFADevicesPages(input *ListMFADevicesInput, fn func(*ListMFADevicesOutput, bool) bool) error { + return c.ListMFADevicesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListMFADevicesPagesWithContext same as ListMFADevicesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListMFADevicesPagesWithContext(ctx aws.Context, input *ListMFADevicesInput, fn func(*ListMFADevicesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListMFADevicesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListMFADevicesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListMFADevicesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListOpenIDConnectProviders = "ListOpenIDConnectProviders" + +// ListOpenIDConnectProvidersRequest generates a "aws/request.Request" representing the +// client's request for the ListOpenIDConnectProviders operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListOpenIDConnectProviders for more information on using the ListOpenIDConnectProviders +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListOpenIDConnectProvidersRequest method. 
+// req, resp := client.ListOpenIDConnectProvidersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListOpenIDConnectProviders +func (c *IAM) ListOpenIDConnectProvidersRequest(input *ListOpenIDConnectProvidersInput) (req *request.Request, output *ListOpenIDConnectProvidersOutput) { + op := &request.Operation{ + Name: opListOpenIDConnectProviders, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListOpenIDConnectProvidersInput{} + } + + output = &ListOpenIDConnectProvidersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListOpenIDConnectProviders API operation for AWS Identity and Access Management. +// +// Lists information about the IAM OpenID Connect (OIDC) provider resource objects +// defined in the AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListOpenIDConnectProviders for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListOpenIDConnectProviders +func (c *IAM) ListOpenIDConnectProviders(input *ListOpenIDConnectProvidersInput) (*ListOpenIDConnectProvidersOutput, error) { + req, out := c.ListOpenIDConnectProvidersRequest(input) + return out, req.Send() +} + +// ListOpenIDConnectProvidersWithContext is the same as ListOpenIDConnectProviders with the addition of +// the ability to pass a context and additional request options. +// +// See ListOpenIDConnectProviders for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListOpenIDConnectProvidersWithContext(ctx aws.Context, input *ListOpenIDConnectProvidersInput, opts ...request.Option) (*ListOpenIDConnectProvidersOutput, error) { + req, out := c.ListOpenIDConnectProvidersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListPolicies = "ListPolicies" + +// ListPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListPolicies for more information on using the ListPolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListPoliciesRequest method. 
+// req, resp := client.ListPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicies +func (c *IAM) ListPoliciesRequest(input *ListPoliciesInput) (req *request.Request, output *ListPoliciesOutput) { + op := &request.Operation{ + Name: opListPolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListPoliciesInput{} + } + + output = &ListPoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListPolicies API operation for AWS Identity and Access Management. +// +// Lists all the managed policies that are available in your AWS account, including +// your own customer-defined managed policies and all AWS managed policies. +// +// You can filter the list of policies that is returned using the optional OnlyAttached, +// Scope, and PathPrefix parameters. For example, to list only the customer +// managed policies in your AWS account, set Scope to Local. To list only AWS +// managed policies, set Scope to AWS. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// For more information about managed policies, see Managed Policies and Inline +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListPolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicies +func (c *IAM) ListPolicies(input *ListPoliciesInput) (*ListPoliciesOutput, error) { + req, out := c.ListPoliciesRequest(input) + return out, req.Send() +} + +// ListPoliciesWithContext is the same as ListPolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListPoliciesWithContext(ctx aws.Context, input *ListPoliciesInput, opts ...request.Option) (*ListPoliciesOutput, error) { + req, out := c.ListPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListPoliciesPages iterates over the pages of a ListPolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListPolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListPolicies operation. 
+// pageNum := 0 +// err := client.ListPoliciesPages(params, +// func(page *ListPoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListPoliciesPages(input *ListPoliciesInput, fn func(*ListPoliciesOutput, bool) bool) error { + return c.ListPoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListPoliciesPagesWithContext same as ListPoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListPoliciesPagesWithContext(ctx aws.Context, input *ListPoliciesInput, fn func(*ListPoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListPoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListPoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListPoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListPolicyVersions = "ListPolicyVersions" + +// ListPolicyVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListPolicyVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListPolicyVersions for more information on using the ListPolicyVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListPolicyVersionsRequest method. +// req, resp := client.ListPolicyVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicyVersions +func (c *IAM) ListPolicyVersionsRequest(input *ListPolicyVersionsInput) (req *request.Request, output *ListPolicyVersionsOutput) { + op := &request.Operation{ + Name: opListPolicyVersions, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListPolicyVersionsInput{} + } + + output = &ListPolicyVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListPolicyVersions API operation for AWS Identity and Access Management. +// +// Lists information about the versions of the specified managed policy, including +// the version that is currently set as the policy's default version. +// +// For more information about managed policies, see Managed Policies and Inline +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListPolicyVersions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicyVersions +func (c *IAM) ListPolicyVersions(input *ListPolicyVersionsInput) (*ListPolicyVersionsOutput, error) { + req, out := c.ListPolicyVersionsRequest(input) + return out, req.Send() +} + +// ListPolicyVersionsWithContext is the same as ListPolicyVersions with the addition of +// the ability to pass a context and additional request options. +// +// See ListPolicyVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListPolicyVersionsWithContext(ctx aws.Context, input *ListPolicyVersionsInput, opts ...request.Option) (*ListPolicyVersionsOutput, error) { + req, out := c.ListPolicyVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListPolicyVersionsPages iterates over the pages of a ListPolicyVersions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListPolicyVersions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListPolicyVersions operation. +// pageNum := 0 +// err := client.ListPolicyVersionsPages(params, +// func(page *ListPolicyVersionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListPolicyVersionsPages(input *ListPolicyVersionsInput, fn func(*ListPolicyVersionsOutput, bool) bool) error { + return c.ListPolicyVersionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListPolicyVersionsPagesWithContext same as ListPolicyVersionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
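+//
+//    // Rough, hand-written sketch (not generated): walks the versions of a
+//    // hypothetical customer managed policy until the default version is found;
+//    // assumes "context" is imported and "client" is an *IAM client.
+//    var defaultVersionID string
+//    err := client.ListPolicyVersionsPagesWithContext(context.Background(),
+//        &ListPolicyVersionsInput{PolicyArn: aws.String("arn:aws:iam::123456789012:policy/example-policy")},
+//        func(page *ListPolicyVersionsOutput, lastPage bool) bool {
+//            for _, version := range page.Versions {
+//                if aws.BoolValue(version.IsDefaultVersion) {
+//                    defaultVersionID = aws.StringValue(version.VersionId)
+//                    return false // stop paginating once the default version is found
+//                }
+//            }
+//            return true
+//        })
+//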
+func (c *IAM) ListPolicyVersionsPagesWithContext(ctx aws.Context, input *ListPolicyVersionsInput, fn func(*ListPolicyVersionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListPolicyVersionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListPolicyVersionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListPolicyVersionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListRolePolicies = "ListRolePolicies" + +// ListRolePoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListRolePolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListRolePolicies for more information on using the ListRolePolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListRolePoliciesRequest method. +// req, resp := client.ListRolePoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRolePolicies +func (c *IAM) ListRolePoliciesRequest(input *ListRolePoliciesInput) (req *request.Request, output *ListRolePoliciesOutput) { + op := &request.Operation{ + Name: opListRolePolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListRolePoliciesInput{} + } + + output = &ListRolePoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListRolePolicies API operation for AWS Identity and Access Management. +// +// Lists the names of the inline policies that are embedded in the specified +// IAM role. +// +// An IAM role can also have managed policies attached to it. To list the managed +// policies that are attached to a role, use ListAttachedRolePolicies. For more +// information about policies, see Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// You can paginate the results using the MaxItems and Marker parameters. If +// there are no inline policies embedded with the specified role, the operation +// returns an empty list. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListRolePolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRolePolicies +func (c *IAM) ListRolePolicies(input *ListRolePoliciesInput) (*ListRolePoliciesOutput, error) { + req, out := c.ListRolePoliciesRequest(input) + return out, req.Send() +} + +// ListRolePoliciesWithContext is the same as ListRolePolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListRolePolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListRolePoliciesWithContext(ctx aws.Context, input *ListRolePoliciesInput, opts ...request.Option) (*ListRolePoliciesOutput, error) { + req, out := c.ListRolePoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListRolePoliciesPages iterates over the pages of a ListRolePolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListRolePolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListRolePolicies operation. +// pageNum := 0 +// err := client.ListRolePoliciesPages(params, +// func(page *ListRolePoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListRolePoliciesPages(input *ListRolePoliciesInput, fn func(*ListRolePoliciesOutput, bool) bool) error { + return c.ListRolePoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListRolePoliciesPagesWithContext same as ListRolePoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListRolePoliciesPagesWithContext(ctx aws.Context, input *ListRolePoliciesInput, fn func(*ListRolePoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListRolePoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListRolePoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListRolePoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListRoles = "ListRoles" + +// ListRolesRequest generates a "aws/request.Request" representing the +// client's request for the ListRoles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
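+//
+//    // Rough, hand-written sketch (not generated): builds the request first so a
+//    // deadline can be attached before sending; assumes "context", "time" and
+//    // "fmt" are imported and "client" is an *IAM client.
+//    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+//    defer cancel()
+//    req, resp := client.ListRolesRequest(&ListRolesInput{PathPrefix: aws.String("/service-role/")})
+//    req.SetContext(ctx)
+//    if err := req.Send(); err == nil {
+//        fmt.Println(len(resp.Roles), "roles found under /service-role/")
+//    }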
+// +// See ListRoles for more information on using the ListRoles +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListRolesRequest method. +// req, resp := client.ListRolesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoles +func (c *IAM) ListRolesRequest(input *ListRolesInput) (req *request.Request, output *ListRolesOutput) { + op := &request.Operation{ + Name: opListRoles, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListRolesInput{} + } + + output = &ListRolesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListRoles API operation for AWS Identity and Access Management. +// +// Lists the IAM roles that have the specified path prefix. If there are none, +// the operation returns an empty list. For more information about roles, go +// to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListRoles for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoles +func (c *IAM) ListRoles(input *ListRolesInput) (*ListRolesOutput, error) { + req, out := c.ListRolesRequest(input) + return out, req.Send() +} + +// ListRolesWithContext is the same as ListRoles with the addition of +// the ability to pass a context and additional request options. +// +// See ListRoles for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListRolesWithContext(ctx aws.Context, input *ListRolesInput, opts ...request.Option) (*ListRolesOutput, error) { + req, out := c.ListRolesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListRolesPages iterates over the pages of a ListRoles operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListRoles method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListRoles operation. 
+// pageNum := 0 +// err := client.ListRolesPages(params, +// func(page *ListRolesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListRolesPages(input *ListRolesInput, fn func(*ListRolesOutput, bool) bool) error { + return c.ListRolesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListRolesPagesWithContext same as ListRolesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListRolesPagesWithContext(ctx aws.Context, input *ListRolesInput, fn func(*ListRolesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListRolesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListRolesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListRolesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListSAMLProviders = "ListSAMLProviders" + +// ListSAMLProvidersRequest generates a "aws/request.Request" representing the +// client's request for the ListSAMLProviders operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListSAMLProviders for more information on using the ListSAMLProviders +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListSAMLProvidersRequest method. +// req, resp := client.ListSAMLProvidersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSAMLProviders +func (c *IAM) ListSAMLProvidersRequest(input *ListSAMLProvidersInput) (req *request.Request, output *ListSAMLProvidersOutput) { + op := &request.Operation{ + Name: opListSAMLProviders, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListSAMLProvidersInput{} + } + + output = &ListSAMLProvidersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListSAMLProviders API operation for AWS Identity and Access Management. +// +// Lists the SAML provider resource objects defined in IAM in the account. +// +// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListSAMLProviders for usage and error information. 
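+//
+//    // Rough, hand-written sketch (not generated): prints the ARN of every SAML
+//    // provider in the account; assumes "fmt" is imported and "client" is an
+//    // *IAM client. The operation takes no parameters and is not paginated.
+//    resp, err := client.ListSAMLProviders(&ListSAMLProvidersInput{})
+//    if err == nil {
+//        for _, provider := range resp.SAMLProviderList {
+//            fmt.Println(aws.StringValue(provider.Arn))
+//        }
+//    }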
+// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSAMLProviders +func (c *IAM) ListSAMLProviders(input *ListSAMLProvidersInput) (*ListSAMLProvidersOutput, error) { + req, out := c.ListSAMLProvidersRequest(input) + return out, req.Send() +} + +// ListSAMLProvidersWithContext is the same as ListSAMLProviders with the addition of +// the ability to pass a context and additional request options. +// +// See ListSAMLProviders for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListSAMLProvidersWithContext(ctx aws.Context, input *ListSAMLProvidersInput, opts ...request.Option) (*ListSAMLProvidersOutput, error) { + req, out := c.ListSAMLProvidersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListSSHPublicKeys = "ListSSHPublicKeys" + +// ListSSHPublicKeysRequest generates a "aws/request.Request" representing the +// client's request for the ListSSHPublicKeys operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListSSHPublicKeys for more information on using the ListSSHPublicKeys +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListSSHPublicKeysRequest method. +// req, resp := client.ListSSHPublicKeysRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSSHPublicKeys +func (c *IAM) ListSSHPublicKeysRequest(input *ListSSHPublicKeysInput) (req *request.Request, output *ListSSHPublicKeysOutput) { + op := &request.Operation{ + Name: opListSSHPublicKeys, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListSSHPublicKeysInput{} + } + + output = &ListSSHPublicKeysOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListSSHPublicKeys API operation for AWS Identity and Access Management. +// +// Returns information about the SSH public keys associated with the specified +// IAM user. If there are none, the operation returns an empty list. +// +// The SSH public keys returned by this operation are used only for authenticating +// the IAM user to an AWS CodeCommit repository. For more information about +// using SSH keys to authenticate to an AWS CodeCommit repository, see Set up +// AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) +// in the AWS CodeCommit User Guide. 
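+//
+//    // Rough, hand-written sketch (not generated): prints the IDs of the active
+//    // SSH public keys for a hypothetical user "jane"; assumes "fmt" is imported
+//    // and "client" is an *IAM client.
+//    resp, err := client.ListSSHPublicKeys(&ListSSHPublicKeysInput{UserName: aws.String("jane")})
+//    if err == nil {
+//        for _, key := range resp.SSHPublicKeys {
+//            if aws.StringValue(key.Status) == "Active" {
+//                fmt.Println(aws.StringValue(key.SSHPublicKeyId))
+//            }
+//        }
+//    }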
+// +// Although each user is limited to a small number of keys, you can still paginate +// the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListSSHPublicKeys for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSSHPublicKeys +func (c *IAM) ListSSHPublicKeys(input *ListSSHPublicKeysInput) (*ListSSHPublicKeysOutput, error) { + req, out := c.ListSSHPublicKeysRequest(input) + return out, req.Send() +} + +// ListSSHPublicKeysWithContext is the same as ListSSHPublicKeys with the addition of +// the ability to pass a context and additional request options. +// +// See ListSSHPublicKeys for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListSSHPublicKeysWithContext(ctx aws.Context, input *ListSSHPublicKeysInput, opts ...request.Option) (*ListSSHPublicKeysOutput, error) { + req, out := c.ListSSHPublicKeysRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListSSHPublicKeysPages iterates over the pages of a ListSSHPublicKeys operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListSSHPublicKeys method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListSSHPublicKeys operation. +// pageNum := 0 +// err := client.ListSSHPublicKeysPages(params, +// func(page *ListSSHPublicKeysOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListSSHPublicKeysPages(input *ListSSHPublicKeysInput, fn func(*ListSSHPublicKeysOutput, bool) bool) error { + return c.ListSSHPublicKeysPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListSSHPublicKeysPagesWithContext same as ListSSHPublicKeysPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListSSHPublicKeysPagesWithContext(ctx aws.Context, input *ListSSHPublicKeysInput, fn func(*ListSSHPublicKeysOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListSSHPublicKeysInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListSSHPublicKeysRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListSSHPublicKeysOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListServerCertificates = "ListServerCertificates" + +// ListServerCertificatesRequest generates a "aws/request.Request" representing the +// client's request for the ListServerCertificates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListServerCertificates for more information on using the ListServerCertificates +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListServerCertificatesRequest method. +// req, resp := client.ListServerCertificatesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServerCertificates +func (c *IAM) ListServerCertificatesRequest(input *ListServerCertificatesInput) (req *request.Request, output *ListServerCertificatesOutput) { + op := &request.Operation{ + Name: opListServerCertificates, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListServerCertificatesInput{} + } + + output = &ListServerCertificatesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListServerCertificates API operation for AWS Identity and Access Management. +// +// Lists the server certificates stored in IAM that have the specified path +// prefix. If none exist, the operation returns an empty list. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic also includes a list of AWS services that +// can use the server certificates that you manage with IAM. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListServerCertificates for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServerCertificates +func (c *IAM) ListServerCertificates(input *ListServerCertificatesInput) (*ListServerCertificatesOutput, error) { + req, out := c.ListServerCertificatesRequest(input) + return out, req.Send() +} + +// ListServerCertificatesWithContext is the same as ListServerCertificates with the addition of +// the ability to pass a context and additional request options. 
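+//
+// Illustrative sketch, not part of the generated SDK: calling the WithContext
+// variant with a timeout. It assumes an *iam.IAM client named "svc" and the
+// standard library "context" and "time" packages.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := svc.ListServerCertificatesWithContext(ctx, &iam.ListServerCertificatesInput{})
+//    if err == nil {
+//        fmt.Println(out.ServerCertificateMetadataList)
+//    }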
+// +// See ListServerCertificates for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListServerCertificatesWithContext(ctx aws.Context, input *ListServerCertificatesInput, opts ...request.Option) (*ListServerCertificatesOutput, error) { + req, out := c.ListServerCertificatesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListServerCertificatesPages iterates over the pages of a ListServerCertificates operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListServerCertificates method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListServerCertificates operation. +// pageNum := 0 +// err := client.ListServerCertificatesPages(params, +// func(page *ListServerCertificatesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListServerCertificatesPages(input *ListServerCertificatesInput, fn func(*ListServerCertificatesOutput, bool) bool) error { + return c.ListServerCertificatesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListServerCertificatesPagesWithContext same as ListServerCertificatesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListServerCertificatesPagesWithContext(ctx aws.Context, input *ListServerCertificatesInput, fn func(*ListServerCertificatesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListServerCertificatesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListServerCertificatesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListServerCertificatesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListServiceSpecificCredentials = "ListServiceSpecificCredentials" + +// ListServiceSpecificCredentialsRequest generates a "aws/request.Request" representing the +// client's request for the ListServiceSpecificCredentials operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListServiceSpecificCredentials for more information on using the ListServiceSpecificCredentials +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the ListServiceSpecificCredentialsRequest method. +// req, resp := client.ListServiceSpecificCredentialsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServiceSpecificCredentials +func (c *IAM) ListServiceSpecificCredentialsRequest(input *ListServiceSpecificCredentialsInput) (req *request.Request, output *ListServiceSpecificCredentialsOutput) { + op := &request.Operation{ + Name: opListServiceSpecificCredentials, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListServiceSpecificCredentialsInput{} + } + + output = &ListServiceSpecificCredentialsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListServiceSpecificCredentials API operation for AWS Identity and Access Management. +// +// Returns information about the service-specific credentials associated with +// the specified IAM user. If there are none, the operation returns an empty +// list. The service-specific credentials returned by this operation are used +// only for authenticating the IAM user to a specific service. For more information +// about using service-specific credentials to authenticate to an AWS service, +// see Set Up service-specific credentials (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) +// in the AWS CodeCommit User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListServiceSpecificCredentials for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceNotSupportedException "NotSupportedService" +// The specified service does not support service-specific credentials. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServiceSpecificCredentials +func (c *IAM) ListServiceSpecificCredentials(input *ListServiceSpecificCredentialsInput) (*ListServiceSpecificCredentialsOutput, error) { + req, out := c.ListServiceSpecificCredentialsRequest(input) + return out, req.Send() +} + +// ListServiceSpecificCredentialsWithContext is the same as ListServiceSpecificCredentials with the addition of +// the ability to pass a context and additional request options. +// +// See ListServiceSpecificCredentials for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListServiceSpecificCredentialsWithContext(ctx aws.Context, input *ListServiceSpecificCredentialsInput, opts ...request.Option) (*ListServiceSpecificCredentialsOutput, error) { + req, out := c.ListServiceSpecificCredentialsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opListSigningCertificates = "ListSigningCertificates" + +// ListSigningCertificatesRequest generates a "aws/request.Request" representing the +// client's request for the ListSigningCertificates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListSigningCertificates for more information on using the ListSigningCertificates +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListSigningCertificatesRequest method. +// req, resp := client.ListSigningCertificatesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSigningCertificates +func (c *IAM) ListSigningCertificatesRequest(input *ListSigningCertificatesInput) (req *request.Request, output *ListSigningCertificatesOutput) { + op := &request.Operation{ + Name: opListSigningCertificates, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListSigningCertificatesInput{} + } + + output = &ListSigningCertificatesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListSigningCertificates API operation for AWS Identity and Access Management. +// +// Returns information about the signing certificates associated with the specified +// IAM user. If there are none, the operation returns an empty list. +// +// Although each user is limited to a small number of signing certificates, +// you can still paginate the results using the MaxItems and Marker parameters. +// +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request for this API. Because +// this operation works for access keys under the AWS account, you can use this +// operation to manage AWS account root user credentials even if the AWS account +// has no associated users. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListSigningCertificates for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
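+//
+// Illustrative sketch, not part of the generated SDK: listing the signing
+// certificates of a placeholder user with an assumed *iam.IAM client "svc".
+//
+//    out, err := svc.ListSigningCertificates(&iam.ListSigningCertificatesInput{
+//        UserName: aws.String("example-user"),
+//    })
+//    if err == nil {
+//        fmt.Println(out.Certificates)
+//    }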
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSigningCertificates +func (c *IAM) ListSigningCertificates(input *ListSigningCertificatesInput) (*ListSigningCertificatesOutput, error) { + req, out := c.ListSigningCertificatesRequest(input) + return out, req.Send() +} + +// ListSigningCertificatesWithContext is the same as ListSigningCertificates with the addition of +// the ability to pass a context and additional request options. +// +// See ListSigningCertificates for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListSigningCertificatesWithContext(ctx aws.Context, input *ListSigningCertificatesInput, opts ...request.Option) (*ListSigningCertificatesOutput, error) { + req, out := c.ListSigningCertificatesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListSigningCertificatesPages iterates over the pages of a ListSigningCertificates operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListSigningCertificates method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListSigningCertificates operation. +// pageNum := 0 +// err := client.ListSigningCertificatesPages(params, +// func(page *ListSigningCertificatesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListSigningCertificatesPages(input *ListSigningCertificatesInput, fn func(*ListSigningCertificatesOutput, bool) bool) error { + return c.ListSigningCertificatesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListSigningCertificatesPagesWithContext same as ListSigningCertificatesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListSigningCertificatesPagesWithContext(ctx aws.Context, input *ListSigningCertificatesInput, fn func(*ListSigningCertificatesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListSigningCertificatesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListSigningCertificatesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListSigningCertificatesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListUserPolicies = "ListUserPolicies" + +// ListUserPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListUserPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See ListUserPolicies for more information on using the ListUserPolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListUserPoliciesRequest method. +// req, resp := client.ListUserPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUserPolicies +func (c *IAM) ListUserPoliciesRequest(input *ListUserPoliciesInput) (req *request.Request, output *ListUserPoliciesOutput) { + op := &request.Operation{ + Name: opListUserPolicies, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListUserPoliciesInput{} + } + + output = &ListUserPoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListUserPolicies API operation for AWS Identity and Access Management. +// +// Lists the names of the inline policies embedded in the specified IAM user. +// +// An IAM user can also have managed policies attached to it. To list the managed +// policies that are attached to a user, use ListAttachedUserPolicies. For more +// information about policies, see Managed Policies and Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// You can paginate the results using the MaxItems and Marker parameters. If +// there are no inline policies embedded with the specified user, the operation +// returns an empty list. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListUserPolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUserPolicies +func (c *IAM) ListUserPolicies(input *ListUserPoliciesInput) (*ListUserPoliciesOutput, error) { + req, out := c.ListUserPoliciesRequest(input) + return out, req.Send() +} + +// ListUserPoliciesWithContext is the same as ListUserPolicies with the addition of +// the ability to pass a context and additional request options. +// +// See ListUserPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
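+//
+// Illustrative sketch, not part of the generated SDK: it assumes an *iam.IAM
+// client named "svc", a context.Context named "ctx", and a placeholder user
+// name.
+//
+//    out, err := svc.ListUserPoliciesWithContext(ctx, &iam.ListUserPoliciesInput{
+//        UserName: aws.String("example-user"),
+//    })
+//    if err == nil {
+//        fmt.Println(out.PolicyNames) // inline policy names only
+//    }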
+func (c *IAM) ListUserPoliciesWithContext(ctx aws.Context, input *ListUserPoliciesInput, opts ...request.Option) (*ListUserPoliciesOutput, error) { + req, out := c.ListUserPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListUserPoliciesPages iterates over the pages of a ListUserPolicies operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListUserPolicies method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListUserPolicies operation. +// pageNum := 0 +// err := client.ListUserPoliciesPages(params, +// func(page *ListUserPoliciesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListUserPoliciesPages(input *ListUserPoliciesInput, fn func(*ListUserPoliciesOutput, bool) bool) error { + return c.ListUserPoliciesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListUserPoliciesPagesWithContext same as ListUserPoliciesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListUserPoliciesPagesWithContext(ctx aws.Context, input *ListUserPoliciesInput, fn func(*ListUserPoliciesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListUserPoliciesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListUserPoliciesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListUserPoliciesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListUsers = "ListUsers" + +// ListUsersRequest generates a "aws/request.Request" representing the +// client's request for the ListUsers operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListUsers for more information on using the ListUsers +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListUsersRequest method. 
+// req, resp := client.ListUsersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers +func (c *IAM) ListUsersRequest(input *ListUsersInput) (req *request.Request, output *ListUsersOutput) { + op := &request.Operation{ + Name: opListUsers, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListUsersInput{} + } + + output = &ListUsersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListUsers API operation for AWS Identity and Access Management. +// +// Lists the IAM users that have the specified path prefix. If no path prefix +// is specified, the operation returns all users in the AWS account. If there +// are none, the operation returns an empty list. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListUsers for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers +func (c *IAM) ListUsers(input *ListUsersInput) (*ListUsersOutput, error) { + req, out := c.ListUsersRequest(input) + return out, req.Send() +} + +// ListUsersWithContext is the same as ListUsers with the addition of +// the ability to pass a context and additional request options. +// +// See ListUsers for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListUsersWithContext(ctx aws.Context, input *ListUsersInput, opts ...request.Option) (*ListUsersOutput, error) { + req, out := c.ListUsersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListUsersPages iterates over the pages of a ListUsers operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListUsers method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListUsers operation. +// pageNum := 0 +// err := client.ListUsersPages(params, +// func(page *ListUsersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListUsersPages(input *ListUsersInput, fn func(*ListUsersOutput, bool) bool) error { + return c.ListUsersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListUsersPagesWithContext same as ListUsersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListUsersPagesWithContext(ctx aws.Context, input *ListUsersInput, fn func(*ListUsersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListUsersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListUsersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListUsersOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListVirtualMFADevices = "ListVirtualMFADevices" + +// ListVirtualMFADevicesRequest generates a "aws/request.Request" representing the +// client's request for the ListVirtualMFADevices operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListVirtualMFADevices for more information on using the ListVirtualMFADevices +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListVirtualMFADevicesRequest method. +// req, resp := client.ListVirtualMFADevicesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListVirtualMFADevices +func (c *IAM) ListVirtualMFADevicesRequest(input *ListVirtualMFADevicesInput) (req *request.Request, output *ListVirtualMFADevicesOutput) { + op := &request.Operation{ + Name: opListVirtualMFADevices, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListVirtualMFADevicesInput{} + } + + output = &ListVirtualMFADevicesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListVirtualMFADevices API operation for AWS Identity and Access Management. +// +// Lists the virtual MFA devices defined in the AWS account by assignment status. +// If you do not specify an assignment status, the operation returns a list +// of all virtual MFA devices. Assignment status can be Assigned, Unassigned, +// or Any. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListVirtualMFADevices for usage and error information. 
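+//
+// Illustrative sketch, not part of the generated SDK: listing only unassigned
+// virtual MFA devices with an assumed *iam.IAM client "svc".
+//
+//    out, err := svc.ListVirtualMFADevices(&iam.ListVirtualMFADevicesInput{
+//        AssignmentStatus: aws.String("Unassigned"),
+//    })
+//    if err == nil {
+//        fmt.Println(out.VirtualMFADevices)
+//    }
+//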
+// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListVirtualMFADevices +func (c *IAM) ListVirtualMFADevices(input *ListVirtualMFADevicesInput) (*ListVirtualMFADevicesOutput, error) { + req, out := c.ListVirtualMFADevicesRequest(input) + return out, req.Send() +} + +// ListVirtualMFADevicesWithContext is the same as ListVirtualMFADevices with the addition of +// the ability to pass a context and additional request options. +// +// See ListVirtualMFADevices for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListVirtualMFADevicesWithContext(ctx aws.Context, input *ListVirtualMFADevicesInput, opts ...request.Option) (*ListVirtualMFADevicesOutput, error) { + req, out := c.ListVirtualMFADevicesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListVirtualMFADevicesPages iterates over the pages of a ListVirtualMFADevices operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListVirtualMFADevices method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListVirtualMFADevices operation. +// pageNum := 0 +// err := client.ListVirtualMFADevicesPages(params, +// func(page *ListVirtualMFADevicesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) ListVirtualMFADevicesPages(input *ListVirtualMFADevicesInput, fn func(*ListVirtualMFADevicesOutput, bool) bool) error { + return c.ListVirtualMFADevicesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListVirtualMFADevicesPagesWithContext same as ListVirtualMFADevicesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListVirtualMFADevicesPagesWithContext(ctx aws.Context, input *ListVirtualMFADevicesInput, fn func(*ListVirtualMFADevicesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListVirtualMFADevicesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListVirtualMFADevicesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListVirtualMFADevicesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opPutGroupPolicy = "PutGroupPolicy" + +// PutGroupPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutGroupPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See PutGroupPolicy for more information on using the PutGroupPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutGroupPolicyRequest method. +// req, resp := client.PutGroupPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutGroupPolicy +func (c *IAM) PutGroupPolicyRequest(input *PutGroupPolicyInput) (req *request.Request, output *PutGroupPolicyOutput) { + op := &request.Operation{ + Name: opPutGroupPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutGroupPolicyInput{} + } + + output = &PutGroupPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutGroupPolicy API operation for AWS Identity and Access Management. +// +// Adds or updates an inline policy document that is embedded in the specified +// IAM group. +// +// A user can also have managed policies attached to it. To attach a managed +// policy to a group, use AttachGroupPolicy. To create a new managed policy, +// use CreatePolicy. For information about policies, see Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// For information about limits on the number of inline policies that you can +// embed in a group, see Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Because policy documents can be large, you should use POST rather than GET +// when calling PutGroupPolicy. For general information about using the Query +// API with IAM, go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation PutGroupPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
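+//
+// Illustrative sketch, not part of the generated SDK: embedding an inline
+// policy in a placeholder group with an assumed *iam.IAM client "svc";
+// "policyJSON" stands for a JSON policy document string defined elsewhere.
+//
+//    _, err := svc.PutGroupPolicy(&iam.PutGroupPolicyInput{
+//        GroupName:      aws.String("example-group"),
+//        PolicyName:     aws.String("example-inline-policy"),
+//        PolicyDocument: aws.String(policyJSON),
+//    })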
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutGroupPolicy +func (c *IAM) PutGroupPolicy(input *PutGroupPolicyInput) (*PutGroupPolicyOutput, error) { + req, out := c.PutGroupPolicyRequest(input) + return out, req.Send() +} + +// PutGroupPolicyWithContext is the same as PutGroupPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutGroupPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) PutGroupPolicyWithContext(ctx aws.Context, input *PutGroupPolicyInput, opts ...request.Option) (*PutGroupPolicyOutput, error) { + req, out := c.PutGroupPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutRolePolicy = "PutRolePolicy" + +// PutRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutRolePolicy for more information on using the PutRolePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutRolePolicyRequest method. +// req, resp := client.PutRolePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutRolePolicy +func (c *IAM) PutRolePolicyRequest(input *PutRolePolicyInput) (req *request.Request, output *PutRolePolicyOutput) { + op := &request.Operation{ + Name: opPutRolePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutRolePolicyInput{} + } + + output = &PutRolePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutRolePolicy API operation for AWS Identity and Access Management. +// +// Adds or updates an inline policy document that is embedded in the specified +// IAM role. +// +// When you embed an inline policy in a role, the inline policy is used as part +// of the role's access (permissions) policy. The role's trust policy is created +// at the same time as the role, using CreateRole. You can update a role's trust +// policy using UpdateAssumeRolePolicy. For more information about IAM roles, +// go to Using Roles to Delegate Permissions and Federate Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html). +// +// A role can also have a managed policy attached to it. To attach a managed +// policy to a role, use AttachRolePolicy. To create a new managed policy, use +// CreatePolicy. 
For information about policies, see Managed Policies and Inline +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// For information about limits on the number of inline policies that you can +// embed with a role, see Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Because policy documents can be large, you should use POST rather than GET +// when calling PutRolePolicy. For general information about using the Query +// API with IAM, go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation PutRolePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutRolePolicy +func (c *IAM) PutRolePolicy(input *PutRolePolicyInput) (*PutRolePolicyOutput, error) { + req, out := c.PutRolePolicyRequest(input) + return out, req.Send() +} + +// PutRolePolicyWithContext is the same as PutRolePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutRolePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) PutRolePolicyWithContext(ctx aws.Context, input *PutRolePolicyInput, opts ...request.Option) (*PutRolePolicyOutput, error) { + req, out := c.PutRolePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutUserPolicy = "PutUserPolicy" + +// PutUserPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutUserPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutUserPolicy for more information on using the PutUserPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutUserPolicyRequest method. +// req, resp := client.PutUserPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutUserPolicy +func (c *IAM) PutUserPolicyRequest(input *PutUserPolicyInput) (req *request.Request, output *PutUserPolicyOutput) { + op := &request.Operation{ + Name: opPutUserPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutUserPolicyInput{} + } + + output = &PutUserPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutUserPolicy API operation for AWS Identity and Access Management. +// +// Adds or updates an inline policy document that is embedded in the specified +// IAM user. +// +// An IAM user can also have a managed policy attached to it. To attach a managed +// policy to a user, use AttachUserPolicy. To create a new managed policy, use +// CreatePolicy. For information about policies, see Managed Policies and Inline +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// For information about limits on the number of inline policies that you can +// embed in a user, see Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) +// in the IAM User Guide. +// +// Because policy documents can be large, you should use POST rather than GET +// when calling PutUserPolicy. For general information about using the Query +// API with IAM, go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation PutUserPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
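+//
+// Illustrative sketch, not part of the generated SDK: the context-aware
+// variant, assuming an *iam.IAM client "svc", a context.Context "ctx", and a
+// JSON policy document string "policyJSON" defined elsewhere.
+//
+//    _, err := svc.PutUserPolicyWithContext(ctx, &iam.PutUserPolicyInput{
+//        UserName:       aws.String("example-user"),
+//        PolicyName:     aws.String("example-inline-policy"),
+//        PolicyDocument: aws.String(policyJSON),
+//    })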
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutUserPolicy +func (c *IAM) PutUserPolicy(input *PutUserPolicyInput) (*PutUserPolicyOutput, error) { + req, out := c.PutUserPolicyRequest(input) + return out, req.Send() +} + +// PutUserPolicyWithContext is the same as PutUserPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutUserPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) PutUserPolicyWithContext(ctx aws.Context, input *PutUserPolicyInput, opts ...request.Option) (*PutUserPolicyOutput, error) { + req, out := c.PutUserPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveClientIDFromOpenIDConnectProvider = "RemoveClientIDFromOpenIDConnectProvider" + +// RemoveClientIDFromOpenIDConnectProviderRequest generates a "aws/request.Request" representing the +// client's request for the RemoveClientIDFromOpenIDConnectProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveClientIDFromOpenIDConnectProvider for more information on using the RemoveClientIDFromOpenIDConnectProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveClientIDFromOpenIDConnectProviderRequest method. +// req, resp := client.RemoveClientIDFromOpenIDConnectProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveClientIDFromOpenIDConnectProvider +func (c *IAM) RemoveClientIDFromOpenIDConnectProviderRequest(input *RemoveClientIDFromOpenIDConnectProviderInput) (req *request.Request, output *RemoveClientIDFromOpenIDConnectProviderOutput) { + op := &request.Operation{ + Name: opRemoveClientIDFromOpenIDConnectProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RemoveClientIDFromOpenIDConnectProviderInput{} + } + + output = &RemoveClientIDFromOpenIDConnectProviderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// RemoveClientIDFromOpenIDConnectProvider API operation for AWS Identity and Access Management. +// +// Removes the specified client ID (also known as audience) from the list of +// client IDs registered for the specified IAM OpenID Connect (OIDC) provider +// resource object. +// +// This operation is idempotent; it does not fail or return an error if you +// try to remove a client ID that does not exist. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
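+//
+// Illustrative sketch, not part of the generated SDK: removing a placeholder
+// client ID from a placeholder OIDC provider with an assumed *iam.IAM client
+// "svc"; the ARN and audience values are stand-ins.
+//
+//    _, err := svc.RemoveClientIDFromOpenIDConnectProvider(&iam.RemoveClientIDFromOpenIDConnectProviderInput{
+//        ClientID:                 aws.String("example-audience"),
+//        OpenIDConnectProviderArn: aws.String("arn:aws:iam::123456789012:oidc-provider/example.com"),
+//    })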
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation RemoveClientIDFromOpenIDConnectProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveClientIDFromOpenIDConnectProvider +func (c *IAM) RemoveClientIDFromOpenIDConnectProvider(input *RemoveClientIDFromOpenIDConnectProviderInput) (*RemoveClientIDFromOpenIDConnectProviderOutput, error) { + req, out := c.RemoveClientIDFromOpenIDConnectProviderRequest(input) + return out, req.Send() +} + +// RemoveClientIDFromOpenIDConnectProviderWithContext is the same as RemoveClientIDFromOpenIDConnectProvider with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveClientIDFromOpenIDConnectProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) RemoveClientIDFromOpenIDConnectProviderWithContext(ctx aws.Context, input *RemoveClientIDFromOpenIDConnectProviderInput, opts ...request.Option) (*RemoveClientIDFromOpenIDConnectProviderOutput, error) { + req, out := c.RemoveClientIDFromOpenIDConnectProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveRoleFromInstanceProfile = "RemoveRoleFromInstanceProfile" + +// RemoveRoleFromInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the RemoveRoleFromInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveRoleFromInstanceProfile for more information on using the RemoveRoleFromInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveRoleFromInstanceProfileRequest method. 
+// req, resp := client.RemoveRoleFromInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveRoleFromInstanceProfile +func (c *IAM) RemoveRoleFromInstanceProfileRequest(input *RemoveRoleFromInstanceProfileInput) (req *request.Request, output *RemoveRoleFromInstanceProfileOutput) { + op := &request.Operation{ + Name: opRemoveRoleFromInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RemoveRoleFromInstanceProfileInput{} + } + + output = &RemoveRoleFromInstanceProfileOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// RemoveRoleFromInstanceProfile API operation for AWS Identity and Access Management. +// +// Removes the specified IAM role from the specified EC2 instance profile. +// +// Make sure that you do not have any Amazon EC2 instances running with the +// role you are about to remove from the instance profile. Removing a role from +// an instance profile that is associated with a running instance might break +// any applications running on the instance. +// +// For more information about IAM roles, go to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// For more information about instance profiles, go to About Instance Profiles +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation RemoveRoleFromInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveRoleFromInstanceProfile +func (c *IAM) RemoveRoleFromInstanceProfile(input *RemoveRoleFromInstanceProfileInput) (*RemoveRoleFromInstanceProfileOutput, error) { + req, out := c.RemoveRoleFromInstanceProfileRequest(input) + return out, req.Send() +} + +// RemoveRoleFromInstanceProfileWithContext is the same as RemoveRoleFromInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveRoleFromInstanceProfile for details on how to use this API operation. 
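+//
+// A minimal sketch of the context-aware form, assuming an *iam.IAM client
+// named svc and the standard library context package; the profile and role
+// names are placeholders:
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    _, err := svc.RemoveRoleFromInstanceProfileWithContext(ctx, &iam.RemoveRoleFromInstanceProfileInput{
+//        InstanceProfileName: aws.String("web-profile"),
+//        RoleName:            aws.String("web-role"),
+//    })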
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) RemoveRoleFromInstanceProfileWithContext(ctx aws.Context, input *RemoveRoleFromInstanceProfileInput, opts ...request.Option) (*RemoveRoleFromInstanceProfileOutput, error) { + req, out := c.RemoveRoleFromInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveUserFromGroup = "RemoveUserFromGroup" + +// RemoveUserFromGroupRequest generates a "aws/request.Request" representing the +// client's request for the RemoveUserFromGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveUserFromGroup for more information on using the RemoveUserFromGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveUserFromGroupRequest method. +// req, resp := client.RemoveUserFromGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveUserFromGroup +func (c *IAM) RemoveUserFromGroupRequest(input *RemoveUserFromGroupInput) (req *request.Request, output *RemoveUserFromGroupOutput) { + op := &request.Operation{ + Name: opRemoveUserFromGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RemoveUserFromGroupInput{} + } + + output = &RemoveUserFromGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// RemoveUserFromGroup API operation for AWS Identity and Access Management. +// +// Removes the specified user from the specified group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation RemoveUserFromGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
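+//
+// A minimal usage sketch, assuming an *iam.IAM client named svc; the group
+// and user names are placeholders:
+//
+//    _, err := svc.RemoveUserFromGroup(&iam.RemoveUserFromGroupInput{
+//        GroupName: aws.String("Developers"),
+//        UserName:  aws.String("jane"),
+//    })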
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveUserFromGroup +func (c *IAM) RemoveUserFromGroup(input *RemoveUserFromGroupInput) (*RemoveUserFromGroupOutput, error) { + req, out := c.RemoveUserFromGroupRequest(input) + return out, req.Send() +} + +// RemoveUserFromGroupWithContext is the same as RemoveUserFromGroup with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveUserFromGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) RemoveUserFromGroupWithContext(ctx aws.Context, input *RemoveUserFromGroupInput, opts ...request.Option) (*RemoveUserFromGroupOutput, error) { + req, out := c.RemoveUserFromGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opResetServiceSpecificCredential = "ResetServiceSpecificCredential" + +// ResetServiceSpecificCredentialRequest generates a "aws/request.Request" representing the +// client's request for the ResetServiceSpecificCredential operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ResetServiceSpecificCredential for more information on using the ResetServiceSpecificCredential +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ResetServiceSpecificCredentialRequest method. +// req, resp := client.ResetServiceSpecificCredentialRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ResetServiceSpecificCredential +func (c *IAM) ResetServiceSpecificCredentialRequest(input *ResetServiceSpecificCredentialInput) (req *request.Request, output *ResetServiceSpecificCredentialOutput) { + op := &request.Operation{ + Name: opResetServiceSpecificCredential, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ResetServiceSpecificCredentialInput{} + } + + output = &ResetServiceSpecificCredentialOutput{} + req = c.newRequest(op, input, output) + return +} + +// ResetServiceSpecificCredential API operation for AWS Identity and Access Management. +// +// Resets the password for a service-specific credential. The new password is +// AWS generated and cryptographically strong. It cannot be configured by the +// user. Resetting the password immediately invalidates the previous password +// associated with this user. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ResetServiceSpecificCredential for usage and error information. 
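+//
+// A minimal usage sketch, assuming an *iam.IAM client named svc; the
+// credential ID and user name are placeholders. The regenerated credential is
+// returned in the response:
+//
+//    out, err := svc.ResetServiceSpecificCredential(&iam.ResetServiceSpecificCredentialInput{
+//        ServiceSpecificCredentialId: aws.String("ACCA12345EXAMPLE"),
+//        UserName:                    aws.String("jane"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.ServiceSpecificCredential.ServiceUserName))
+//    }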
+// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ResetServiceSpecificCredential +func (c *IAM) ResetServiceSpecificCredential(input *ResetServiceSpecificCredentialInput) (*ResetServiceSpecificCredentialOutput, error) { + req, out := c.ResetServiceSpecificCredentialRequest(input) + return out, req.Send() +} + +// ResetServiceSpecificCredentialWithContext is the same as ResetServiceSpecificCredential with the addition of +// the ability to pass a context and additional request options. +// +// See ResetServiceSpecificCredential for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ResetServiceSpecificCredentialWithContext(ctx aws.Context, input *ResetServiceSpecificCredentialInput, opts ...request.Option) (*ResetServiceSpecificCredentialOutput, error) { + req, out := c.ResetServiceSpecificCredentialRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opResyncMFADevice = "ResyncMFADevice" + +// ResyncMFADeviceRequest generates a "aws/request.Request" representing the +// client's request for the ResyncMFADevice operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ResyncMFADevice for more information on using the ResyncMFADevice +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ResyncMFADeviceRequest method. +// req, resp := client.ResyncMFADeviceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ResyncMFADevice +func (c *IAM) ResyncMFADeviceRequest(input *ResyncMFADeviceInput) (req *request.Request, output *ResyncMFADeviceOutput) { + op := &request.Operation{ + Name: opResyncMFADevice, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ResyncMFADeviceInput{} + } + + output = &ResyncMFADeviceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// ResyncMFADevice API operation for AWS Identity and Access Management. +// +// Synchronizes the specified MFA device with its IAM resource object on the +// AWS servers. +// +// For more information about creating and working with virtual MFA devices, +// go to Using a Virtual MFA Device (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ResyncMFADevice for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidAuthenticationCodeException "InvalidAuthenticationCode" +// The request was rejected because the authentication code was not recognized. +// The error message describes the specific error. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ResyncMFADevice +func (c *IAM) ResyncMFADevice(input *ResyncMFADeviceInput) (*ResyncMFADeviceOutput, error) { + req, out := c.ResyncMFADeviceRequest(input) + return out, req.Send() +} + +// ResyncMFADeviceWithContext is the same as ResyncMFADevice with the addition of +// the ability to pass a context and additional request options. +// +// See ResyncMFADevice for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ResyncMFADeviceWithContext(ctx aws.Context, input *ResyncMFADeviceInput, opts ...request.Option) (*ResyncMFADeviceOutput, error) { + req, out := c.ResyncMFADeviceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetDefaultPolicyVersion = "SetDefaultPolicyVersion" + +// SetDefaultPolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the SetDefaultPolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetDefaultPolicyVersion for more information on using the SetDefaultPolicyVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetDefaultPolicyVersionRequest method. 
+// req, resp := client.SetDefaultPolicyVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SetDefaultPolicyVersion +func (c *IAM) SetDefaultPolicyVersionRequest(input *SetDefaultPolicyVersionInput) (req *request.Request, output *SetDefaultPolicyVersionOutput) { + op := &request.Operation{ + Name: opSetDefaultPolicyVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SetDefaultPolicyVersionInput{} + } + + output = &SetDefaultPolicyVersionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetDefaultPolicyVersion API operation for AWS Identity and Access Management. +// +// Sets the specified version of the specified policy as the policy's default +// (operative) version. +// +// This operation affects all users, groups, and roles that the policy is attached +// to. To list the users, groups, and roles that the policy is attached to, +// use the ListEntitiesForPolicy API. +// +// For information about managed policies, see Managed Policies and Inline Policies +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation SetDefaultPolicyVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SetDefaultPolicyVersion +func (c *IAM) SetDefaultPolicyVersion(input *SetDefaultPolicyVersionInput) (*SetDefaultPolicyVersionOutput, error) { + req, out := c.SetDefaultPolicyVersionRequest(input) + return out, req.Send() +} + +// SetDefaultPolicyVersionWithContext is the same as SetDefaultPolicyVersion with the addition of +// the ability to pass a context and additional request options. +// +// See SetDefaultPolicyVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
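+//
+// A minimal usage sketch, assuming an *iam.IAM client named svc and a
+// caller-supplied ctx; the policy ARN and version ID are placeholders:
+//
+//    _, err := svc.SetDefaultPolicyVersionWithContext(ctx, &iam.SetDefaultPolicyVersionInput{
+//        PolicyArn: aws.String("arn:aws:iam::123456789012:policy/example-policy"),
+//        VersionId: aws.String("v2"),
+//    })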
+func (c *IAM) SetDefaultPolicyVersionWithContext(ctx aws.Context, input *SetDefaultPolicyVersionInput, opts ...request.Option) (*SetDefaultPolicyVersionOutput, error) { + req, out := c.SetDefaultPolicyVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSimulateCustomPolicy = "SimulateCustomPolicy" + +// SimulateCustomPolicyRequest generates a "aws/request.Request" representing the +// client's request for the SimulateCustomPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SimulateCustomPolicy for more information on using the SimulateCustomPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SimulateCustomPolicyRequest method. +// req, resp := client.SimulateCustomPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SimulateCustomPolicy +func (c *IAM) SimulateCustomPolicyRequest(input *SimulateCustomPolicyInput) (req *request.Request, output *SimulatePolicyResponse) { + op := &request.Operation{ + Name: opSimulateCustomPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &SimulateCustomPolicyInput{} + } + + output = &SimulatePolicyResponse{} + req = c.newRequest(op, input, output) + return +} + +// SimulateCustomPolicy API operation for AWS Identity and Access Management. +// +// Simulate how a set of IAM policies and optionally a resource-based policy +// works with a list of API operations and AWS resources to determine the policies' +// effective permissions. The policies are provided as strings. +// +// The simulation does not perform the API operations; it only checks the authorization +// to determine if the simulated policies allow or deny the operations. +// +// If you want to simulate existing policies attached to an IAM user, group, +// or role, use SimulatePrincipalPolicy instead. +// +// Context keys are variables maintained by AWS and its services that provide +// details about the context of an API query request. You can use the Condition +// element of an IAM policy to evaluate context keys. To get the list of context +// keys that the policies require for correct simulation, use GetContextKeysForCustomPolicy. +// +// If the output is long, you can use MaxItems and Marker parameters to paginate +// the results. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation SimulateCustomPolicy for usage and error information. 
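+//
+// A minimal usage sketch, assuming an *iam.IAM client named svc and a policy
+// document held in the string policyJSON; the action names are placeholders:
+//
+//    out, err := svc.SimulateCustomPolicy(&iam.SimulateCustomPolicyInput{
+//        PolicyInputList: []*string{aws.String(policyJSON)},
+//        ActionNames:     aws.StringSlice([]string{"s3:GetObject", "s3:PutObject"}),
+//    })
+//    if err == nil {
+//        for _, r := range out.EvaluationResults {
+//            fmt.Println(aws.StringValue(r.EvalActionName), aws.StringValue(r.EvalDecision))
+//        }
+//    }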
+// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodePolicyEvaluationException "PolicyEvaluation" +// The request failed because a provided policy could not be successfully evaluated. +// An additional detailed message indicates the source of the failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SimulateCustomPolicy +func (c *IAM) SimulateCustomPolicy(input *SimulateCustomPolicyInput) (*SimulatePolicyResponse, error) { + req, out := c.SimulateCustomPolicyRequest(input) + return out, req.Send() +} + +// SimulateCustomPolicyWithContext is the same as SimulateCustomPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See SimulateCustomPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) SimulateCustomPolicyWithContext(ctx aws.Context, input *SimulateCustomPolicyInput, opts ...request.Option) (*SimulatePolicyResponse, error) { + req, out := c.SimulateCustomPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// SimulateCustomPolicyPages iterates over the pages of a SimulateCustomPolicy operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See SimulateCustomPolicy method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a SimulateCustomPolicy operation. +// pageNum := 0 +// err := client.SimulateCustomPolicyPages(params, +// func(page *SimulatePolicyResponse, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) SimulateCustomPolicyPages(input *SimulateCustomPolicyInput, fn func(*SimulatePolicyResponse, bool) bool) error { + return c.SimulateCustomPolicyPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// SimulateCustomPolicyPagesWithContext same as SimulateCustomPolicyPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) SimulateCustomPolicyPagesWithContext(ctx aws.Context, input *SimulateCustomPolicyInput, fn func(*SimulatePolicyResponse, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *SimulateCustomPolicyInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.SimulateCustomPolicyRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
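+   // hand the prepared request back to the paginator, which advances the
+   // Marker token between pages using the operation's Paginator config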
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*SimulatePolicyResponse), !p.HasNextPage()) + } + return p.Err() +} + +const opSimulatePrincipalPolicy = "SimulatePrincipalPolicy" + +// SimulatePrincipalPolicyRequest generates a "aws/request.Request" representing the +// client's request for the SimulatePrincipalPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SimulatePrincipalPolicy for more information on using the SimulatePrincipalPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SimulatePrincipalPolicyRequest method. +// req, resp := client.SimulatePrincipalPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SimulatePrincipalPolicy +func (c *IAM) SimulatePrincipalPolicyRequest(input *SimulatePrincipalPolicyInput) (req *request.Request, output *SimulatePolicyResponse) { + op := &request.Operation{ + Name: opSimulatePrincipalPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &SimulatePrincipalPolicyInput{} + } + + output = &SimulatePolicyResponse{} + req = c.newRequest(op, input, output) + return +} + +// SimulatePrincipalPolicy API operation for AWS Identity and Access Management. +// +// Simulate how a set of IAM policies attached to an IAM entity works with a +// list of API operations and AWS resources to determine the policies' effective +// permissions. The entity can be an IAM user, group, or role. If you specify +// a user, then the simulation also includes all of the policies that are attached +// to groups that the user belongs to. +// +// You can optionally include a list of one or more additional policies specified +// as strings to include in the simulation. If you want to simulate only policies +// specified as strings, use SimulateCustomPolicy instead. +// +// You can also optionally include one resource-based policy to be evaluated +// with each of the resources included in the simulation. +// +// The simulation does not perform the API operations, it only checks the authorization +// to determine if the simulated policies allow or deny the operations. +// +// Note: This API discloses information about the permissions granted to other +// users. If you do not want users to see other user's permissions, then consider +// allowing them to use SimulateCustomPolicy instead. +// +// Context keys are variables maintained by AWS and its services that provide +// details about the context of an API query request. You can use the Condition +// element of an IAM policy to evaluate context keys. To get the list of context +// keys that the policies require for correct simulation, use GetContextKeysForPrincipalPolicy. +// +// If the output is long, you can use the MaxItems and Marker parameters to +// paginate the results. 
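+//
+// A minimal usage sketch, assuming an *iam.IAM client named svc; the user ARN
+// and action name are placeholders:
+//
+//    out, err := svc.SimulatePrincipalPolicy(&iam.SimulatePrincipalPolicyInput{
+//        PolicySourceArn: aws.String("arn:aws:iam::123456789012:user/jane"),
+//        ActionNames:     aws.StringSlice([]string{"s3:ListBucket"}),
+//    })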
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation SimulatePrincipalPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodePolicyEvaluationException "PolicyEvaluation" +// The request failed because a provided policy could not be successfully evaluated. +// An additional detailed message indicates the source of the failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SimulatePrincipalPolicy +func (c *IAM) SimulatePrincipalPolicy(input *SimulatePrincipalPolicyInput) (*SimulatePolicyResponse, error) { + req, out := c.SimulatePrincipalPolicyRequest(input) + return out, req.Send() +} + +// SimulatePrincipalPolicyWithContext is the same as SimulatePrincipalPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See SimulatePrincipalPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) SimulatePrincipalPolicyWithContext(ctx aws.Context, input *SimulatePrincipalPolicyInput, opts ...request.Option) (*SimulatePolicyResponse, error) { + req, out := c.SimulatePrincipalPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// SimulatePrincipalPolicyPages iterates over the pages of a SimulatePrincipalPolicy operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See SimulatePrincipalPolicy method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a SimulatePrincipalPolicy operation. +// pageNum := 0 +// err := client.SimulatePrincipalPolicyPages(params, +// func(page *SimulatePolicyResponse, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *IAM) SimulatePrincipalPolicyPages(input *SimulatePrincipalPolicyInput, fn func(*SimulatePolicyResponse, bool) bool) error { + return c.SimulatePrincipalPolicyPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// SimulatePrincipalPolicyPagesWithContext same as SimulatePrincipalPolicyPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
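+//
+// A minimal sketch of page iteration, assuming an *iam.IAM client named svc,
+// a caller-supplied ctx, and a populated *iam.SimulatePrincipalPolicyInput
+// named params:
+//
+//    err := svc.SimulatePrincipalPolicyPagesWithContext(ctx, params,
+//        func(page *iam.SimulatePolicyResponse, lastPage bool) bool {
+//            for _, r := range page.EvaluationResults {
+//                fmt.Println(aws.StringValue(r.EvalActionName), aws.StringValue(r.EvalDecision))
+//            }
+//            return true // keep paging
+//        })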
+func (c *IAM) SimulatePrincipalPolicyPagesWithContext(ctx aws.Context, input *SimulatePrincipalPolicyInput, fn func(*SimulatePolicyResponse, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *SimulatePrincipalPolicyInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.SimulatePrincipalPolicyRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*SimulatePolicyResponse), !p.HasNextPage()) + } + return p.Err() +} + +const opUpdateAccessKey = "UpdateAccessKey" + +// UpdateAccessKeyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAccessKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAccessKey for more information on using the UpdateAccessKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAccessKeyRequest method. +// req, resp := client.UpdateAccessKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey +func (c *IAM) UpdateAccessKeyRequest(input *UpdateAccessKeyInput) (req *request.Request, output *UpdateAccessKeyOutput) { + op := &request.Operation{ + Name: opUpdateAccessKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateAccessKeyInput{} + } + + output = &UpdateAccessKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateAccessKey API operation for AWS Identity and Access Management. +// +// Changes the status of the specified access key from Active to Inactive, or +// vice versa. This operation can be used to disable a user's key as part of +// a key rotation workflow. +// +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// For information about rotating keys, see Managing Keys and Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateAccessKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey +func (c *IAM) UpdateAccessKey(input *UpdateAccessKeyInput) (*UpdateAccessKeyOutput, error) { + req, out := c.UpdateAccessKeyRequest(input) + return out, req.Send() +} + +// UpdateAccessKeyWithContext is the same as UpdateAccessKey with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAccessKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateAccessKeyWithContext(ctx aws.Context, input *UpdateAccessKeyInput, opts ...request.Option) (*UpdateAccessKeyOutput, error) { + req, out := c.UpdateAccessKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAccountPasswordPolicy = "UpdateAccountPasswordPolicy" + +// UpdateAccountPasswordPolicyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAccountPasswordPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAccountPasswordPolicy for more information on using the UpdateAccountPasswordPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAccountPasswordPolicyRequest method. +// req, resp := client.UpdateAccountPasswordPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy +func (c *IAM) UpdateAccountPasswordPolicyRequest(input *UpdateAccountPasswordPolicyInput) (req *request.Request, output *UpdateAccountPasswordPolicyOutput) { + op := &request.Operation{ + Name: opUpdateAccountPasswordPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateAccountPasswordPolicyInput{} + } + + output = &UpdateAccountPasswordPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateAccountPasswordPolicy API operation for AWS Identity and Access Management. +// +// Updates the password policy settings for the AWS account. +// +// This operation does not support partial updates. No parameters are required, +// but if you do not specify a parameter, that parameter's value reverts to +// its default value. 
See the Request Parameters section for each parameter's +// default value. Also note that some parameters do not allow the default parameter +// to be explicitly set. Instead, to invoke the default value, do not include +// that parameter when you invoke the operation. +// +// For more information about using a password policy, see Managing an IAM Password +// Policy (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateAccountPasswordPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy +func (c *IAM) UpdateAccountPasswordPolicy(input *UpdateAccountPasswordPolicyInput) (*UpdateAccountPasswordPolicyOutput, error) { + req, out := c.UpdateAccountPasswordPolicyRequest(input) + return out, req.Send() +} + +// UpdateAccountPasswordPolicyWithContext is the same as UpdateAccountPasswordPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAccountPasswordPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateAccountPasswordPolicyWithContext(ctx aws.Context, input *UpdateAccountPasswordPolicyInput, opts ...request.Option) (*UpdateAccountPasswordPolicyOutput, error) { + req, out := c.UpdateAccountPasswordPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAssumeRolePolicy = "UpdateAssumeRolePolicy" + +// UpdateAssumeRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAssumeRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAssumeRolePolicy for more information on using the UpdateAssumeRolePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAssumeRolePolicyRequest method. +// req, resp := client.UpdateAssumeRolePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy +func (c *IAM) UpdateAssumeRolePolicyRequest(input *UpdateAssumeRolePolicyInput) (req *request.Request, output *UpdateAssumeRolePolicyOutput) { + op := &request.Operation{ + Name: opUpdateAssumeRolePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateAssumeRolePolicyInput{} + } + + output = &UpdateAssumeRolePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateAssumeRolePolicy API operation for AWS Identity and Access Management. +// +// Updates the policy that grants an IAM entity permission to assume a role. +// This is typically referred to as the "role trust policy". For more information +// about roles, go to Using Roles to Delegate Permissions and Federate Identities +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateAssumeRolePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy +func (c *IAM) UpdateAssumeRolePolicy(input *UpdateAssumeRolePolicyInput) (*UpdateAssumeRolePolicyOutput, error) { + req, out := c.UpdateAssumeRolePolicyRequest(input) + return out, req.Send() +} + +// UpdateAssumeRolePolicyWithContext is the same as UpdateAssumeRolePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAssumeRolePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateAssumeRolePolicyWithContext(ctx aws.Context, input *UpdateAssumeRolePolicyInput, opts ...request.Option) (*UpdateAssumeRolePolicyOutput, error) { + req, out := c.UpdateAssumeRolePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGroup = "UpdateGroup" + +// UpdateGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGroup for more information on using the UpdateGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGroupRequest method. +// req, resp := client.UpdateGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup +func (c *IAM) UpdateGroupRequest(input *UpdateGroupInput) (req *request.Request, output *UpdateGroupOutput) { + op := &request.Operation{ + Name: opUpdateGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateGroupInput{} + } + + output = &UpdateGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateGroup API operation for AWS Identity and Access Management. +// +// Updates the name and/or the path of the specified IAM group. +// +// You should understand the implications of changing a group's path or name. +// For more information, see Renaming Users and Groups (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html) +// in the IAM User Guide. +// +// The person making the request (the principal), must have permission to change +// the role group with the old name and the new name. For example, to change +// the group named Managers to MGRs, the principal must have a policy that allows +// them to update both groups. If the principal has permission to update the +// Managers group, but not the MGRs group, then the update fails. For more information +// about permissions, see Access Management (http://docs.aws.amazon.com/IAM/latest/UserGuide/access.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. 
+// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup +func (c *IAM) UpdateGroup(input *UpdateGroupInput) (*UpdateGroupOutput, error) { + req, out := c.UpdateGroupRequest(input) + return out, req.Send() +} + +// UpdateGroupWithContext is the same as UpdateGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateGroupWithContext(ctx aws.Context, input *UpdateGroupInput, opts ...request.Option) (*UpdateGroupOutput, error) { + req, out := c.UpdateGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateLoginProfile = "UpdateLoginProfile" + +// UpdateLoginProfileRequest generates a "aws/request.Request" representing the +// client's request for the UpdateLoginProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateLoginProfile for more information on using the UpdateLoginProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateLoginProfileRequest method. +// req, resp := client.UpdateLoginProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile +func (c *IAM) UpdateLoginProfileRequest(input *UpdateLoginProfileInput) (req *request.Request, output *UpdateLoginProfileOutput) { + op := &request.Operation{ + Name: opUpdateLoginProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateLoginProfileInput{} + } + + output = &UpdateLoginProfileOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateLoginProfile API operation for AWS Identity and Access Management. +// +// Changes the password for the specified IAM user. +// +// IAM users can change their own passwords by calling ChangePassword. For more +// information about modifying passwords, see Managing Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
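+//
+// A minimal usage sketch, assuming an *iam.IAM client named svc; the user
+// name and password are placeholders:
+//
+//    _, err := svc.UpdateLoginProfile(&iam.UpdateLoginProfileInput{
+//        UserName:              aws.String("jane"),
+//        Password:              aws.String("new-example-password"),
+//        PasswordResetRequired: aws.Bool(true),
+//    })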
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateLoginProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodePasswordPolicyViolationException "PasswordPolicyViolation" +// The request was rejected because the provided password did not meet the requirements +// imposed by the account password policy. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile +func (c *IAM) UpdateLoginProfile(input *UpdateLoginProfileInput) (*UpdateLoginProfileOutput, error) { + req, out := c.UpdateLoginProfileRequest(input) + return out, req.Send() +} + +// UpdateLoginProfileWithContext is the same as UpdateLoginProfile with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateLoginProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateLoginProfileWithContext(ctx aws.Context, input *UpdateLoginProfileInput, opts ...request.Option) (*UpdateLoginProfileOutput, error) { + req, out := c.UpdateLoginProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateOpenIDConnectProviderThumbprint = "UpdateOpenIDConnectProviderThumbprint" + +// UpdateOpenIDConnectProviderThumbprintRequest generates a "aws/request.Request" representing the +// client's request for the UpdateOpenIDConnectProviderThumbprint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateOpenIDConnectProviderThumbprint for more information on using the UpdateOpenIDConnectProviderThumbprint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateOpenIDConnectProviderThumbprintRequest method. 
+// req, resp := client.UpdateOpenIDConnectProviderThumbprintRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateOpenIDConnectProviderThumbprint +func (c *IAM) UpdateOpenIDConnectProviderThumbprintRequest(input *UpdateOpenIDConnectProviderThumbprintInput) (req *request.Request, output *UpdateOpenIDConnectProviderThumbprintOutput) { + op := &request.Operation{ + Name: opUpdateOpenIDConnectProviderThumbprint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateOpenIDConnectProviderThumbprintInput{} + } + + output = &UpdateOpenIDConnectProviderThumbprintOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateOpenIDConnectProviderThumbprint API operation for AWS Identity and Access Management. +// +// Replaces the existing list of server certificate thumbprints associated with +// an OpenID Connect (OIDC) provider resource object with a new list of thumbprints. +// +// The list that you pass with this operation completely replaces the existing +// list of thumbprints. (The lists are not merged.) +// +// Typically, you need to update a thumbprint only when the identity provider's +// certificate changes, which occurs rarely. However, if the provider's certificate +// does change, any attempt to assume an IAM role that specifies the OIDC provider +// as a principal fails until the certificate thumbprint is updated. +// +// Because trust for the OIDC provider is derived from the provider's certificate +// and is validated by the thumbprint, it is best to limit access to the UpdateOpenIDConnectProviderThumbprint +// operation to highly privileged users. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateOpenIDConnectProviderThumbprint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateOpenIDConnectProviderThumbprint +func (c *IAM) UpdateOpenIDConnectProviderThumbprint(input *UpdateOpenIDConnectProviderThumbprintInput) (*UpdateOpenIDConnectProviderThumbprintOutput, error) { + req, out := c.UpdateOpenIDConnectProviderThumbprintRequest(input) + return out, req.Send() +} + +// UpdateOpenIDConnectProviderThumbprintWithContext is the same as UpdateOpenIDConnectProviderThumbprint with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateOpenIDConnectProviderThumbprint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateOpenIDConnectProviderThumbprintWithContext(ctx aws.Context, input *UpdateOpenIDConnectProviderThumbprintInput, opts ...request.Option) (*UpdateOpenIDConnectProviderThumbprintOutput, error) { + req, out := c.UpdateOpenIDConnectProviderThumbprintRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateRole = "UpdateRole" + +// UpdateRoleRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRole for more information on using the UpdateRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateRoleRequest method. +// req, resp := client.UpdateRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRole +func (c *IAM) UpdateRoleRequest(input *UpdateRoleInput) (req *request.Request, output *UpdateRoleOutput) { + op := &request.Operation{ + Name: opUpdateRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateRoleInput{} + } + + output = &UpdateRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateRole API operation for AWS Identity and Access Management. +// +// Updates the description or maximum session duration setting of a role. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRole +func (c *IAM) UpdateRole(input *UpdateRoleInput) (*UpdateRoleOutput, error) { + req, out := c.UpdateRoleRequest(input) + return out, req.Send() +} + +// UpdateRoleWithContext is the same as UpdateRole with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateRole for details on how to use this API operation. 
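+//
+//    // Illustrative sketch, not part of the generated reference: bounding the
+//    // UpdateRole call with a deadline via the context-aware variant. The
+//    // client ("svc", an *iam.IAM) and the role name are assumed values.
+//    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+//    defer cancel()
+//    _, err := svc.UpdateRoleWithContext(ctx, &iam.UpdateRoleInput{
+//        RoleName:           aws.String("deploy-role"),
+//        MaxSessionDuration: aws.Int64(7200),
+//    })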
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateRoleWithContext(ctx aws.Context, input *UpdateRoleInput, opts ...request.Option) (*UpdateRoleOutput, error) { + req, out := c.UpdateRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateRoleDescription = "UpdateRoleDescription" + +// UpdateRoleDescriptionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRoleDescription operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRoleDescription for more information on using the UpdateRoleDescription +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateRoleDescriptionRequest method. +// req, resp := client.UpdateRoleDescriptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRoleDescription +func (c *IAM) UpdateRoleDescriptionRequest(input *UpdateRoleDescriptionInput) (req *request.Request, output *UpdateRoleDescriptionOutput) { + op := &request.Operation{ + Name: opUpdateRoleDescription, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateRoleDescriptionInput{} + } + + output = &UpdateRoleDescriptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateRoleDescription API operation for AWS Identity and Access Management. +// +// Use instead. +// +// Modifies only the description of a role. This operation performs the same +// function as the Description parameter in the UpdateRole operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateRoleDescription for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
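+//
+//    // Illustrative sketch, not part of the generated reference: because this
+//    // operation performs the same function as the Description parameter of
+//    // UpdateRole, the same change can be made through UpdateRole. The client
+//    // ("svc", an *iam.IAM) and the values shown are assumed.
+//    _, err := svc.UpdateRole(&iam.UpdateRoleInput{
+//        RoleName:    aws.String("deploy-role"),
+//        Description: aws.String("Role assumed by the deployment pipeline"),
+//    })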
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRoleDescription +func (c *IAM) UpdateRoleDescription(input *UpdateRoleDescriptionInput) (*UpdateRoleDescriptionOutput, error) { + req, out := c.UpdateRoleDescriptionRequest(input) + return out, req.Send() +} + +// UpdateRoleDescriptionWithContext is the same as UpdateRoleDescription with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateRoleDescription for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateRoleDescriptionWithContext(ctx aws.Context, input *UpdateRoleDescriptionInput, opts ...request.Option) (*UpdateRoleDescriptionOutput, error) { + req, out := c.UpdateRoleDescriptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSAMLProvider = "UpdateSAMLProvider" + +// UpdateSAMLProviderRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSAMLProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSAMLProvider for more information on using the UpdateSAMLProvider +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSAMLProviderRequest method. +// req, resp := client.UpdateSAMLProviderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider +func (c *IAM) UpdateSAMLProviderRequest(input *UpdateSAMLProviderInput) (req *request.Request, output *UpdateSAMLProviderOutput) { + op := &request.Operation{ + Name: opUpdateSAMLProvider, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSAMLProviderInput{} + } + + output = &UpdateSAMLProviderOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSAMLProvider API operation for AWS Identity and Access Management. +// +// Updates the metadata document for an existing SAML provider resource object. +// +// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateSAMLProvider for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. 
+// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider +func (c *IAM) UpdateSAMLProvider(input *UpdateSAMLProviderInput) (*UpdateSAMLProviderOutput, error) { + req, out := c.UpdateSAMLProviderRequest(input) + return out, req.Send() +} + +// UpdateSAMLProviderWithContext is the same as UpdateSAMLProvider with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSAMLProvider for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateSAMLProviderWithContext(ctx aws.Context, input *UpdateSAMLProviderInput, opts ...request.Option) (*UpdateSAMLProviderOutput, error) { + req, out := c.UpdateSAMLProviderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSSHPublicKey = "UpdateSSHPublicKey" + +// UpdateSSHPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSSHPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSSHPublicKey for more information on using the UpdateSSHPublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSSHPublicKeyRequest method. +// req, resp := client.UpdateSSHPublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSSHPublicKey +func (c *IAM) UpdateSSHPublicKeyRequest(input *UpdateSSHPublicKeyInput) (req *request.Request, output *UpdateSSHPublicKeyOutput) { + op := &request.Operation{ + Name: opUpdateSSHPublicKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSSHPublicKeyInput{} + } + + output = &UpdateSSHPublicKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateSSHPublicKey API operation for AWS Identity and Access Management. +// +// Sets the status of an IAM user's SSH public key to active or inactive. SSH +// public keys that are inactive cannot be used for authentication. This operation +// can be used to disable a user's SSH public key as part of a key rotation +// work flow. 
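+//
+//    // Illustrative sketch, not part of the generated reference: deactivating
+//    // a user's SSH key as part of rotation. The client ("svc", an *iam.IAM),
+//    // user name, and key ID are assumed values.
+//    _, err := svc.UpdateSSHPublicKey(&iam.UpdateSSHPublicKeyInput{
+//        UserName:       aws.String("jane"),
+//        SSHPublicKeyId: aws.String("APKAEXAMPLEEXAMPLE"),
+//        Status:         aws.String(iam.StatusTypeInactive),
+//    })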
+// +// The SSH public key affected by this operation is used only for authenticating +// the associated IAM user to an AWS CodeCommit repository. For more information +// about using SSH keys to authenticate to an AWS CodeCommit repository, see +// Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) +// in the AWS CodeCommit User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateSSHPublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSSHPublicKey +func (c *IAM) UpdateSSHPublicKey(input *UpdateSSHPublicKeyInput) (*UpdateSSHPublicKeyOutput, error) { + req, out := c.UpdateSSHPublicKeyRequest(input) + return out, req.Send() +} + +// UpdateSSHPublicKeyWithContext is the same as UpdateSSHPublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSSHPublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateSSHPublicKeyWithContext(ctx aws.Context, input *UpdateSSHPublicKeyInput, opts ...request.Option) (*UpdateSSHPublicKeyOutput, error) { + req, out := c.UpdateSSHPublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateServerCertificate = "UpdateServerCertificate" + +// UpdateServerCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateServerCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateServerCertificate for more information on using the UpdateServerCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateServerCertificateRequest method. 
+// req, resp := client.UpdateServerCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate +func (c *IAM) UpdateServerCertificateRequest(input *UpdateServerCertificateInput) (req *request.Request, output *UpdateServerCertificateOutput) { + op := &request.Operation{ + Name: opUpdateServerCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateServerCertificateInput{} + } + + output = &UpdateServerCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateServerCertificate API operation for AWS Identity and Access Management. +// +// Updates the name and/or the path of the specified server certificate stored +// in IAM. +// +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic also includes a list of AWS services that +// can use the server certificates that you manage with IAM. +// +// You should understand the implications of changing a server certificate's +// path or name. For more information, see Renaming a Server Certificate (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs_manage.html#RenamingServerCerts) +// in the IAM User Guide. +// +// The person making the request (the principal), must have permission to change +// the server certificate with the old name and the new name. For example, to +// change the certificate named ProductionCert to ProdCert, the principal must +// have a policy that allows them to update both certificates. If the principal +// has permission to update the ProductionCert group, but not the ProdCert certificate, +// then the update fails. For more information about permissions, see Access +// Management (http://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateServerCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
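+//
+//    // Illustrative sketch, not part of the generated reference: renaming the
+//    // certificate from the example above. The client ("svc", an *iam.IAM) is
+//    // assumed.
+//    _, err := svc.UpdateServerCertificate(&iam.UpdateServerCertificateInput{
+//        ServerCertificateName:    aws.String("ProductionCert"),
+//        NewServerCertificateName: aws.String("ProdCert"),
+//    })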
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate +func (c *IAM) UpdateServerCertificate(input *UpdateServerCertificateInput) (*UpdateServerCertificateOutput, error) { + req, out := c.UpdateServerCertificateRequest(input) + return out, req.Send() +} + +// UpdateServerCertificateWithContext is the same as UpdateServerCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateServerCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateServerCertificateWithContext(ctx aws.Context, input *UpdateServerCertificateInput, opts ...request.Option) (*UpdateServerCertificateOutput, error) { + req, out := c.UpdateServerCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateServiceSpecificCredential = "UpdateServiceSpecificCredential" + +// UpdateServiceSpecificCredentialRequest generates a "aws/request.Request" representing the +// client's request for the UpdateServiceSpecificCredential operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateServiceSpecificCredential for more information on using the UpdateServiceSpecificCredential +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateServiceSpecificCredentialRequest method. +// req, resp := client.UpdateServiceSpecificCredentialRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServiceSpecificCredential +func (c *IAM) UpdateServiceSpecificCredentialRequest(input *UpdateServiceSpecificCredentialInput) (req *request.Request, output *UpdateServiceSpecificCredentialOutput) { + op := &request.Operation{ + Name: opUpdateServiceSpecificCredential, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateServiceSpecificCredentialInput{} + } + + output = &UpdateServiceSpecificCredentialOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateServiceSpecificCredential API operation for AWS Identity and Access Management. +// +// Sets the status of a service-specific credential to Active or Inactive. Service-specific +// credentials that are inactive cannot be used for authentication to the service. +// This operation can be used to disable a user’s service-specific credential +// as part of a credential rotation work flow. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
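+//
+//    // Illustrative sketch, not part of the generated reference: disabling a
+//    // service-specific credential during rotation. The client ("svc", an
+//    // *iam.IAM), user name, and credential ID are assumed values.
+//    _, err := svc.UpdateServiceSpecificCredential(&iam.UpdateServiceSpecificCredentialInput{
+//        UserName:                    aws.String("jane"),
+//        ServiceSpecificCredentialId: aws.String("ACCAEXAMPLEEXAMPLE"),
+//        Status:                      aws.String(iam.StatusTypeInactive),
+//    })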
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateServiceSpecificCredential for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServiceSpecificCredential +func (c *IAM) UpdateServiceSpecificCredential(input *UpdateServiceSpecificCredentialInput) (*UpdateServiceSpecificCredentialOutput, error) { + req, out := c.UpdateServiceSpecificCredentialRequest(input) + return out, req.Send() +} + +// UpdateServiceSpecificCredentialWithContext is the same as UpdateServiceSpecificCredential with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateServiceSpecificCredential for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateServiceSpecificCredentialWithContext(ctx aws.Context, input *UpdateServiceSpecificCredentialInput, opts ...request.Option) (*UpdateServiceSpecificCredentialOutput, error) { + req, out := c.UpdateServiceSpecificCredentialRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSigningCertificate = "UpdateSigningCertificate" + +// UpdateSigningCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSigningCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSigningCertificate for more information on using the UpdateSigningCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSigningCertificateRequest method. +// req, resp := client.UpdateSigningCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate +func (c *IAM) UpdateSigningCertificateRequest(input *UpdateSigningCertificateInput) (req *request.Request, output *UpdateSigningCertificateOutput) { + op := &request.Operation{ + Name: opUpdateSigningCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSigningCertificateInput{} + } + + output = &UpdateSigningCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateSigningCertificate API operation for AWS Identity and Access Management. +// +// Changes the status of the specified user signing certificate from active +// to disabled, or vice versa. 
This operation can be used to disable an IAM +// user's signing certificate as part of a certificate rotation work flow. +// +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateSigningCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate +func (c *IAM) UpdateSigningCertificate(input *UpdateSigningCertificateInput) (*UpdateSigningCertificateOutput, error) { + req, out := c.UpdateSigningCertificateRequest(input) + return out, req.Send() +} + +// UpdateSigningCertificateWithContext is the same as UpdateSigningCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSigningCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateSigningCertificateWithContext(ctx aws.Context, input *UpdateSigningCertificateInput, opts ...request.Option) (*UpdateSigningCertificateOutput, error) { + req, out := c.UpdateSigningCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateUser = "UpdateUser" + +// UpdateUserRequest generates a "aws/request.Request" representing the +// client's request for the UpdateUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateUser for more information on using the UpdateUser +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateUserRequest method. 
+// req, resp := client.UpdateUserRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser +func (c *IAM) UpdateUserRequest(input *UpdateUserInput) (req *request.Request, output *UpdateUserOutput) { + op := &request.Operation{ + Name: opUpdateUser, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateUserInput{} + } + + output = &UpdateUserOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateUser API operation for AWS Identity and Access Management. +// +// Updates the name and/or the path of the specified IAM user. +// +// You should understand the implications of changing an IAM user's path or +// name. For more information, see Renaming an IAM User (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_renaming) +// and Renaming an IAM Group (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_rename.html) +// in the IAM User Guide. +// +// To change a user name, the requester must have appropriate permissions on +// both the source object and the target object. For example, to change Bob +// to Robert, the entity making the request must have permission on Bob and +// Robert, or must have permission on all (*). For more information about permissions, +// see Permissions and Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PermissionsAndPolicies.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateUser for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser +func (c *IAM) UpdateUser(input *UpdateUserInput) (*UpdateUserOutput, error) { + req, out := c.UpdateUserRequest(input) + return out, req.Send() +} + +// UpdateUserWithContext is the same as UpdateUser with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateUser for details on how to use this API operation. 
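+//
+//    // Illustrative sketch, not part of the generated reference: renaming Bob
+//    // to Robert (per the example above) under a caller-supplied context. The
+//    // client ("svc", an *iam.IAM) and "ctx" are assumed.
+//    _, err := svc.UpdateUserWithContext(ctx, &iam.UpdateUserInput{
+//        UserName:    aws.String("Bob"),
+//        NewUserName: aws.String("Robert"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == iam.ErrCodeEntityAlreadyExistsException {
+//        // a user named Robert already exists
+//    }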
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateUserWithContext(ctx aws.Context, input *UpdateUserInput, opts ...request.Option) (*UpdateUserOutput, error) { + req, out := c.UpdateUserRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadSSHPublicKey = "UploadSSHPublicKey" + +// UploadSSHPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the UploadSSHPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadSSHPublicKey for more information on using the UploadSSHPublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadSSHPublicKeyRequest method. +// req, resp := client.UploadSSHPublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSSHPublicKey +func (c *IAM) UploadSSHPublicKeyRequest(input *UploadSSHPublicKeyInput) (req *request.Request, output *UploadSSHPublicKeyOutput) { + op := &request.Operation{ + Name: opUploadSSHPublicKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UploadSSHPublicKeyInput{} + } + + output = &UploadSSHPublicKeyOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadSSHPublicKey API operation for AWS Identity and Access Management. +// +// Uploads an SSH public key and associates it with the specified IAM user. +// +// The SSH public key uploaded by this operation can be used only for authenticating +// the associated IAM user to an AWS CodeCommit repository. For more information +// about using SSH keys to authenticate to an AWS CodeCommit repository, see +// Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) +// in the AWS CodeCommit User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UploadSSHPublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidPublicKeyException "InvalidPublicKey" +// The request was rejected because the public key is malformed or otherwise +// invalid. 
+// +// * ErrCodeDuplicateSSHPublicKeyException "DuplicateSSHPublicKey" +// The request was rejected because the SSH public key is already associated +// with the specified IAM user. +// +// * ErrCodeUnrecognizedPublicKeyEncodingException "UnrecognizedPublicKeyEncoding" +// The request was rejected because the public key encoding format is unsupported +// or unrecognized. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSSHPublicKey +func (c *IAM) UploadSSHPublicKey(input *UploadSSHPublicKeyInput) (*UploadSSHPublicKeyOutput, error) { + req, out := c.UploadSSHPublicKeyRequest(input) + return out, req.Send() +} + +// UploadSSHPublicKeyWithContext is the same as UploadSSHPublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See UploadSSHPublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UploadSSHPublicKeyWithContext(ctx aws.Context, input *UploadSSHPublicKeyInput, opts ...request.Option) (*UploadSSHPublicKeyOutput, error) { + req, out := c.UploadSSHPublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadServerCertificate = "UploadServerCertificate" + +// UploadServerCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UploadServerCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadServerCertificate for more information on using the UploadServerCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadServerCertificateRequest method. +// req, resp := client.UploadServerCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate +func (c *IAM) UploadServerCertificateRequest(input *UploadServerCertificateInput) (req *request.Request, output *UploadServerCertificateOutput) { + op := &request.Operation{ + Name: opUploadServerCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UploadServerCertificateInput{} + } + + output = &UploadServerCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadServerCertificate API operation for AWS Identity and Access Management. +// +// Uploads a server certificate entity for the AWS account. The server certificate +// entity includes a public key certificate, a private key, and an optional +// certificate chain, which should all be PEM-encoded. +// +// We recommend that you use AWS Certificate Manager (https://aws.amazon.com/certificate-manager/) +// to provision, manage, and deploy your server certificates. 
With ACM you can +// request a certificate, deploy it to AWS resources, and let ACM handle certificate +// renewals for you. Certificates provided by ACM are free. For more information +// about using ACM, see the AWS Certificate Manager User Guide (http://docs.aws.amazon.com/acm/latest/userguide/). +// +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic includes a list of AWS services that can +// use the server certificates that you manage with IAM. +// +// For information about the number of server certificates you can upload, see +// Limitations on IAM Entities and Objects (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html) +// in the IAM User Guide. +// +// Because the body of the public key certificate, private key, and the certificate +// chain can be large, you should use POST rather than GET when calling UploadServerCertificate. +// For information about setting up signatures and authorization through the +// API, go to Signing AWS API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) +// in the AWS General Reference. For general information about using the Query +// API with IAM, go to Calling the API by Making HTTP Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/programming.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UploadServerCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeMalformedCertificateException "MalformedCertificate" +// The request was rejected because the certificate was malformed or expired. +// The error message describes the specific error. +// +// * ErrCodeKeyPairMismatchException "KeyPairMismatch" +// The request was rejected because the public key certificate and the private +// key do not match. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate +func (c *IAM) UploadServerCertificate(input *UploadServerCertificateInput) (*UploadServerCertificateOutput, error) { + req, out := c.UploadServerCertificateRequest(input) + return out, req.Send() +} + +// UploadServerCertificateWithContext is the same as UploadServerCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UploadServerCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UploadServerCertificateWithContext(ctx aws.Context, input *UploadServerCertificateInput, opts ...request.Option) (*UploadServerCertificateOutput, error) { + req, out := c.UploadServerCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadSigningCertificate = "UploadSigningCertificate" + +// UploadSigningCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UploadSigningCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadSigningCertificate for more information on using the UploadSigningCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadSigningCertificateRequest method. +// req, resp := client.UploadSigningCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate +func (c *IAM) UploadSigningCertificateRequest(input *UploadSigningCertificateInput) (req *request.Request, output *UploadSigningCertificateOutput) { + op := &request.Operation{ + Name: opUploadSigningCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UploadSigningCertificateInput{} + } + + output = &UploadSigningCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadSigningCertificate API operation for AWS Identity and Access Management. +// +// Uploads an X.509 signing certificate and associates it with the specified +// IAM user. Some AWS services use X.509 signing certificates to validate requests +// that are signed with a corresponding private key. When you upload the certificate, +// its default status is Active. +// +// If the UserName field is not specified, the IAM user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// Because the body of an X.509 certificate can be large, you should use POST +// rather than GET when calling UploadSigningCertificate. For information about +// setting up signatures and authorization through the API, go to Signing AWS +// API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) +// in the AWS General Reference. For general information about using the Query +// API with IAM, go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
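+//
+//    // Illustrative sketch, not part of the generated reference: uploading a
+//    // certificate body already read by the caller. The client ("svc", an
+//    // *iam.IAM) and "certPEM" (the certificate text) are assumed.
+//    out, err := svc.UploadSigningCertificate(&iam.UploadSigningCertificateInput{
+//        UserName:        aws.String("jane"),
+//        CertificateBody: aws.String(certPEM),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.Certificate.CertificateId))
+//    }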
+// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UploadSigningCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeMalformedCertificateException "MalformedCertificate" +// The request was rejected because the certificate was malformed or expired. +// The error message describes the specific error. +// +// * ErrCodeInvalidCertificateException "InvalidCertificate" +// The request was rejected because the certificate is invalid. +// +// * ErrCodeDuplicateCertificateException "DuplicateCertificate" +// The request was rejected because the same certificate is associated with +// an IAM user in the account. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate +func (c *IAM) UploadSigningCertificate(input *UploadSigningCertificateInput) (*UploadSigningCertificateOutput, error) { + req, out := c.UploadSigningCertificateRequest(input) + return out, req.Send() +} + +// UploadSigningCertificateWithContext is the same as UploadSigningCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UploadSigningCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UploadSigningCertificateWithContext(ctx aws.Context, input *UploadSigningCertificateInput, opts ...request.Option) (*UploadSigningCertificateOutput, error) { + req, out := c.UploadSigningCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Contains information about an AWS access key. +// +// This data type is used as a response element in the CreateAccessKey and ListAccessKeys +// operations. +// +// The SecretAccessKey value is returned only in response to CreateAccessKey. +// You can get a secret access key only when you first create an access key; +// you cannot recover the secret access key later. If you lose a secret access +// key, you must create a new access key. +type AccessKey struct { + _ struct{} `type:"structure"` + + // The ID for this access key. + // + // AccessKeyId is a required field + AccessKeyId *string `min:"16" type:"string" required:"true"` + + // The date when the access key was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The secret key used to sign requests. + // + // SecretAccessKey is a required field + SecretAccessKey *string `type:"string" required:"true"` + + // The status of the access key. 
Active means that the key is valid for API + // calls, while Inactive means it is not. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user that the access key is associated with. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AccessKey) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessKey) GoString() string { + return s.String() +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *AccessKey) SetAccessKeyId(v string) *AccessKey { + s.AccessKeyId = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *AccessKey) SetCreateDate(v time.Time) *AccessKey { + s.CreateDate = &v + return s +} + +// SetSecretAccessKey sets the SecretAccessKey field's value. +func (s *AccessKey) SetSecretAccessKey(v string) *AccessKey { + s.SecretAccessKey = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AccessKey) SetStatus(v string) *AccessKey { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *AccessKey) SetUserName(v string) *AccessKey { + s.UserName = &v + return s +} + +// Contains information about the last time an AWS access key was used. +// +// This data type is used as a response element in the GetAccessKeyLastUsed +// operation. +type AccessKeyLastUsed struct { + _ struct{} `type:"structure"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the access key was most recently used. This field is null in the following + // situations: + // + // * The user does not have an access key. + // + // * An access key exists but has never been used, at least not since IAM + // started tracking this information on April 22nd, 2015. + // + // * There is no sign-in data associated with the user + // + // LastUsedDate is a required field + LastUsedDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The AWS region where this access key was most recently used. This field is + // displays "N/A" in the following situations: + // + // * The user does not have an access key. + // + // * An access key exists but has never been used, at least not since IAM + // started tracking this information on April 22nd, 2015. + // + // * There is no sign-in data associated with the user + // + // For more information about AWS regions, see Regions and Endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html) + // in the Amazon Web Services General Reference. + // + // Region is a required field + Region *string `type:"string" required:"true"` + + // The name of the AWS service with which this access key was most recently + // used. This field displays "N/A" in the following situations: + // + // * The user does not have an access key. + // + // * An access key exists but has never been used, at least not since IAM + // started tracking this information on April 22nd, 2015. 
+ // + // * There is no sign-in data associated with the user + // + // ServiceName is a required field + ServiceName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s AccessKeyLastUsed) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessKeyLastUsed) GoString() string { + return s.String() +} + +// SetLastUsedDate sets the LastUsedDate field's value. +func (s *AccessKeyLastUsed) SetLastUsedDate(v time.Time) *AccessKeyLastUsed { + s.LastUsedDate = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *AccessKeyLastUsed) SetRegion(v string) *AccessKeyLastUsed { + s.Region = &v + return s +} + +// SetServiceName sets the ServiceName field's value. +func (s *AccessKeyLastUsed) SetServiceName(v string) *AccessKeyLastUsed { + s.ServiceName = &v + return s +} + +// Contains information about an AWS access key, without its secret key. +// +// This data type is used as a response element in the ListAccessKeys operation. +type AccessKeyMetadata struct { + _ struct{} `type:"structure"` + + // The ID for this access key. + AccessKeyId *string `min:"16" type:"string"` + + // The date when the access key was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The status of the access key. Active means the key is valid for API calls; + // Inactive means it is not. + Status *string `type:"string" enum:"statusType"` + + // The name of the IAM user that the key is associated with. + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s AccessKeyMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessKeyMetadata) GoString() string { + return s.String() +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *AccessKeyMetadata) SetAccessKeyId(v string) *AccessKeyMetadata { + s.AccessKeyId = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *AccessKeyMetadata) SetCreateDate(v time.Time) *AccessKeyMetadata { + s.CreateDate = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AccessKeyMetadata) SetStatus(v string) *AccessKeyMetadata { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *AccessKeyMetadata) SetUserName(v string) *AccessKeyMetadata { + s.UserName = &v + return s +} + +type AddClientIDToOpenIDConnectProviderInput struct { + _ struct{} `type:"structure"` + + // The client ID (also known as audience) to add to the IAM OpenID Connect provider + // resource. + // + // ClientID is a required field + ClientID *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM OpenID Connect (OIDC) provider + // resource to add the client ID to. You can get a list of OIDC provider ARNs + // by using the ListOpenIDConnectProviders operation. + // + // OpenIDConnectProviderArn is a required field + OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s AddClientIDToOpenIDConnectProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddClientIDToOpenIDConnectProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AddClientIDToOpenIDConnectProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddClientIDToOpenIDConnectProviderInput"} + if s.ClientID == nil { + invalidParams.Add(request.NewErrParamRequired("ClientID")) + } + if s.ClientID != nil && len(*s.ClientID) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientID", 1)) + } + if s.OpenIDConnectProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("OpenIDConnectProviderArn")) + } + if s.OpenIDConnectProviderArn != nil && len(*s.OpenIDConnectProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("OpenIDConnectProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientID sets the ClientID field's value. +func (s *AddClientIDToOpenIDConnectProviderInput) SetClientID(v string) *AddClientIDToOpenIDConnectProviderInput { + s.ClientID = &v + return s +} + +// SetOpenIDConnectProviderArn sets the OpenIDConnectProviderArn field's value. +func (s *AddClientIDToOpenIDConnectProviderInput) SetOpenIDConnectProviderArn(v string) *AddClientIDToOpenIDConnectProviderInput { + s.OpenIDConnectProviderArn = &v + return s +} + +type AddClientIDToOpenIDConnectProviderOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddClientIDToOpenIDConnectProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddClientIDToOpenIDConnectProviderOutput) GoString() string { + return s.String() +} + +type AddRoleToInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the instance profile to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // InstanceProfileName is a required field + InstanceProfileName *string `min:"1" type:"string" required:"true"` + + // The name of the role to add. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AddRoleToInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddRoleToInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AddRoleToInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddRoleToInstanceProfileInput"} + if s.InstanceProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceProfileName")) + } + if s.InstanceProfileName != nil && len(*s.InstanceProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceProfileName", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceProfileName sets the InstanceProfileName field's value. +func (s *AddRoleToInstanceProfileInput) SetInstanceProfileName(v string) *AddRoleToInstanceProfileInput { + s.InstanceProfileName = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *AddRoleToInstanceProfileInput) SetRoleName(v string) *AddRoleToInstanceProfileInput { + s.RoleName = &v + return s +} + +type AddRoleToInstanceProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddRoleToInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddRoleToInstanceProfileOutput) GoString() string { + return s.String() +} + +type AddUserToGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the group to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The name of the user to add. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AddUserToGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddUserToGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddUserToGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddUserToGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *AddUserToGroupInput) SetGroupName(v string) *AddUserToGroupInput { + s.GroupName = &v + return s +} + +// SetUserName sets the UserName field's value. 
+func (s *AddUserToGroupInput) SetUserName(v string) *AddUserToGroupInput { + s.UserName = &v + return s +} + +type AddUserToGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddUserToGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddUserToGroupOutput) GoString() string { + return s.String() +} + +type AttachGroupPolicyInput struct { + _ struct{} `type:"structure"` + + // The name (friendly name, not ARN) of the group to attach the policy to. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to attach. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachGroupPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachGroupPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachGroupPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachGroupPolicyInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *AttachGroupPolicyInput) SetGroupName(v string) *AttachGroupPolicyInput { + s.GroupName = &v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *AttachGroupPolicyInput) SetPolicyArn(v string) *AttachGroupPolicyInput { + s.PolicyArn = &v + return s +} + +type AttachGroupPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachGroupPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachGroupPolicyOutput) GoString() string { + return s.String() +} + +type AttachRolePolicyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to attach. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The name (friendly name, not ARN) of the role to attach the policy to. 
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachRolePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachRolePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachRolePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachRolePolicyInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *AttachRolePolicyInput) SetPolicyArn(v string) *AttachRolePolicyInput { + s.PolicyArn = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *AttachRolePolicyInput) SetRoleName(v string) *AttachRolePolicyInput { + s.RoleName = &v + return s +} + +type AttachRolePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachRolePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachRolePolicyOutput) GoString() string { + return s.String() +} + +type AttachUserPolicyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to attach. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The name (friendly name, not ARN) of the IAM user to attach the policy to. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachUserPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachUserPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AttachUserPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachUserPolicyInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *AttachUserPolicyInput) SetPolicyArn(v string) *AttachUserPolicyInput { + s.PolicyArn = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *AttachUserPolicyInput) SetUserName(v string) *AttachUserPolicyInput { + s.UserName = &v + return s +} + +type AttachUserPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachUserPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachUserPolicyOutput) GoString() string { + return s.String() +} + +// Contains information about an attached policy. +// +// An attached policy is a managed policy that has been attached to a user, +// group, or role. This data type is used as a response element in the ListAttachedGroupPolicies, +// ListAttachedRolePolicies, ListAttachedUserPolicies, and GetAccountAuthorizationDetails +// operations. +// +// For more information about managed policies, refer to Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type AttachedPolicy struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + PolicyArn *string `min:"20" type:"string"` + + // The friendly name of the attached policy. + PolicyName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s AttachedPolicy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachedPolicy) GoString() string { + return s.String() +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *AttachedPolicy) SetPolicyArn(v string) *AttachedPolicy { + s.PolicyArn = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *AttachedPolicy) SetPolicyName(v string) *AttachedPolicy { + s.PolicyName = &v + return s +} + +type ChangePasswordInput struct { + _ struct{} `type:"structure"` + + // The new password. The new password must conform to the AWS account's password + // policy, if one exists. + // + // The regex pattern (http://wikipedia.org/wiki/regex) that is used to validate + // this parameter is a string of characters. That string can include almost + // any printable ASCII character from the space (\u0020) through the end of + // the ASCII character range (\u00FF). You can also include the tab (\u0009), + // line feed (\u000A), and carriage return (\u000D) characters. Any of these + // characters are valid in a password. 
However, many tools, such as the AWS
+	// Management Console, might restrict the ability to type certain characters
+	// because they have special meaning within that tool.
+	//
+	// NewPassword is a required field
+	NewPassword *string `min:"1" type:"string" required:"true"`
+
+	// The IAM user's current password.
+	//
+	// OldPassword is a required field
+	OldPassword *string `min:"1" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s ChangePasswordInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ChangePasswordInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *ChangePasswordInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "ChangePasswordInput"}
+	if s.NewPassword == nil {
+		invalidParams.Add(request.NewErrParamRequired("NewPassword"))
+	}
+	if s.NewPassword != nil && len(*s.NewPassword) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("NewPassword", 1))
+	}
+	if s.OldPassword == nil {
+		invalidParams.Add(request.NewErrParamRequired("OldPassword"))
+	}
+	if s.OldPassword != nil && len(*s.OldPassword) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("OldPassword", 1))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetNewPassword sets the NewPassword field's value.
+func (s *ChangePasswordInput) SetNewPassword(v string) *ChangePasswordInput {
+	s.NewPassword = &v
+	return s
+}
+
+// SetOldPassword sets the OldPassword field's value.
+func (s *ChangePasswordInput) SetOldPassword(v string) *ChangePasswordInput {
+	s.OldPassword = &v
+	return s
+}
+
+type ChangePasswordOutput struct {
+	_ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s ChangePasswordOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ChangePasswordOutput) GoString() string {
+	return s.String()
+}
+
+// Contains information about a condition context key. It includes the name
+// of the key and specifies the value (or values, if the context key supports
+// multiple values) to use in the simulation. This information is used when
+// evaluating the Condition elements of the input policies.
+//
+// This data type is used as an input parameter to SimulateCustomPolicy and
+// SimulatePrincipalPolicy.
+type ContextEntry struct {
+	_ struct{} `type:"structure"`
+
+	// The full name of a condition context key, including the service prefix. For
+	// example, aws:SourceIp or s3:VersionId.
+	ContextKeyName *string `min:"5" type:"string"`
+
+	// The data type of the value (or values) specified in the ContextKeyValues
+	// parameter.
+	ContextKeyType *string `type:"string" enum:"ContextKeyTypeEnum"`
+
+	// The value (or values, if the condition context key supports multiple values)
+	// to provide to the simulation when the key is referenced by a Condition element
+	// in an input policy.
+	ContextKeyValues []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s ContextEntry) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ContextEntry) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ContextEntry) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContextEntry"} + if s.ContextKeyName != nil && len(*s.ContextKeyName) < 5 { + invalidParams.Add(request.NewErrParamMinLen("ContextKeyName", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContextKeyName sets the ContextKeyName field's value. +func (s *ContextEntry) SetContextKeyName(v string) *ContextEntry { + s.ContextKeyName = &v + return s +} + +// SetContextKeyType sets the ContextKeyType field's value. +func (s *ContextEntry) SetContextKeyType(v string) *ContextEntry { + s.ContextKeyType = &v + return s +} + +// SetContextKeyValues sets the ContextKeyValues field's value. +func (s *ContextEntry) SetContextKeyValues(v []*string) *ContextEntry { + s.ContextKeyValues = v + return s +} + +type CreateAccessKeyInput struct { + _ struct{} `type:"structure"` + + // The name of the IAM user that the new key will belong to. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateAccessKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAccessKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateAccessKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAccessKeyInput"} + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetUserName sets the UserName field's value. +func (s *CreateAccessKeyInput) SetUserName(v string) *CreateAccessKeyInput { + s.UserName = &v + return s +} + +// Contains the response to a successful CreateAccessKey request. +type CreateAccessKeyOutput struct { + _ struct{} `type:"structure"` + + // A structure with details about the access key. + // + // AccessKey is a required field + AccessKey *AccessKey `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateAccessKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAccessKeyOutput) GoString() string { + return s.String() +} + +// SetAccessKey sets the AccessKey field's value. +func (s *CreateAccessKeyOutput) SetAccessKey(v *AccessKey) *CreateAccessKeyOutput { + s.AccessKey = v + return s +} + +type CreateAccountAliasInput struct { + _ struct{} `type:"structure"` + + // The account alias to create. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of lowercase letters, digits, and dashes. + // You cannot start or finish with a dash, nor can you have two dashes in a + // row. 
+ // + // AccountAlias is a required field + AccountAlias *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateAccountAliasInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAccountAliasInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateAccountAliasInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAccountAliasInput"} + if s.AccountAlias == nil { + invalidParams.Add(request.NewErrParamRequired("AccountAlias")) + } + if s.AccountAlias != nil && len(*s.AccountAlias) < 3 { + invalidParams.Add(request.NewErrParamMinLen("AccountAlias", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountAlias sets the AccountAlias field's value. +func (s *CreateAccountAliasInput) SetAccountAlias(v string) *CreateAccountAliasInput { + s.AccountAlias = &v + return s +} + +type CreateAccountAliasOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateAccountAliasOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAccountAliasOutput) GoString() string { + return s.String() +} + +type CreateGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the group to create. Do not include the path in this value. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@-. + // The group name must be unique within the account. Group names are not distinguished + // by case. For example, you cannot create groups named both "ADMINS" and "admins". + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The path to the group. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + Path *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *CreateGroupInput) SetGroupName(v string) *CreateGroupInput { + s.GroupName = &v + return s +} + +// SetPath sets the Path field's value. +func (s *CreateGroupInput) SetPath(v string) *CreateGroupInput { + s.Path = &v + return s +} + +// Contains the response to a successful CreateGroup request. +type CreateGroupOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the new group. + // + // Group is a required field + Group *Group `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGroupOutput) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *CreateGroupOutput) SetGroup(v *Group) *CreateGroupOutput { + s.Group = v + return s +} + +type CreateInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the instance profile to create. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // InstanceProfileName is a required field + InstanceProfileName *string `min:"1" type:"string" required:"true"` + + // The path to the instance profile. For more information about paths, see IAM + // Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + Path *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateInstanceProfileInput"} + if s.InstanceProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceProfileName")) + } + if s.InstanceProfileName != nil && len(*s.InstanceProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceProfileName", 1)) + } + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceProfileName sets the InstanceProfileName field's value. +func (s *CreateInstanceProfileInput) SetInstanceProfileName(v string) *CreateInstanceProfileInput { + s.InstanceProfileName = &v + return s +} + +// SetPath sets the Path field's value. +func (s *CreateInstanceProfileInput) SetPath(v string) *CreateInstanceProfileInput { + s.Path = &v + return s +} + +// Contains the response to a successful CreateInstanceProfile request. +type CreateInstanceProfileOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the new instance profile. + // + // InstanceProfile is a required field + InstanceProfile *InstanceProfile `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstanceProfileOutput) GoString() string { + return s.String() +} + +// SetInstanceProfile sets the InstanceProfile field's value. +func (s *CreateInstanceProfileOutput) SetInstanceProfile(v *InstanceProfile) *CreateInstanceProfileOutput { + s.InstanceProfile = v + return s +} + +type CreateLoginProfileInput struct { + _ struct{} `type:"structure"` + + // The new password for the user. + // + // The regex pattern (http://wikipedia.org/wiki/regex) that is used to validate + // this parameter is a string of characters. That string can include almost + // any printable ASCII character from the space (\u0020) through the end of + // the ASCII character range (\u00FF). You can also include the tab (\u0009), + // line feed (\u000A), and carriage return (\u000D) characters. Any of these + // characters are valid in a password. However, many tools, such as the AWS + // Management Console, might restrict the ability to type certain characters + // because they have special meaning within that tool. + // + // Password is a required field + Password *string `min:"1" type:"string" required:"true"` + + // Specifies whether the user is required to set a new password on next sign-in. + PasswordResetRequired *bool `type:"boolean"` + + // The name of the IAM user to create a password for. The user must already + // exist. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateLoginProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLoginProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateLoginProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLoginProfileInput"} + if s.Password == nil { + invalidParams.Add(request.NewErrParamRequired("Password")) + } + if s.Password != nil && len(*s.Password) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Password", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPassword sets the Password field's value. +func (s *CreateLoginProfileInput) SetPassword(v string) *CreateLoginProfileInput { + s.Password = &v + return s +} + +// SetPasswordResetRequired sets the PasswordResetRequired field's value. +func (s *CreateLoginProfileInput) SetPasswordResetRequired(v bool) *CreateLoginProfileInput { + s.PasswordResetRequired = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *CreateLoginProfileInput) SetUserName(v string) *CreateLoginProfileInput { + s.UserName = &v + return s +} + +// Contains the response to a successful CreateLoginProfile request. +type CreateLoginProfileOutput struct { + _ struct{} `type:"structure"` + + // A structure containing the user name and password create date. + // + // LoginProfile is a required field + LoginProfile *LoginProfile `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateLoginProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLoginProfileOutput) GoString() string { + return s.String() +} + +// SetLoginProfile sets the LoginProfile field's value. +func (s *CreateLoginProfileOutput) SetLoginProfile(v *LoginProfile) *CreateLoginProfileOutput { + s.LoginProfile = v + return s +} + +type CreateOpenIDConnectProviderInput struct { + _ struct{} `type:"structure"` + + // A list of client IDs (also known as audiences). When a mobile or web app + // registers with an OpenID Connect provider, they establish a value that identifies + // the application. (This is the value that's sent as the client_id parameter + // on OAuth requests.) + // + // You can register multiple client IDs with the same provider. For example, + // you might have multiple applications that use the same OIDC provider. You + // cannot register more than 100 client IDs with a single IAM OIDC provider. + // + // There is no defined format for a client ID. The CreateOpenIDConnectProviderRequest + // operation accepts client IDs up to 255 characters long. + ClientIDList []*string `type:"list"` + + // A list of server certificate thumbprints for the OpenID Connect (OIDC) identity + // provider's server certificates. Typically this list includes only one entry. + // However, IAM lets you have up to five thumbprints for an OIDC provider. This + // lets you maintain multiple thumbprints if the identity provider is rotating + // certificates. + // + // The server certificate thumbprint is the hex-encoded SHA-1 hash value of + // the X.509 certificate used by the domain where the OpenID Connect provider + // makes its keys available. It is always a 40-character string. + // + // You must provide at least one thumbprint when creating an IAM OIDC provider. 
+ // For example, assume that the OIDC provider is server.example.com and the + // provider stores its keys at https://keys.server.example.com/openid-connect. + // In that case, the thumbprint string would be the hex-encoded SHA-1 hash value + // of the certificate used by https://keys.server.example.com. + // + // For more information about obtaining the OIDC provider's thumbprint, see + // Obtaining the Thumbprint for an OpenID Connect Provider (http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc-obtain-thumbprint.html) + // in the IAM User Guide. + // + // ThumbprintList is a required field + ThumbprintList []*string `type:"list" required:"true"` + + // The URL of the identity provider. The URL must begin with https:// and should + // correspond to the iss claim in the provider's OpenID Connect ID tokens. Per + // the OIDC standard, path components are allowed but query parameters are not. + // Typically the URL consists of only a hostname, like https://server.example.org + // or https://example.com. + // + // You cannot register the same provider multiple times in a single AWS account. + // If you try to submit a URL that has already been used for an OpenID Connect + // provider in the AWS account, you will get an error. + // + // Url is a required field + Url *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateOpenIDConnectProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateOpenIDConnectProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateOpenIDConnectProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateOpenIDConnectProviderInput"} + if s.ThumbprintList == nil { + invalidParams.Add(request.NewErrParamRequired("ThumbprintList")) + } + if s.Url == nil { + invalidParams.Add(request.NewErrParamRequired("Url")) + } + if s.Url != nil && len(*s.Url) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Url", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientIDList sets the ClientIDList field's value. +func (s *CreateOpenIDConnectProviderInput) SetClientIDList(v []*string) *CreateOpenIDConnectProviderInput { + s.ClientIDList = v + return s +} + +// SetThumbprintList sets the ThumbprintList field's value. +func (s *CreateOpenIDConnectProviderInput) SetThumbprintList(v []*string) *CreateOpenIDConnectProviderInput { + s.ThumbprintList = v + return s +} + +// SetUrl sets the Url field's value. +func (s *CreateOpenIDConnectProviderInput) SetUrl(v string) *CreateOpenIDConnectProviderInput { + s.Url = &v + return s +} + +// Contains the response to a successful CreateOpenIDConnectProvider request. +type CreateOpenIDConnectProviderOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the new IAM OpenID Connect provider that + // is created. For more information, see OpenIDConnectProviderListEntry. + OpenIDConnectProviderArn *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s CreateOpenIDConnectProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateOpenIDConnectProviderOutput) GoString() string { + return s.String() +} + +// SetOpenIDConnectProviderArn sets the OpenIDConnectProviderArn field's value. 
+func (s *CreateOpenIDConnectProviderOutput) SetOpenIDConnectProviderArn(v string) *CreateOpenIDConnectProviderOutput { + s.OpenIDConnectProviderArn = &v + return s +} + +type CreatePolicyInput struct { + _ struct{} `type:"structure"` + + // A friendly description of the policy. + // + // Typically used to store information about the permissions defined in the + // policy. For example, "Grants access to production DynamoDB tables." + // + // The policy description is immutable. After a value is assigned, it cannot + // be changed. + Description *string `type:"string"` + + // The path for the policy. + // + // For more information about paths, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + Path *string `type:"string"` + + // The JSON policy document that you want to use as the content for the new + // policy. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The friendly name of the policy. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreatePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePolicyInput"} + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. 
+func (s *CreatePolicyInput) SetDescription(v string) *CreatePolicyInput { + s.Description = &v + return s +} + +// SetPath sets the Path field's value. +func (s *CreatePolicyInput) SetPath(v string) *CreatePolicyInput { + s.Path = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *CreatePolicyInput) SetPolicyDocument(v string) *CreatePolicyInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *CreatePolicyInput) SetPolicyName(v string) *CreatePolicyInput { + s.PolicyName = &v + return s +} + +// Contains the response to a successful CreatePolicy request. +type CreatePolicyOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the new policy. + Policy *Policy `type:"structure"` +} + +// String returns the string representation +func (s CreatePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyOutput) GoString() string { + return s.String() +} + +// SetPolicy sets the Policy field's value. +func (s *CreatePolicyOutput) SetPolicy(v *Policy) *CreatePolicyOutput { + s.Policy = v + return s +} + +type CreatePolicyVersionInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy to which you want to add + // a new version. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The JSON policy document that you want to use as the content for this new + // version of the policy. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // Specifies whether to set this version as the policy's default version. + // + // When this parameter is true, the new policy version becomes the operative + // version. That is, it becomes the version that is in effect for the IAM users, + // groups, and roles that the policy is attached to. + // + // For more information about managed policy versions, see Versioning for Managed + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the IAM User Guide. + SetAsDefault *bool `type:"boolean"` +} + +// String returns the string representation +func (s CreatePolicyVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreatePolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePolicyVersionInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *CreatePolicyVersionInput) SetPolicyArn(v string) *CreatePolicyVersionInput { + s.PolicyArn = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *CreatePolicyVersionInput) SetPolicyDocument(v string) *CreatePolicyVersionInput { + s.PolicyDocument = &v + return s +} + +// SetSetAsDefault sets the SetAsDefault field's value. +func (s *CreatePolicyVersionInput) SetSetAsDefault(v bool) *CreatePolicyVersionInput { + s.SetAsDefault = &v + return s +} + +// Contains the response to a successful CreatePolicyVersion request. +type CreatePolicyVersionOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the new policy version. + PolicyVersion *PolicyVersion `type:"structure"` +} + +// String returns the string representation +func (s CreatePolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyVersionOutput) GoString() string { + return s.String() +} + +// SetPolicyVersion sets the PolicyVersion field's value. +func (s *CreatePolicyVersionOutput) SetPolicyVersion(v *PolicyVersion) *CreatePolicyVersionOutput { + s.PolicyVersion = v + return s +} + +type CreateRoleInput struct { + _ struct{} `type:"structure"` + + // The trust relationship policy document that grants an entity permission to + // assume the role. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // AssumeRolePolicyDocument is a required field + AssumeRolePolicyDocument *string `min:"1" type:"string" required:"true"` + + // A description of the role. + Description *string `type:"string"` + + // The maximum session duration (in seconds) that you want to set for the specified + // role. If you do not specify a value for this setting, the default maximum + // of one hour is applied. This setting can have a value from 1 hour to 12 hours. + // + // Anyone who assumes the role from the AWS CLI or API can use the DurationSeconds + // API parameter or the duration-seconds CLI parameter to request a longer session. + // The MaxSessionDuration setting determines the maximum duration that can be + // requested using the DurationSeconds parameter. If users don't specify a value + // for the DurationSeconds parameter, their security credentials are valid for + // one hour by default. 
This applies when you use the AssumeRole* API operations + // or the assume-role* CLI operations but does not apply when you use those + // operations to create a console URL. For more information, see Using IAM Roles + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) in the + // IAM User Guide. + MaxSessionDuration *int64 `min:"3600" type:"integer"` + + // The path to the role. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + Path *string `min:"1" type:"string"` + + // The name of the role to create. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // Role names are not distinguished by case. For example, you cannot create + // roles named both "PRODROLE" and "prodrole". + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateRoleInput"} + if s.AssumeRolePolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("AssumeRolePolicyDocument")) + } + if s.AssumeRolePolicyDocument != nil && len(*s.AssumeRolePolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AssumeRolePolicyDocument", 1)) + } + if s.MaxSessionDuration != nil && *s.MaxSessionDuration < 3600 { + invalidParams.Add(request.NewErrParamMinValue("MaxSessionDuration", 3600)) + } + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssumeRolePolicyDocument sets the AssumeRolePolicyDocument field's value. +func (s *CreateRoleInput) SetAssumeRolePolicyDocument(v string) *CreateRoleInput { + s.AssumeRolePolicyDocument = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateRoleInput) SetDescription(v string) *CreateRoleInput { + s.Description = &v + return s +} + +// SetMaxSessionDuration sets the MaxSessionDuration field's value. +func (s *CreateRoleInput) SetMaxSessionDuration(v int64) *CreateRoleInput { + s.MaxSessionDuration = &v + return s +} + +// SetPath sets the Path field's value. 
+func (s *CreateRoleInput) SetPath(v string) *CreateRoleInput { + s.Path = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *CreateRoleInput) SetRoleName(v string) *CreateRoleInput { + s.RoleName = &v + return s +} + +// Contains the response to a successful CreateRole request. +type CreateRoleOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the new role. + // + // Role is a required field + Role *Role `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateRoleOutput) GoString() string { + return s.String() +} + +// SetRole sets the Role field's value. +func (s *CreateRoleOutput) SetRole(v *Role) *CreateRoleOutput { + s.Role = v + return s +} + +type CreateSAMLProviderInput struct { + _ struct{} `type:"structure"` + + // The name of the provider to create. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // An XML document generated by an identity provider (IdP) that supports SAML + // 2.0. The document includes the issuer's name, expiration information, and + // keys that can be used to validate the SAML authentication response (assertions) + // that are received from the IdP. You must generate the metadata document using + // the identity management software that is used as your organization's IdP. + // + // For more information, see About SAML 2.0-based Federation (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html) + // in the IAM User Guide + // + // SAMLMetadataDocument is a required field + SAMLMetadataDocument *string `min:"1000" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateSAMLProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSAMLProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateSAMLProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSAMLProviderInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.SAMLMetadataDocument == nil { + invalidParams.Add(request.NewErrParamRequired("SAMLMetadataDocument")) + } + if s.SAMLMetadataDocument != nil && len(*s.SAMLMetadataDocument) < 1000 { + invalidParams.Add(request.NewErrParamMinLen("SAMLMetadataDocument", 1000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *CreateSAMLProviderInput) SetName(v string) *CreateSAMLProviderInput { + s.Name = &v + return s +} + +// SetSAMLMetadataDocument sets the SAMLMetadataDocument field's value. +func (s *CreateSAMLProviderInput) SetSAMLMetadataDocument(v string) *CreateSAMLProviderInput { + s.SAMLMetadataDocument = &v + return s +} + +// Contains the response to a successful CreateSAMLProvider request. 
+type CreateSAMLProviderOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the new SAML provider resource in IAM. + SAMLProviderArn *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s CreateSAMLProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSAMLProviderOutput) GoString() string { + return s.String() +} + +// SetSAMLProviderArn sets the SAMLProviderArn field's value. +func (s *CreateSAMLProviderOutput) SetSAMLProviderArn(v string) *CreateSAMLProviderOutput { + s.SAMLProviderArn = &v + return s +} + +type CreateServiceLinkedRoleInput struct { + _ struct{} `type:"structure"` + + // The AWS service to which this role is attached. You use a string similar + // to a URL but without the http:// in front. For example: elasticbeanstalk.amazonaws.com + // + // AWSServiceName is a required field + AWSServiceName *string `min:"1" type:"string" required:"true"` + + // A string that you provide, which is combined with the service name to form + // the complete role name. If you make multiple requests for the same service, + // then you must supply a different CustomSuffix for each request. Otherwise + // the request fails with a duplicate role name error. For example, you could + // add -1 or -debug to the suffix. + CustomSuffix *string `min:"1" type:"string"` + + // The description of the role. + Description *string `type:"string"` +} + +// String returns the string representation +func (s CreateServiceLinkedRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateServiceLinkedRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateServiceLinkedRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateServiceLinkedRoleInput"} + if s.AWSServiceName == nil { + invalidParams.Add(request.NewErrParamRequired("AWSServiceName")) + } + if s.AWSServiceName != nil && len(*s.AWSServiceName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AWSServiceName", 1)) + } + if s.CustomSuffix != nil && len(*s.CustomSuffix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CustomSuffix", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAWSServiceName sets the AWSServiceName field's value. +func (s *CreateServiceLinkedRoleInput) SetAWSServiceName(v string) *CreateServiceLinkedRoleInput { + s.AWSServiceName = &v + return s +} + +// SetCustomSuffix sets the CustomSuffix field's value. +func (s *CreateServiceLinkedRoleInput) SetCustomSuffix(v string) *CreateServiceLinkedRoleInput { + s.CustomSuffix = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateServiceLinkedRoleInput) SetDescription(v string) *CreateServiceLinkedRoleInput { + s.Description = &v + return s +} + +type CreateServiceLinkedRoleOutput struct { + _ struct{} `type:"structure"` + + // A Role object that contains details about the newly created role. + Role *Role `type:"structure"` +} + +// String returns the string representation +func (s CreateServiceLinkedRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateServiceLinkedRoleOutput) GoString() string { + return s.String() +} + +// SetRole sets the Role field's value. 
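+
+// NOTE: editorial usage sketch, not part of the generated AWS SDK code. A
+// minimal CreateServiceLinkedRoleInput built with the setters above; the
+// service principal comes from the field documentation and the description is
+// a placeholder. Validate only checks the client-side constraints.
+func exampleCreateServiceLinkedRoleInputSketch() error {
+	input := (&CreateServiceLinkedRoleInput{}).
+		SetAWSServiceName("elasticbeanstalk.amazonaws.com").
+		SetDescription("example service-linked role")
+	return input.Validate()
+}
+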
+func (s *CreateServiceLinkedRoleOutput) SetRole(v *Role) *CreateServiceLinkedRoleOutput { + s.Role = v + return s +} + +type CreateServiceSpecificCredentialInput struct { + _ struct{} `type:"structure"` + + // The name of the AWS service that is to be associated with the credentials. + // The service you specify here is the only service that can be accessed using + // these credentials. + // + // ServiceName is a required field + ServiceName *string `type:"string" required:"true"` + + // The name of the IAM user that is to be associated with the credentials. The + // new service-specific credentials have the same permissions as the associated + // user except that they can be used only to access the specified service. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateServiceSpecificCredentialInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateServiceSpecificCredentialInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateServiceSpecificCredentialInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateServiceSpecificCredentialInput"} + if s.ServiceName == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceName")) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServiceName sets the ServiceName field's value. +func (s *CreateServiceSpecificCredentialInput) SetServiceName(v string) *CreateServiceSpecificCredentialInput { + s.ServiceName = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *CreateServiceSpecificCredentialInput) SetUserName(v string) *CreateServiceSpecificCredentialInput { + s.UserName = &v + return s +} + +type CreateServiceSpecificCredentialOutput struct { + _ struct{} `type:"structure"` + + // A structure that contains information about the newly created service-specific + // credential. + // + // This is the only time that the password for this credential set is available. + // It cannot be recovered later. Instead, you will have to reset the password + // with ResetServiceSpecificCredential. + ServiceSpecificCredential *ServiceSpecificCredential `type:"structure"` +} + +// String returns the string representation +func (s CreateServiceSpecificCredentialOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateServiceSpecificCredentialOutput) GoString() string { + return s.String() +} + +// SetServiceSpecificCredential sets the ServiceSpecificCredential field's value. +func (s *CreateServiceSpecificCredentialOutput) SetServiceSpecificCredential(v *ServiceSpecificCredential) *CreateServiceSpecificCredentialOutput { + s.ServiceSpecificCredential = v + return s +} + +type CreateUserInput struct { + _ struct{} `type:"structure"` + + // The path for the user name. 
For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + Path *string `min:"1" type:"string"` + + // The name of the user to create. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@-. + // User names are not distinguished by case. For example, you cannot create + // users named both "TESTUSER" and "testuser". + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateUserInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateUserInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateUserInput"} + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPath sets the Path field's value. +func (s *CreateUserInput) SetPath(v string) *CreateUserInput { + s.Path = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *CreateUserInput) SetUserName(v string) *CreateUserInput { + s.UserName = &v + return s +} + +// Contains the response to a successful CreateUser request. +type CreateUserOutput struct { + _ struct{} `type:"structure"` + + // A structure with details about the new IAM user. + User *User `type:"structure"` +} + +// String returns the string representation +func (s CreateUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateUserOutput) GoString() string { + return s.String() +} + +// SetUser sets the User field's value. +func (s *CreateUserOutput) SetUser(v *User) *CreateUserOutput { + s.User = v + return s +} + +type CreateVirtualMFADeviceInput struct { + _ struct{} `type:"structure"` + + // The path for the virtual MFA device. For more information about paths, see + // IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). 
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + Path *string `min:"1" type:"string"` + + // The name of the virtual MFA device. Use with path to uniquely identify a + // virtual MFA device. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // VirtualMFADeviceName is a required field + VirtualMFADeviceName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateVirtualMFADeviceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateVirtualMFADeviceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateVirtualMFADeviceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateVirtualMFADeviceInput"} + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + if s.VirtualMFADeviceName == nil { + invalidParams.Add(request.NewErrParamRequired("VirtualMFADeviceName")) + } + if s.VirtualMFADeviceName != nil && len(*s.VirtualMFADeviceName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("VirtualMFADeviceName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPath sets the Path field's value. +func (s *CreateVirtualMFADeviceInput) SetPath(v string) *CreateVirtualMFADeviceInput { + s.Path = &v + return s +} + +// SetVirtualMFADeviceName sets the VirtualMFADeviceName field's value. +func (s *CreateVirtualMFADeviceInput) SetVirtualMFADeviceName(v string) *CreateVirtualMFADeviceInput { + s.VirtualMFADeviceName = &v + return s +} + +// Contains the response to a successful CreateVirtualMFADevice request. +type CreateVirtualMFADeviceOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the new virtual MFA device. + // + // VirtualMFADevice is a required field + VirtualMFADevice *VirtualMFADevice `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateVirtualMFADeviceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateVirtualMFADeviceOutput) GoString() string { + return s.String() +} + +// SetVirtualMFADevice sets the VirtualMFADevice field's value. +func (s *CreateVirtualMFADeviceOutput) SetVirtualMFADevice(v *VirtualMFADevice) *CreateVirtualMFADeviceOutput { + s.VirtualMFADevice = v + return s +} + +type DeactivateMFADeviceInput struct { + _ struct{} `type:"structure"` + + // The serial number that uniquely identifies the MFA device. For virtual MFA + // devices, the serial number is the device ARN. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: =,.@:/- + // + // SerialNumber is a required field + SerialNumber *string `min:"9" type:"string" required:"true"` + + // The name of the user whose MFA device you want to deactivate. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeactivateMFADeviceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeactivateMFADeviceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeactivateMFADeviceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeactivateMFADeviceInput"} + if s.SerialNumber == nil { + invalidParams.Add(request.NewErrParamRequired("SerialNumber")) + } + if s.SerialNumber != nil && len(*s.SerialNumber) < 9 { + invalidParams.Add(request.NewErrParamMinLen("SerialNumber", 9)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *DeactivateMFADeviceInput) SetSerialNumber(v string) *DeactivateMFADeviceInput { + s.SerialNumber = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *DeactivateMFADeviceInput) SetUserName(v string) *DeactivateMFADeviceInput { + s.UserName = &v + return s +} + +type DeactivateMFADeviceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeactivateMFADeviceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeactivateMFADeviceOutput) GoString() string { + return s.String() +} + +type DeleteAccessKeyInput struct { + _ struct{} `type:"structure"` + + // The access key ID for the access key ID and secret access key you want to + // delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // AccessKeyId is a required field + AccessKeyId *string `min:"16" type:"string" required:"true"` + + // The name of the user whose access key pair you want to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteAccessKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccessKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
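+
+// NOTE: editorial sketch, not part of the generated AWS SDK code. The Validate
+// methods in this file never call the IAM service; they only enforce the
+// constraints encoded in the struct tags. Here "short" is under the
+// 16-character minimum for AccessKeyId, so Validate returns a
+// request.ErrInvalidParams describing that violation, while the optional
+// UserName can stay nil.
+func exampleDeleteAccessKeyValidateSketch() error {
+	return (&DeleteAccessKeyInput{}).SetAccessKeyId("short").Validate()
+}
+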
+func (s *DeleteAccessKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAccessKeyInput"} + if s.AccessKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("AccessKeyId")) + } + if s.AccessKeyId != nil && len(*s.AccessKeyId) < 16 { + invalidParams.Add(request.NewErrParamMinLen("AccessKeyId", 16)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *DeleteAccessKeyInput) SetAccessKeyId(v string) *DeleteAccessKeyInput { + s.AccessKeyId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *DeleteAccessKeyInput) SetUserName(v string) *DeleteAccessKeyInput { + s.UserName = &v + return s +} + +type DeleteAccessKeyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAccessKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccessKeyOutput) GoString() string { + return s.String() +} + +type DeleteAccountAliasInput struct { + _ struct{} `type:"structure"` + + // The name of the account alias to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of lowercase letters, digits, and dashes. + // You cannot start or finish with a dash, nor can you have two dashes in a + // row. + // + // AccountAlias is a required field + AccountAlias *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteAccountAliasInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountAliasInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteAccountAliasInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAccountAliasInput"} + if s.AccountAlias == nil { + invalidParams.Add(request.NewErrParamRequired("AccountAlias")) + } + if s.AccountAlias != nil && len(*s.AccountAlias) < 3 { + invalidParams.Add(request.NewErrParamMinLen("AccountAlias", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountAlias sets the AccountAlias field's value. 
+func (s *DeleteAccountAliasInput) SetAccountAlias(v string) *DeleteAccountAliasInput { + s.AccountAlias = &v + return s +} + +type DeleteAccountAliasOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAccountAliasOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountAliasOutput) GoString() string { + return s.String() +} + +type DeleteAccountPasswordPolicyInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAccountPasswordPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountPasswordPolicyInput) GoString() string { + return s.String() +} + +type DeleteAccountPasswordPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAccountPasswordPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountPasswordPolicyOutput) GoString() string { + return s.String() +} + +type DeleteGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the IAM group to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *DeleteGroupInput) SetGroupName(v string) *DeleteGroupInput { + s.GroupName = &v + return s +} + +type DeleteGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGroupOutput) GoString() string { + return s.String() +} + +type DeleteGroupPolicyInput struct { + _ struct{} `type:"structure"` + + // The name (friendly name, not ARN) identifying the group that the policy is + // embedded in. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The name identifying the policy document to delete. 
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteGroupPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGroupPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteGroupPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteGroupPolicyInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *DeleteGroupPolicyInput) SetGroupName(v string) *DeleteGroupPolicyInput { + s.GroupName = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *DeleteGroupPolicyInput) SetPolicyName(v string) *DeleteGroupPolicyInput { + s.PolicyName = &v + return s +} + +type DeleteGroupPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteGroupPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGroupPolicyOutput) GoString() string { + return s.String() +} + +type DeleteInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the instance profile to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // InstanceProfileName is a required field + InstanceProfileName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInstanceProfileInput"} + if s.InstanceProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceProfileName")) + } + if s.InstanceProfileName != nil && len(*s.InstanceProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceProfileName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceProfileName sets the InstanceProfileName field's value. 
+func (s *DeleteInstanceProfileInput) SetInstanceProfileName(v string) *DeleteInstanceProfileInput { + s.InstanceProfileName = &v + return s +} + +type DeleteInstanceProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstanceProfileOutput) GoString() string { + return s.String() +} + +type DeleteLoginProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the user whose password you want to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLoginProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLoginProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLoginProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLoginProfileInput"} + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetUserName sets the UserName field's value. +func (s *DeleteLoginProfileInput) SetUserName(v string) *DeleteLoginProfileInput { + s.UserName = &v + return s +} + +type DeleteLoginProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteLoginProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLoginProfileOutput) GoString() string { + return s.String() +} + +type DeleteOpenIDConnectProviderInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM OpenID Connect provider resource + // object to delete. You can get a list of OpenID Connect provider resource + // ARNs by using the ListOpenIDConnectProviders operation. + // + // OpenIDConnectProviderArn is a required field + OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteOpenIDConnectProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteOpenIDConnectProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteOpenIDConnectProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteOpenIDConnectProviderInput"} + if s.OpenIDConnectProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("OpenIDConnectProviderArn")) + } + if s.OpenIDConnectProviderArn != nil && len(*s.OpenIDConnectProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("OpenIDConnectProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOpenIDConnectProviderArn sets the OpenIDConnectProviderArn field's value. +func (s *DeleteOpenIDConnectProviderInput) SetOpenIDConnectProviderArn(v string) *DeleteOpenIDConnectProviderInput { + s.OpenIDConnectProviderArn = &v + return s +} + +type DeleteOpenIDConnectProviderOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteOpenIDConnectProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteOpenIDConnectProviderOutput) GoString() string { + return s.String() +} + +type DeletePolicyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to delete. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePolicyInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *DeletePolicyInput) SetPolicyArn(v string) *DeletePolicyInput { + s.PolicyArn = &v + return s +} + +type DeletePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeletePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyOutput) GoString() string { + return s.String() +} + +type DeletePolicyVersionInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy from which you want to delete + // a version. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The policy version to delete. 
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consists of the lowercase letter 'v' followed + // by one or two digits, and optionally followed by a period '.' and a string + // of letters and digits. + // + // For more information about managed policy versions, see Versioning for Managed + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the IAM User Guide. + // + // VersionId is a required field + VersionId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePolicyVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePolicyVersionInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.VersionId == nil { + invalidParams.Add(request.NewErrParamRequired("VersionId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *DeletePolicyVersionInput) SetPolicyArn(v string) *DeletePolicyVersionInput { + s.PolicyArn = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *DeletePolicyVersionInput) SetVersionId(v string) *DeletePolicyVersionInput { + s.VersionId = &v + return s +} + +type DeletePolicyVersionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeletePolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyVersionOutput) GoString() string { + return s.String() +} + +type DeleteRoleInput struct { + _ struct{} `type:"structure"` + + // The name of the role to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRoleInput"} + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRoleName sets the RoleName field's value. 
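+
+// NOTE: editorial usage sketch, not part of the generated AWS SDK code. The
+// policy ARN and account ID are placeholders; "v2" follows the version-id
+// pattern documented above (a lowercase 'v' followed by digits). Validate
+// checks the 20-character ARN minimum and the required VersionId client-side.
+func exampleDeletePolicyVersionInputSketch() error {
+	input := (&DeletePolicyVersionInput{}).
+		SetPolicyArn("arn:aws:iam::123456789012:policy/example-policy").
+		SetVersionId("v2")
+	return input.Validate()
+}
+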
+func (s *DeleteRoleInput) SetRoleName(v string) *DeleteRoleInput { + s.RoleName = &v + return s +} + +type DeleteRoleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRoleOutput) GoString() string { + return s.String() +} + +type DeleteRolePolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the inline policy to delete from the specified IAM role. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The name (friendly name, not ARN) identifying the role that the policy is + // embedded in. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteRolePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRolePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteRolePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRolePolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *DeleteRolePolicyInput) SetPolicyName(v string) *DeleteRolePolicyInput { + s.PolicyName = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *DeleteRolePolicyInput) SetRoleName(v string) *DeleteRolePolicyInput { + s.RoleName = &v + return s +} + +type DeleteRolePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteRolePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRolePolicyOutput) GoString() string { + return s.String() +} + +type DeleteSAMLProviderInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the SAML provider to delete. 
+ // + // SAMLProviderArn is a required field + SAMLProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSAMLProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSAMLProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSAMLProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSAMLProviderInput"} + if s.SAMLProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("SAMLProviderArn")) + } + if s.SAMLProviderArn != nil && len(*s.SAMLProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("SAMLProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSAMLProviderArn sets the SAMLProviderArn field's value. +func (s *DeleteSAMLProviderInput) SetSAMLProviderArn(v string) *DeleteSAMLProviderInput { + s.SAMLProviderArn = &v + return s +} + +type DeleteSAMLProviderOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteSAMLProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSAMLProviderOutput) GoString() string { + return s.String() +} + +type DeleteSSHPublicKeyInput struct { + _ struct{} `type:"structure"` + + // The unique identifier for the SSH public key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // SSHPublicKeyId is a required field + SSHPublicKeyId *string `min:"20" type:"string" required:"true"` + + // The name of the IAM user associated with the SSH public key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSSHPublicKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSSHPublicKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSSHPublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSSHPublicKeyInput"} + if s.SSHPublicKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("SSHPublicKeyId")) + } + if s.SSHPublicKeyId != nil && len(*s.SSHPublicKeyId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("SSHPublicKeyId", 20)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSSHPublicKeyId sets the SSHPublicKeyId field's value. +func (s *DeleteSSHPublicKeyInput) SetSSHPublicKeyId(v string) *DeleteSSHPublicKeyInput { + s.SSHPublicKeyId = &v + return s +} + +// SetUserName sets the UserName field's value. 
+func (s *DeleteSSHPublicKeyInput) SetUserName(v string) *DeleteSSHPublicKeyInput { + s.UserName = &v + return s +} + +type DeleteSSHPublicKeyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteSSHPublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSSHPublicKeyOutput) GoString() string { + return s.String() +} + +type DeleteServerCertificateInput struct { + _ struct{} `type:"structure"` + + // The name of the server certificate you want to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // ServerCertificateName is a required field + ServerCertificateName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteServerCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServerCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteServerCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteServerCertificateInput"} + if s.ServerCertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("ServerCertificateName")) + } + if s.ServerCertificateName != nil && len(*s.ServerCertificateName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServerCertificateName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServerCertificateName sets the ServerCertificateName field's value. +func (s *DeleteServerCertificateInput) SetServerCertificateName(v string) *DeleteServerCertificateInput { + s.ServerCertificateName = &v + return s +} + +type DeleteServerCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteServerCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServerCertificateOutput) GoString() string { + return s.String() +} + +type DeleteServiceLinkedRoleInput struct { + _ struct{} `type:"structure"` + + // The name of the service-linked role to be deleted. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteServiceLinkedRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServiceLinkedRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteServiceLinkedRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteServiceLinkedRoleInput"} + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRoleName sets the RoleName field's value. 
+func (s *DeleteServiceLinkedRoleInput) SetRoleName(v string) *DeleteServiceLinkedRoleInput { + s.RoleName = &v + return s +} + +type DeleteServiceLinkedRoleOutput struct { + _ struct{} `type:"structure"` + + // The deletion task identifier that you can use to check the status of the + // deletion. This identifier is returned in the format task/aws-service-role/<service-principal-name>/<role-name>/<task-uuid>. + // + // DeletionTaskId is a required field + DeletionTaskId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteServiceLinkedRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServiceLinkedRoleOutput) GoString() string { + return s.String() +} + +// SetDeletionTaskId sets the DeletionTaskId field's value. +func (s *DeleteServiceLinkedRoleOutput) SetDeletionTaskId(v string) *DeleteServiceLinkedRoleOutput { + s.DeletionTaskId = &v + return s +} + +type DeleteServiceSpecificCredentialInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the service-specific credential. You can get this + // value by calling ListServiceSpecificCredentials. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // ServiceSpecificCredentialId is a required field + ServiceSpecificCredentialId *string `min:"20" type:"string" required:"true"` + + // The name of the IAM user associated with the service-specific credential. + // If this value is not specified, then the operation assumes the user whose + // credentials are used to call the operation. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteServiceSpecificCredentialInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServiceSpecificCredentialInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteServiceSpecificCredentialInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteServiceSpecificCredentialInput"} + if s.ServiceSpecificCredentialId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceSpecificCredentialId")) + } + if s.ServiceSpecificCredentialId != nil && len(*s.ServiceSpecificCredentialId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("ServiceSpecificCredentialId", 20)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. +func (s *DeleteServiceSpecificCredentialInput) SetServiceSpecificCredentialId(v string) *DeleteServiceSpecificCredentialInput { + s.ServiceSpecificCredentialId = &v + return s +} + +// SetUserName sets the UserName field's value.
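+
+// NOTE: editorial usage sketch, not part of the generated AWS SDK code. It
+// assumes an already-initialized *IAM client and the DeleteServiceLinkedRole
+// client operation generated alongside these shapes. The role name is a
+// placeholder; the returned DeletionTaskId can be polled with
+// GetServiceLinkedRoleDeletionStatus.
+func exampleDeleteServiceLinkedRoleSketch(svc *IAM) (string, error) {
+	out, err := svc.DeleteServiceLinkedRole(
+		(&DeleteServiceLinkedRoleInput{}).SetRoleName("AWSServiceRoleForExample"))
+	if err != nil || out.DeletionTaskId == nil {
+		return "", err
+	}
+	return *out.DeletionTaskId, nil
+}
+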
+func (s *DeleteServiceSpecificCredentialInput) SetUserName(v string) *DeleteServiceSpecificCredentialInput { + s.UserName = &v + return s +} + +type DeleteServiceSpecificCredentialOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteServiceSpecificCredentialOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServiceSpecificCredentialOutput) GoString() string { + return s.String() +} + +type DeleteSigningCertificateInput struct { + _ struct{} `type:"structure"` + + // The ID of the signing certificate to delete. + // + // The format of this parameter, as described by its regex (http://wikipedia.org/wiki/regex) + // pattern, is a string of characters that can be upper- or lower-cased letters + // or digits. + // + // CertificateId is a required field + CertificateId *string `min:"24" type:"string" required:"true"` + + // The name of the user the signing certificate belongs to. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteSigningCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSigningCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSigningCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSigningCertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 24 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 24)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *DeleteSigningCertificateInput) SetCertificateId(v string) *DeleteSigningCertificateInput { + s.CertificateId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *DeleteSigningCertificateInput) SetUserName(v string) *DeleteSigningCertificateInput { + s.UserName = &v + return s +} + +type DeleteSigningCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteSigningCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSigningCertificateOutput) GoString() string { + return s.String() +} + +type DeleteUserInput struct { + _ struct{} `type:"structure"` + + // The name of the user to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteUserInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteUserInput"} + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetUserName sets the UserName field's value. +func (s *DeleteUserInput) SetUserName(v string) *DeleteUserInput { + s.UserName = &v + return s +} + +type DeleteUserOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserOutput) GoString() string { + return s.String() +} + +type DeleteUserPolicyInput struct { + _ struct{} `type:"structure"` + + // The name identifying the policy document to delete. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The name (friendly name, not ARN) identifying the user that the policy is + // embedded in. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteUserPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteUserPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteUserPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *DeleteUserPolicyInput) SetPolicyName(v string) *DeleteUserPolicyInput { + s.PolicyName = &v + return s +} + +// SetUserName sets the UserName field's value. 
+func (s *DeleteUserPolicyInput) SetUserName(v string) *DeleteUserPolicyInput { + s.UserName = &v + return s +} + +type DeleteUserPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteUserPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserPolicyOutput) GoString() string { + return s.String() +} + +type DeleteVirtualMFADeviceInput struct { + _ struct{} `type:"structure"` + + // The serial number that uniquely identifies the MFA device. For virtual MFA + // devices, the serial number is the same as the ARN. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: =,.@:/- + // + // SerialNumber is a required field + SerialNumber *string `min:"9" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteVirtualMFADeviceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVirtualMFADeviceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteVirtualMFADeviceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteVirtualMFADeviceInput"} + if s.SerialNumber == nil { + invalidParams.Add(request.NewErrParamRequired("SerialNumber")) + } + if s.SerialNumber != nil && len(*s.SerialNumber) < 9 { + invalidParams.Add(request.NewErrParamMinLen("SerialNumber", 9)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *DeleteVirtualMFADeviceInput) SetSerialNumber(v string) *DeleteVirtualMFADeviceInput { + s.SerialNumber = &v + return s +} + +type DeleteVirtualMFADeviceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteVirtualMFADeviceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVirtualMFADeviceOutput) GoString() string { + return s.String() +} + +// The reason that the service-linked role deletion failed. +// +// This data type is used as a response element in the GetServiceLinkedRoleDeletionStatus +// operation. +type DeletionTaskFailureReasonType struct { + _ struct{} `type:"structure"` + + // A short description of the reason that the service-linked role deletion failed. + Reason *string `type:"string"` + + // A list of objects that contains details about the service-linked role deletion + // failure, if that information is returned by the service. If the service-linked + // role has active sessions or if any resources that were used by the role have + // not been deleted from the linked service, the role can't be deleted. This + // parameter includes a list of the resources that are associated with the role + // and the region in which the resources are being used. 
+ RoleUsageList []*RoleUsageType `type:"list"` +} + +// String returns the string representation +func (s DeletionTaskFailureReasonType) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletionTaskFailureReasonType) GoString() string { + return s.String() +} + +// SetReason sets the Reason field's value. +func (s *DeletionTaskFailureReasonType) SetReason(v string) *DeletionTaskFailureReasonType { + s.Reason = &v + return s +} + +// SetRoleUsageList sets the RoleUsageList field's value. +func (s *DeletionTaskFailureReasonType) SetRoleUsageList(v []*RoleUsageType) *DeletionTaskFailureReasonType { + s.RoleUsageList = v + return s +} + +type DetachGroupPolicyInput struct { + _ struct{} `type:"structure"` + + // The name (friendly name, not ARN) of the IAM group to detach the policy from. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to detach. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DetachGroupPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachGroupPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DetachGroupPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachGroupPolicyInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *DetachGroupPolicyInput) SetGroupName(v string) *DetachGroupPolicyInput { + s.GroupName = &v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *DetachGroupPolicyInput) SetPolicyArn(v string) *DetachGroupPolicyInput { + s.PolicyArn = &v + return s +} + +type DetachGroupPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DetachGroupPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachGroupPolicyOutput) GoString() string { + return s.String() +} + +type DetachRolePolicyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to detach. 
+ // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The name (friendly name, not ARN) of the IAM role to detach the policy from. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DetachRolePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachRolePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DetachRolePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachRolePolicyInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *DetachRolePolicyInput) SetPolicyArn(v string) *DetachRolePolicyInput { + s.PolicyArn = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *DetachRolePolicyInput) SetRoleName(v string) *DetachRolePolicyInput { + s.RoleName = &v + return s +} + +type DetachRolePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DetachRolePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachRolePolicyOutput) GoString() string { + return s.String() +} + +type DetachUserPolicyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy you want to detach. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The name (friendly name, not ARN) of the IAM user to detach the policy from. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DetachUserPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachUserPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DetachUserPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachUserPolicyInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *DetachUserPolicyInput) SetPolicyArn(v string) *DetachUserPolicyInput { + s.PolicyArn = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *DetachUserPolicyInput) SetUserName(v string) *DetachUserPolicyInput { + s.UserName = &v + return s +} + +type DetachUserPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DetachUserPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachUserPolicyOutput) GoString() string { + return s.String() +} + +type EnableMFADeviceInput struct { + _ struct{} `type:"structure"` + + // An authentication code emitted by the device. + // + // The format for this parameter is a string of six digits. + // + // Submit your request immediately after generating the authentication codes. + // If you generate the codes and then wait too long to submit the request, the + // MFA device successfully associates with the user but the MFA device becomes + // out of sync. This happens because time-based one-time passwords (TOTP) expire + // after a short period of time. If this happens, you can resync the device + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_sync.html). + // + // AuthenticationCode1 is a required field + AuthenticationCode1 *string `min:"6" type:"string" required:"true"` + + // A subsequent authentication code emitted by the device. + // + // The format for this parameter is a string of six digits. + // + // Submit your request immediately after generating the authentication codes. + // If you generate the codes and then wait too long to submit the request, the + // MFA device successfully associates with the user but the MFA device becomes + // out of sync. This happens because time-based one-time passwords (TOTP) expire + // after a short period of time. If this happens, you can resync the device + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_sync.html). + // + // AuthenticationCode2 is a required field + AuthenticationCode2 *string `min:"6" type:"string" required:"true"` + + // The serial number that uniquely identifies the MFA device. For virtual MFA + // devices, the serial number is the device ARN. 
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: =,.@:/- + // + // SerialNumber is a required field + SerialNumber *string `min:"9" type:"string" required:"true"` + + // The name of the IAM user for whom you want to enable the MFA device. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s EnableMFADeviceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableMFADeviceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *EnableMFADeviceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EnableMFADeviceInput"} + if s.AuthenticationCode1 == nil { + invalidParams.Add(request.NewErrParamRequired("AuthenticationCode1")) + } + if s.AuthenticationCode1 != nil && len(*s.AuthenticationCode1) < 6 { + invalidParams.Add(request.NewErrParamMinLen("AuthenticationCode1", 6)) + } + if s.AuthenticationCode2 == nil { + invalidParams.Add(request.NewErrParamRequired("AuthenticationCode2")) + } + if s.AuthenticationCode2 != nil && len(*s.AuthenticationCode2) < 6 { + invalidParams.Add(request.NewErrParamMinLen("AuthenticationCode2", 6)) + } + if s.SerialNumber == nil { + invalidParams.Add(request.NewErrParamRequired("SerialNumber")) + } + if s.SerialNumber != nil && len(*s.SerialNumber) < 9 { + invalidParams.Add(request.NewErrParamMinLen("SerialNumber", 9)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthenticationCode1 sets the AuthenticationCode1 field's value. +func (s *EnableMFADeviceInput) SetAuthenticationCode1(v string) *EnableMFADeviceInput { + s.AuthenticationCode1 = &v + return s +} + +// SetAuthenticationCode2 sets the AuthenticationCode2 field's value. +func (s *EnableMFADeviceInput) SetAuthenticationCode2(v string) *EnableMFADeviceInput { + s.AuthenticationCode2 = &v + return s +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *EnableMFADeviceInput) SetSerialNumber(v string) *EnableMFADeviceInput { + s.SerialNumber = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *EnableMFADeviceInput) SetUserName(v string) *EnableMFADeviceInput { + s.UserName = &v + return s +} + +type EnableMFADeviceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s EnableMFADeviceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableMFADeviceOutput) GoString() string { + return s.String() +} + +// Contains the results of a simulation. +// +// This data type is used by the return parameter of SimulateCustomPolicy and +// SimulatePrincipalPolicy. 
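+
+// exampleEnableVirtualMFADevice is an illustrative sketch, not part of the
+// generated SDK surface. It shows the two consecutive TOTP codes that
+// EnableMFADevice expects, per the field documentation above; svc, userName,
+// serialNumber, code1, and code2 are assumptions made for the example. As the
+// documentation notes, the request should be submitted promptly after the
+// codes are generated so the device does not fall out of sync.
+func exampleEnableVirtualMFADevice(svc *IAM, userName, serialNumber, code1, code2 string) error {
+	input := (&EnableMFADeviceInput{}).
+		SetUserName(userName).
+		SetSerialNumber(serialNumber).
+		SetAuthenticationCode1(code1).
+		SetAuthenticationCode2(code2)
+	_, err := svc.EnableMFADevice(input)
+	return err
+}
+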
+type EvaluationResult struct { + _ struct{} `type:"structure"` + + // The name of the API operation tested on the indicated resource. + // + // EvalActionName is a required field + EvalActionName *string `min:"3" type:"string" required:"true"` + + // The result of the simulation. + // + // EvalDecision is a required field + EvalDecision *string `type:"string" required:"true" enum:"PolicyEvaluationDecisionType"` + + // Additional details about the results of the evaluation decision. When there + // are both IAM policies and resource policies, this parameter explains how + // each set of policies contributes to the final evaluation decision. When simulating + // cross-account access to a resource, both the resource-based policy and the + // caller's IAM policy must grant access. See How IAM Roles Differ from Resource-based + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html) + EvalDecisionDetails map[string]*string `type:"map"` + + // The ARN of the resource that the indicated API operation was tested on. + EvalResourceName *string `min:"1" type:"string"` + + // A list of the statements in the input policies that determine the result + // for this scenario. Remember that even if multiple statements allow the operation + // on the resource, if only one statement denies that operation, then the explicit + // deny overrides any allow, and the deny statement is the only entry included + // in the result. + MatchedStatements []*Statement `type:"list"` + + // A list of context keys that are required by the included input policies but + // that were not provided by one of the input parameters. This list is used + // when the resource in a simulation is "*", either explicitly, or when the + // ResourceArns parameter blank. If you include a list of resources, then any + // missing context values are instead included under the ResourceSpecificResults + // section. To discover the context keys used by a set of policies, you can + // call GetContextKeysForCustomPolicy or GetContextKeysForPrincipalPolicy. + MissingContextValues []*string `type:"list"` + + // A structure that details how AWS Organizations and its service control policies + // affect the results of the simulation. Only applies if the simulated user's + // account is part of an organization. + OrganizationsDecisionDetail *OrganizationsDecisionDetail `type:"structure"` + + // The individual results of the simulation of the API operation specified in + // EvalActionName on each resource. + ResourceSpecificResults []*ResourceSpecificResult `type:"list"` +} + +// String returns the string representation +func (s EvaluationResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EvaluationResult) GoString() string { + return s.String() +} + +// SetEvalActionName sets the EvalActionName field's value. +func (s *EvaluationResult) SetEvalActionName(v string) *EvaluationResult { + s.EvalActionName = &v + return s +} + +// SetEvalDecision sets the EvalDecision field's value. +func (s *EvaluationResult) SetEvalDecision(v string) *EvaluationResult { + s.EvalDecision = &v + return s +} + +// SetEvalDecisionDetails sets the EvalDecisionDetails field's value. +func (s *EvaluationResult) SetEvalDecisionDetails(v map[string]*string) *EvaluationResult { + s.EvalDecisionDetails = v + return s +} + +// SetEvalResourceName sets the EvalResourceName field's value. 
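+
+// exampleSimulateAndInspect is an illustrative sketch, not part of the
+// generated SDK surface, showing how EvaluationResult values are typically
+// consumed: SimulateCustomPolicy (whose input and response types are defined
+// elsewhere in this package) returns one result per simulated action, and
+// EvalDecision reports whether that action was allowed or denied. The svc,
+// policyJSON, and actionNames parameters are assumptions made for the example.
+func exampleSimulateAndInspect(svc *IAM, policyJSON string, actionNames []string) (map[string]string, error) {
+	input := &SimulateCustomPolicyInput{}
+	input.SetPolicyInputList([]*string{&policyJSON})
+	actions := make([]*string, len(actionNames))
+	for i := range actionNames {
+		actions[i] = &actionNames[i]
+	}
+	input.SetActionNames(actions)
+	out, err := svc.SimulateCustomPolicy(input)
+	if err != nil {
+		return nil, err
+	}
+	decisions := make(map[string]string)
+	for _, r := range out.EvaluationResults {
+		// EvalActionName and EvalDecision are required members of each result.
+		decisions[*r.EvalActionName] = *r.EvalDecision
+	}
+	return decisions, nil
+}
+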
+func (s *EvaluationResult) SetEvalResourceName(v string) *EvaluationResult { + s.EvalResourceName = &v + return s +} + +// SetMatchedStatements sets the MatchedStatements field's value. +func (s *EvaluationResult) SetMatchedStatements(v []*Statement) *EvaluationResult { + s.MatchedStatements = v + return s +} + +// SetMissingContextValues sets the MissingContextValues field's value. +func (s *EvaluationResult) SetMissingContextValues(v []*string) *EvaluationResult { + s.MissingContextValues = v + return s +} + +// SetOrganizationsDecisionDetail sets the OrganizationsDecisionDetail field's value. +func (s *EvaluationResult) SetOrganizationsDecisionDetail(v *OrganizationsDecisionDetail) *EvaluationResult { + s.OrganizationsDecisionDetail = v + return s +} + +// SetResourceSpecificResults sets the ResourceSpecificResults field's value. +func (s *EvaluationResult) SetResourceSpecificResults(v []*ResourceSpecificResult) *EvaluationResult { + s.ResourceSpecificResults = v + return s +} + +type GenerateCredentialReportInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GenerateCredentialReportInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GenerateCredentialReportInput) GoString() string { + return s.String() +} + +// Contains the response to a successful GenerateCredentialReport request. +type GenerateCredentialReportOutput struct { + _ struct{} `type:"structure"` + + // Information about the credential report. + Description *string `type:"string"` + + // Information about the state of the credential report. + State *string `type:"string" enum:"ReportStateType"` +} + +// String returns the string representation +func (s GenerateCredentialReportOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GenerateCredentialReportOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *GenerateCredentialReportOutput) SetDescription(v string) *GenerateCredentialReportOutput { + s.Description = &v + return s +} + +// SetState sets the State field's value. +func (s *GenerateCredentialReportOutput) SetState(v string) *GenerateCredentialReportOutput { + s.State = &v + return s +} + +type GetAccessKeyLastUsedInput struct { + _ struct{} `type:"structure"` + + // The identifier of an access key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // AccessKeyId is a required field + AccessKeyId *string `min:"16" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetAccessKeyLastUsedInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccessKeyLastUsedInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
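+
+// exampleAccessKeyOwner is an illustrative sketch, not part of the generated
+// SDK surface. It shows how GetAccessKeyLastUsed pairs the owning user name
+// with the AccessKeyLastUsed details for a given key ID; svc and accessKeyID
+// are assumptions made for the example.
+func exampleAccessKeyOwner(svc *IAM, accessKeyID string) (string, *AccessKeyLastUsed, error) {
+	input := (&GetAccessKeyLastUsedInput{}).SetAccessKeyId(accessKeyID)
+	out, err := svc.GetAccessKeyLastUsed(input)
+	if err != nil {
+		return "", nil, err
+	}
+	owner := ""
+	if out.UserName != nil {
+		owner = *out.UserName
+	}
+	return owner, out.AccessKeyLastUsed, nil
+}
+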
+func (s *GetAccessKeyLastUsedInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAccessKeyLastUsedInput"} + if s.AccessKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("AccessKeyId")) + } + if s.AccessKeyId != nil && len(*s.AccessKeyId) < 16 { + invalidParams.Add(request.NewErrParamMinLen("AccessKeyId", 16)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *GetAccessKeyLastUsedInput) SetAccessKeyId(v string) *GetAccessKeyLastUsedInput { + s.AccessKeyId = &v + return s +} + +// Contains the response to a successful GetAccessKeyLastUsed request. It is +// also returned as a member of the AccessKeyMetaData structure returned by +// the ListAccessKeys action. +type GetAccessKeyLastUsedOutput struct { + _ struct{} `type:"structure"` + + // Contains information about the last time the access key was used. + AccessKeyLastUsed *AccessKeyLastUsed `type:"structure"` + + // The name of the AWS IAM user that owns this access key. + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetAccessKeyLastUsedOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccessKeyLastUsedOutput) GoString() string { + return s.String() +} + +// SetAccessKeyLastUsed sets the AccessKeyLastUsed field's value. +func (s *GetAccessKeyLastUsedOutput) SetAccessKeyLastUsed(v *AccessKeyLastUsed) *GetAccessKeyLastUsedOutput { + s.AccessKeyLastUsed = v + return s +} + +// SetUserName sets the UserName field's value. +func (s *GetAccessKeyLastUsedOutput) SetUserName(v string) *GetAccessKeyLastUsedOutput { + s.UserName = &v + return s +} + +type GetAccountAuthorizationDetailsInput struct { + _ struct{} `type:"structure"` + + // A list of entity types used to filter the results. Only the entities that + // match the types you specify are included in the output. Use the value LocalManagedPolicy + // to include customer managed policies. + // + // The format for this parameter is a comma-separated (if more than one) list + // of strings. Each string value in the list must be one of the valid values + // listed below. + Filter []*string `type:"list"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. 
+ MaxItems *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s GetAccountAuthorizationDetailsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccountAuthorizationDetailsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetAccountAuthorizationDetailsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAccountAuthorizationDetailsInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilter sets the Filter field's value. +func (s *GetAccountAuthorizationDetailsInput) SetFilter(v []*string) *GetAccountAuthorizationDetailsInput { + s.Filter = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *GetAccountAuthorizationDetailsInput) SetMarker(v string) *GetAccountAuthorizationDetailsInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *GetAccountAuthorizationDetailsInput) SetMaxItems(v int64) *GetAccountAuthorizationDetailsInput { + s.MaxItems = &v + return s +} + +// Contains the response to a successful GetAccountAuthorizationDetails request. +type GetAccountAuthorizationDetailsOutput struct { + _ struct{} `type:"structure"` + + // A list containing information about IAM groups. + GroupDetailList []*GroupDetail `type:"list"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list containing information about managed policies. + Policies []*ManagedPolicyDetail `type:"list"` + + // A list containing information about IAM roles. + RoleDetailList []*RoleDetail `type:"list"` + + // A list containing information about IAM users. + UserDetailList []*UserDetail `type:"list"` +} + +// String returns the string representation +func (s GetAccountAuthorizationDetailsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccountAuthorizationDetailsOutput) GoString() string { + return s.String() +} + +// SetGroupDetailList sets the GroupDetailList field's value. +func (s *GetAccountAuthorizationDetailsOutput) SetGroupDetailList(v []*GroupDetail) *GetAccountAuthorizationDetailsOutput { + s.GroupDetailList = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *GetAccountAuthorizationDetailsOutput) SetIsTruncated(v bool) *GetAccountAuthorizationDetailsOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. 
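+
+// exampleCollectUserDetails is an illustrative sketch, not part of the
+// generated SDK surface. It demonstrates the Marker/IsTruncated pagination
+// protocol described in the field documentation above: keep calling
+// GetAccountAuthorizationDetails with the returned Marker until IsTruncated is
+// no longer true. The svc parameter is an assumption made for the example.
+func exampleCollectUserDetails(svc *IAM) ([]*UserDetail, error) {
+	var users []*UserDetail
+	input := &GetAccountAuthorizationDetailsInput{}
+	for {
+		out, err := svc.GetAccountAuthorizationDetails(input)
+		if err != nil {
+			return nil, err
+		}
+		users = append(users, out.UserDetailList...)
+		if out.IsTruncated == nil || !*out.IsTruncated {
+			return users, nil
+		}
+		// When IsTruncated is true, Marker carries the continuation token for
+		// the next page.
+		input.Marker = out.Marker
+	}
+}
+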
+func (s *GetAccountAuthorizationDetailsOutput) SetMarker(v string) *GetAccountAuthorizationDetailsOutput { + s.Marker = &v + return s +} + +// SetPolicies sets the Policies field's value. +func (s *GetAccountAuthorizationDetailsOutput) SetPolicies(v []*ManagedPolicyDetail) *GetAccountAuthorizationDetailsOutput { + s.Policies = v + return s +} + +// SetRoleDetailList sets the RoleDetailList field's value. +func (s *GetAccountAuthorizationDetailsOutput) SetRoleDetailList(v []*RoleDetail) *GetAccountAuthorizationDetailsOutput { + s.RoleDetailList = v + return s +} + +// SetUserDetailList sets the UserDetailList field's value. +func (s *GetAccountAuthorizationDetailsOutput) SetUserDetailList(v []*UserDetail) *GetAccountAuthorizationDetailsOutput { + s.UserDetailList = v + return s +} + +type GetAccountPasswordPolicyInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetAccountPasswordPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccountPasswordPolicyInput) GoString() string { + return s.String() +} + +// Contains the response to a successful GetAccountPasswordPolicy request. +type GetAccountPasswordPolicyOutput struct { + _ struct{} `type:"structure"` + + // A structure that contains details about the account's password policy. + // + // PasswordPolicy is a required field + PasswordPolicy *PasswordPolicy `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetAccountPasswordPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccountPasswordPolicyOutput) GoString() string { + return s.String() +} + +// SetPasswordPolicy sets the PasswordPolicy field's value. +func (s *GetAccountPasswordPolicyOutput) SetPasswordPolicy(v *PasswordPolicy) *GetAccountPasswordPolicyOutput { + s.PasswordPolicy = v + return s +} + +type GetAccountSummaryInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetAccountSummaryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccountSummaryInput) GoString() string { + return s.String() +} + +// Contains the response to a successful GetAccountSummary request. +type GetAccountSummaryOutput struct { + _ struct{} `type:"structure"` + + // A set of key value pairs containing information about IAM entity usage and + // IAM quotas. + SummaryMap map[string]*int64 `type:"map"` +} + +// String returns the string representation +func (s GetAccountSummaryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAccountSummaryOutput) GoString() string { + return s.String() +} + +// SetSummaryMap sets the SummaryMap field's value. +func (s *GetAccountSummaryOutput) SetSummaryMap(v map[string]*int64) *GetAccountSummaryOutput { + s.SummaryMap = v + return s +} + +type GetContextKeysForCustomPolicyInput struct { + _ struct{} `type:"structure"` + + // A list of policies for which you want the list of context keys referenced + // in those policies. Each document is specified as a string containing the + // complete, valid JSON text of an IAM policy. 
+ // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyInputList is a required field + PolicyInputList []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s GetContextKeysForCustomPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetContextKeysForCustomPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetContextKeysForCustomPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetContextKeysForCustomPolicyInput"} + if s.PolicyInputList == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyInputList")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyInputList sets the PolicyInputList field's value. +func (s *GetContextKeysForCustomPolicyInput) SetPolicyInputList(v []*string) *GetContextKeysForCustomPolicyInput { + s.PolicyInputList = v + return s +} + +// Contains the response to a successful GetContextKeysForPrincipalPolicy or +// GetContextKeysForCustomPolicy request. +type GetContextKeysForPolicyResponse struct { + _ struct{} `type:"structure"` + + // The list of context keys that are referenced in the input policies. + ContextKeyNames []*string `type:"list"` +} + +// String returns the string representation +func (s GetContextKeysForPolicyResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetContextKeysForPolicyResponse) GoString() string { + return s.String() +} + +// SetContextKeyNames sets the ContextKeyNames field's value. +func (s *GetContextKeysForPolicyResponse) SetContextKeyNames(v []*string) *GetContextKeysForPolicyResponse { + s.ContextKeyNames = v + return s +} + +type GetContextKeysForPrincipalPolicyInput struct { + _ struct{} `type:"structure"` + + // An optional list of additional policies for which you want the list of context + // keys that are referenced. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + PolicyInputList []*string `type:"list"` + + // The ARN of a user, group, or role whose policies contain the context keys + // that you want listed. If you specify a user, the list includes context keys + // that are found in all policies that are attached to the user. The list also + // includes all groups that the user is a member of. If you pick a group or + // a role, then it includes only those context keys that are found in policies + // attached to that entity. 
Note that all parameters are shown in unencoded + // form here for clarity, but must be URL encoded to be included as a part of + // a real HTML request. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicySourceArn is a required field + PolicySourceArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetContextKeysForPrincipalPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetContextKeysForPrincipalPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetContextKeysForPrincipalPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetContextKeysForPrincipalPolicyInput"} + if s.PolicySourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicySourceArn")) + } + if s.PolicySourceArn != nil && len(*s.PolicySourceArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicySourceArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyInputList sets the PolicyInputList field's value. +func (s *GetContextKeysForPrincipalPolicyInput) SetPolicyInputList(v []*string) *GetContextKeysForPrincipalPolicyInput { + s.PolicyInputList = v + return s +} + +// SetPolicySourceArn sets the PolicySourceArn field's value. +func (s *GetContextKeysForPrincipalPolicyInput) SetPolicySourceArn(v string) *GetContextKeysForPrincipalPolicyInput { + s.PolicySourceArn = &v + return s +} + +type GetCredentialReportInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetCredentialReportInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCredentialReportInput) GoString() string { + return s.String() +} + +// Contains the response to a successful GetCredentialReport request. +type GetCredentialReportOutput struct { + _ struct{} `type:"structure"` + + // Contains the credential report. The report is Base64-encoded. + // + // Content is automatically base64 encoded/decoded by the SDK. + Content []byte `type:"blob"` + + // The date and time when the credential report was created, in ISO 8601 date-time + // format (http://www.iso.org/iso/iso8601). + GeneratedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The format (MIME type) of the credential report. + ReportFormat *string `type:"string" enum:"ReportFormatType"` +} + +// String returns the string representation +func (s GetCredentialReportOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCredentialReportOutput) GoString() string { + return s.String() +} + +// SetContent sets the Content field's value. +func (s *GetCredentialReportOutput) SetContent(v []byte) *GetCredentialReportOutput { + s.Content = v + return s +} + +// SetGeneratedTime sets the GeneratedTime field's value. +func (s *GetCredentialReportOutput) SetGeneratedTime(v time.Time) *GetCredentialReportOutput { + s.GeneratedTime = &v + return s +} + +// SetReportFormat sets the ReportFormat field's value. 
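+
+// exampleFetchCredentialReport is an illustrative sketch, not part of the
+// generated SDK surface. It shows the usual two-step flow: ask IAM to generate
+// a credential report, then fetch it once the report is ready. Content is
+// automatically base64-decoded by the SDK, so the returned bytes are the
+// decoded report in the format indicated by ReportFormat. The svc parameter is
+// an assumption made for the example; a real caller would poll until the
+// report state is complete rather than failing on the first error.
+func exampleFetchCredentialReport(svc *IAM) ([]byte, error) {
+	if _, err := svc.GenerateCredentialReport(&GenerateCredentialReportInput{}); err != nil {
+		return nil, err
+	}
+	out, err := svc.GetCredentialReport(&GetCredentialReportInput{})
+	if err != nil {
+		// GetCredentialReport returns an error while the report is still being
+		// generated; callers typically wait and retry.
+		return nil, err
+	}
+	return out.Content, nil
+}
+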
+func (s *GetCredentialReportOutput) SetReportFormat(v string) *GetCredentialReportOutput { + s.ReportFormat = &v + return s +} + +type GetGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the group. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s GetGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *GetGroupInput) SetGroupName(v string) *GetGroupInput { + s.GroupName = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *GetGroupInput) SetMarker(v string) *GetGroupInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *GetGroupInput) SetMaxItems(v int64) *GetGroupInput { + s.MaxItems = &v + return s +} + +// Contains the response to a successful GetGroup request. +type GetGroupOutput struct { + _ struct{} `type:"structure"` + + // A structure that contains details about the group. + // + // Group is a required field + Group *Group `type:"structure" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. 
+ // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of users in the group. + // + // Users is a required field + Users []*User `type:"list" required:"true"` +} + +// String returns the string representation +func (s GetGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupOutput) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *GetGroupOutput) SetGroup(v *Group) *GetGroupOutput { + s.Group = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *GetGroupOutput) SetIsTruncated(v bool) *GetGroupOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *GetGroupOutput) SetMarker(v string) *GetGroupOutput { + s.Marker = &v + return s +} + +// SetUsers sets the Users field's value. +func (s *GetGroupOutput) SetUsers(v []*User) *GetGroupOutput { + s.Users = v + return s +} + +type GetGroupPolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the group the policy is associated with. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The name of the policy document to get. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetGroupPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetGroupPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetGroupPolicyInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *GetGroupPolicyInput) SetGroupName(v string) *GetGroupPolicyInput { + s.GroupName = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *GetGroupPolicyInput) SetPolicyName(v string) *GetGroupPolicyInput { + s.PolicyName = &v + return s +} + +// Contains the response to a successful GetGroupPolicy request. 
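+
+// exampleListGroupMembers is an illustrative sketch, not part of the generated
+// SDK surface. It applies the same Marker/IsTruncated pagination pattern to
+// GetGroup in order to collect every user in a group; svc and groupName are
+// assumptions made for the example.
+func exampleListGroupMembers(svc *IAM, groupName string) ([]*User, error) {
+	var users []*User
+	input := (&GetGroupInput{}).SetGroupName(groupName)
+	for {
+		out, err := svc.GetGroup(input)
+		if err != nil {
+			return nil, err
+		}
+		users = append(users, out.Users...)
+		if out.IsTruncated == nil || !*out.IsTruncated {
+			return users, nil
+		}
+		input.Marker = out.Marker
+	}
+}
+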
+type GetGroupPolicyOutput struct { + _ struct{} `type:"structure"` + + // The group the policy is associated with. + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The policy document. + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the policy. + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetGroupPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupPolicyOutput) GoString() string { + return s.String() +} + +// SetGroupName sets the GroupName field's value. +func (s *GetGroupPolicyOutput) SetGroupName(v string) *GetGroupPolicyOutput { + s.GroupName = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *GetGroupPolicyOutput) SetPolicyDocument(v string) *GetGroupPolicyOutput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *GetGroupPolicyOutput) SetPolicyName(v string) *GetGroupPolicyOutput { + s.PolicyName = &v + return s +} + +type GetInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the instance profile to get information about. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // InstanceProfileName is a required field + InstanceProfileName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceProfileInput"} + if s.InstanceProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceProfileName")) + } + if s.InstanceProfileName != nil && len(*s.InstanceProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceProfileName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceProfileName sets the InstanceProfileName field's value. +func (s *GetInstanceProfileInput) SetInstanceProfileName(v string) *GetInstanceProfileInput { + s.InstanceProfileName = &v + return s +} + +// Contains the response to a successful GetInstanceProfile request. +type GetInstanceProfileOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the instance profile. + // + // InstanceProfile is a required field + InstanceProfile *InstanceProfile `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInstanceProfileOutput) GoString() string { + return s.String() +} + +// SetInstanceProfile sets the InstanceProfile field's value. 
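+
+// exampleInstanceProfileRoles is an illustrative sketch, not part of the
+// generated SDK surface. It fetches an instance profile and returns the names
+// of the roles it contains; svc and profileName are assumptions made for the
+// example.
+func exampleInstanceProfileRoles(svc *IAM, profileName string) ([]string, error) {
+	out, err := svc.GetInstanceProfile((&GetInstanceProfileInput{}).SetInstanceProfileName(profileName))
+	if err != nil {
+		return nil, err
+	}
+	var names []string
+	for _, role := range out.InstanceProfile.Roles {
+		if role.RoleName != nil {
+			names = append(names, *role.RoleName)
+		}
+	}
+	return names, nil
+}
+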
+func (s *GetInstanceProfileOutput) SetInstanceProfile(v *InstanceProfile) *GetInstanceProfileOutput { + s.InstanceProfile = v + return s +} + +type GetLoginProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the user whose login profile you want to retrieve. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetLoginProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLoginProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLoginProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLoginProfileInput"} + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetUserName sets the UserName field's value. +func (s *GetLoginProfileInput) SetUserName(v string) *GetLoginProfileInput { + s.UserName = &v + return s +} + +// Contains the response to a successful GetLoginProfile request. +type GetLoginProfileOutput struct { + _ struct{} `type:"structure"` + + // A structure containing the user name and password create date for the user. + // + // LoginProfile is a required field + LoginProfile *LoginProfile `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetLoginProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLoginProfileOutput) GoString() string { + return s.String() +} + +// SetLoginProfile sets the LoginProfile field's value. +func (s *GetLoginProfileOutput) SetLoginProfile(v *LoginProfile) *GetLoginProfileOutput { + s.LoginProfile = v + return s +} + +type GetOpenIDConnectProviderInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the OIDC provider resource object in IAM + // to get information for. You can get a list of OIDC provider resource ARNs + // by using the ListOpenIDConnectProviders operation. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // OpenIDConnectProviderArn is a required field + OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetOpenIDConnectProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetOpenIDConnectProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetOpenIDConnectProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetOpenIDConnectProviderInput"} + if s.OpenIDConnectProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("OpenIDConnectProviderArn")) + } + if s.OpenIDConnectProviderArn != nil && len(*s.OpenIDConnectProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("OpenIDConnectProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOpenIDConnectProviderArn sets the OpenIDConnectProviderArn field's value. +func (s *GetOpenIDConnectProviderInput) SetOpenIDConnectProviderArn(v string) *GetOpenIDConnectProviderInput { + s.OpenIDConnectProviderArn = &v + return s +} + +// Contains the response to a successful GetOpenIDConnectProvider request. +type GetOpenIDConnectProviderOutput struct { + _ struct{} `type:"structure"` + + // A list of client IDs (also known as audiences) that are associated with the + // specified IAM OIDC provider resource object. For more information, see CreateOpenIDConnectProvider. + ClientIDList []*string `type:"list"` + + // The date and time when the IAM OIDC provider resource object was created + // in the AWS account. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // A list of certificate thumbprints that are associated with the specified + // IAM OIDC provider resource object. For more information, see CreateOpenIDConnectProvider. + ThumbprintList []*string `type:"list"` + + // The URL that the IAM OIDC provider resource object is associated with. For + // more information, see CreateOpenIDConnectProvider. + Url *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetOpenIDConnectProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetOpenIDConnectProviderOutput) GoString() string { + return s.String() +} + +// SetClientIDList sets the ClientIDList field's value. +func (s *GetOpenIDConnectProviderOutput) SetClientIDList(v []*string) *GetOpenIDConnectProviderOutput { + s.ClientIDList = v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *GetOpenIDConnectProviderOutput) SetCreateDate(v time.Time) *GetOpenIDConnectProviderOutput { + s.CreateDate = &v + return s +} + +// SetThumbprintList sets the ThumbprintList field's value. +func (s *GetOpenIDConnectProviderOutput) SetThumbprintList(v []*string) *GetOpenIDConnectProviderOutput { + s.ThumbprintList = v + return s +} + +// SetUrl sets the Url field's value. +func (s *GetOpenIDConnectProviderOutput) SetUrl(v string) *GetOpenIDConnectProviderOutput { + s.Url = &v + return s +} + +type GetPolicyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the managed policy that you want information + // about. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
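+
+// exampleDefaultPolicyDocument is an illustrative sketch, not part of the
+// generated SDK surface. It chains GetPolicy and GetPolicyVersion to retrieve
+// the document of a managed policy's default version; svc and policyArn are
+// assumptions made for the example. The returned document is URL-encoded and
+// may need to be unescaped before use.
+func exampleDefaultPolicyDocument(svc *IAM, policyArn string) (string, error) {
+	pol, err := svc.GetPolicy((&GetPolicyInput{}).SetPolicyArn(policyArn))
+	if err != nil {
+		return "", err
+	}
+	if pol.Policy == nil || pol.Policy.DefaultVersionId == nil {
+		return "", nil
+	}
+	ver, err := svc.GetPolicyVersion((&GetPolicyVersionInput{}).
+		SetPolicyArn(policyArn).
+		SetVersionId(*pol.Policy.DefaultVersionId))
+	if err != nil {
+		return "", err
+	}
+	if ver.PolicyVersion == nil || ver.PolicyVersion.Document == nil {
+		return "", nil
+	}
+	return *ver.PolicyVersion.Document, nil
+}
+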
+func (s *GetPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPolicyInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *GetPolicyInput) SetPolicyArn(v string) *GetPolicyInput { + s.PolicyArn = &v + return s +} + +// Contains the response to a successful GetPolicy request. +type GetPolicyOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the policy. + Policy *Policy `type:"structure"` +} + +// String returns the string representation +func (s GetPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPolicyOutput) GoString() string { + return s.String() +} + +// SetPolicy sets the Policy field's value. +func (s *GetPolicyOutput) SetPolicy(v *Policy) *GetPolicyOutput { + s.Policy = v + return s +} + +type GetPolicyVersionInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the managed policy that you want information + // about. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // Identifies the policy version to retrieve. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consists of the lowercase letter 'v' followed + // by one or two digits, and optionally followed by a period '.' and a string + // of letters and digits. + // + // VersionId is a required field + VersionId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s GetPolicyVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPolicyVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetPolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPolicyVersionInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.VersionId == nil { + invalidParams.Add(request.NewErrParamRequired("VersionId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *GetPolicyVersionInput) SetPolicyArn(v string) *GetPolicyVersionInput { + s.PolicyArn = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetPolicyVersionInput) SetVersionId(v string) *GetPolicyVersionInput { + s.VersionId = &v + return s +} + +// Contains the response to a successful GetPolicyVersion request. +type GetPolicyVersionOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the policy version. 
+ PolicyVersion *PolicyVersion `type:"structure"` +} + +// String returns the string representation +func (s GetPolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPolicyVersionOutput) GoString() string { + return s.String() +} + +// SetPolicyVersion sets the PolicyVersion field's value. +func (s *GetPolicyVersionOutput) SetPolicyVersion(v *PolicyVersion) *GetPolicyVersionOutput { + s.PolicyVersion = v + return s +} + +type GetRoleInput struct { + _ struct{} `type:"structure"` + + // The name of the IAM role to get information about. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRoleInput"} + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRoleName sets the RoleName field's value. +func (s *GetRoleInput) SetRoleName(v string) *GetRoleInput { + s.RoleName = &v + return s +} + +// Contains the response to a successful GetRole request. +type GetRoleOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the IAM role. + // + // Role is a required field + Role *Role `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRoleOutput) GoString() string { + return s.String() +} + +// SetRole sets the Role field's value. +func (s *GetRoleOutput) SetRole(v *Role) *GetRoleOutput { + s.Role = v + return s +} + +type GetRolePolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the policy document to get. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The name of the role associated with the policy. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetRolePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRolePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetRolePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRolePolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *GetRolePolicyInput) SetPolicyName(v string) *GetRolePolicyInput { + s.PolicyName = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *GetRolePolicyInput) SetRoleName(v string) *GetRolePolicyInput { + s.RoleName = &v + return s +} + +// Contains the response to a successful GetRolePolicy request. +type GetRolePolicyOutput struct { + _ struct{} `type:"structure"` + + // The policy document. + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the policy. + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The role the policy is associated with. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetRolePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRolePolicyOutput) GoString() string { + return s.String() +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *GetRolePolicyOutput) SetPolicyDocument(v string) *GetRolePolicyOutput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *GetRolePolicyOutput) SetPolicyName(v string) *GetRolePolicyOutput { + s.PolicyName = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *GetRolePolicyOutput) SetRoleName(v string) *GetRolePolicyOutput { + s.RoleName = &v + return s +} + +type GetSAMLProviderInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the SAML provider resource object in IAM + // to get information about. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. 
+ // + // SAMLProviderArn is a required field + SAMLProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetSAMLProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSAMLProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSAMLProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSAMLProviderInput"} + if s.SAMLProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("SAMLProviderArn")) + } + if s.SAMLProviderArn != nil && len(*s.SAMLProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("SAMLProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSAMLProviderArn sets the SAMLProviderArn field's value. +func (s *GetSAMLProviderInput) SetSAMLProviderArn(v string) *GetSAMLProviderInput { + s.SAMLProviderArn = &v + return s +} + +// Contains the response to a successful GetSAMLProvider request. +type GetSAMLProviderOutput struct { + _ struct{} `type:"structure"` + + // The date and time when the SAML provider was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The XML metadata document that includes information about an identity provider. + SAMLMetadataDocument *string `min:"1000" type:"string"` + + // The expiration date and time for the SAML provider. + ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s GetSAMLProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSAMLProviderOutput) GoString() string { + return s.String() +} + +// SetCreateDate sets the CreateDate field's value. +func (s *GetSAMLProviderOutput) SetCreateDate(v time.Time) *GetSAMLProviderOutput { + s.CreateDate = &v + return s +} + +// SetSAMLMetadataDocument sets the SAMLMetadataDocument field's value. +func (s *GetSAMLProviderOutput) SetSAMLMetadataDocument(v string) *GetSAMLProviderOutput { + s.SAMLMetadataDocument = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *GetSAMLProviderOutput) SetValidUntil(v time.Time) *GetSAMLProviderOutput { + s.ValidUntil = &v + return s +} + +type GetSSHPublicKeyInput struct { + _ struct{} `type:"structure"` + + // Specifies the public key encoding format to use in the response. To retrieve + // the public key in ssh-rsa format, use SSH. To retrieve the public key in + // PEM format, use PEM. + // + // Encoding is a required field + Encoding *string `type:"string" required:"true" enum:"encodingType"` + + // The unique identifier for the SSH public key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // SSHPublicKeyId is a required field + SSHPublicKeyId *string `min:"20" type:"string" required:"true"` + + // The name of the IAM user associated with the SSH public key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetSSHPublicKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSSHPublicKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSSHPublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSSHPublicKeyInput"} + if s.Encoding == nil { + invalidParams.Add(request.NewErrParamRequired("Encoding")) + } + if s.SSHPublicKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("SSHPublicKeyId")) + } + if s.SSHPublicKeyId != nil && len(*s.SSHPublicKeyId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("SSHPublicKeyId", 20)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncoding sets the Encoding field's value. +func (s *GetSSHPublicKeyInput) SetEncoding(v string) *GetSSHPublicKeyInput { + s.Encoding = &v + return s +} + +// SetSSHPublicKeyId sets the SSHPublicKeyId field's value. +func (s *GetSSHPublicKeyInput) SetSSHPublicKeyId(v string) *GetSSHPublicKeyInput { + s.SSHPublicKeyId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *GetSSHPublicKeyInput) SetUserName(v string) *GetSSHPublicKeyInput { + s.UserName = &v + return s +} + +// Contains the response to a successful GetSSHPublicKey request. +type GetSSHPublicKeyOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the SSH public key. + SSHPublicKey *SSHPublicKey `type:"structure"` +} + +// String returns the string representation +func (s GetSSHPublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSSHPublicKeyOutput) GoString() string { + return s.String() +} + +// SetSSHPublicKey sets the SSHPublicKey field's value. +func (s *GetSSHPublicKeyOutput) SetSSHPublicKey(v *SSHPublicKey) *GetSSHPublicKeyOutput { + s.SSHPublicKey = v + return s +} + +type GetServerCertificateInput struct { + _ struct{} `type:"structure"` + + // The name of the server certificate you want to retrieve information about. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // ServerCertificateName is a required field + ServerCertificateName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetServerCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetServerCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
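+//
+// Illustrative sketch: an empty certificate name fails the minimum-length check
+// locally, before any request is sent (the value is deliberately invalid):
+//
+//    in := &GetServerCertificateInput{ServerCertificateName: aws.String("")}
+//    err := in.Validate() // a request.ErrInvalidParams describing the violation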
+func (s *GetServerCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetServerCertificateInput"} + if s.ServerCertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("ServerCertificateName")) + } + if s.ServerCertificateName != nil && len(*s.ServerCertificateName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServerCertificateName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServerCertificateName sets the ServerCertificateName field's value. +func (s *GetServerCertificateInput) SetServerCertificateName(v string) *GetServerCertificateInput { + s.ServerCertificateName = &v + return s +} + +// Contains the response to a successful GetServerCertificate request. +type GetServerCertificateOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the server certificate. + // + // ServerCertificate is a required field + ServerCertificate *ServerCertificate `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetServerCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetServerCertificateOutput) GoString() string { + return s.String() +} + +// SetServerCertificate sets the ServerCertificate field's value. +func (s *GetServerCertificateOutput) SetServerCertificate(v *ServerCertificate) *GetServerCertificateOutput { + s.ServerCertificate = v + return s +} + +type GetServiceLinkedRoleDeletionStatusInput struct { + _ struct{} `type:"structure"` + + // The deletion task identifier. This identifier is returned by the DeleteServiceLinkedRole + // operation in the format task/aws-service-role///. + // + // DeletionTaskId is a required field + DeletionTaskId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetServiceLinkedRoleDeletionStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetServiceLinkedRoleDeletionStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetServiceLinkedRoleDeletionStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetServiceLinkedRoleDeletionStatusInput"} + if s.DeletionTaskId == nil { + invalidParams.Add(request.NewErrParamRequired("DeletionTaskId")) + } + if s.DeletionTaskId != nil && len(*s.DeletionTaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeletionTaskId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeletionTaskId sets the DeletionTaskId field's value. +func (s *GetServiceLinkedRoleDeletionStatusInput) SetDeletionTaskId(v string) *GetServiceLinkedRoleDeletionStatusInput { + s.DeletionTaskId = &v + return s +} + +type GetServiceLinkedRoleDeletionStatusOutput struct { + _ struct{} `type:"structure"` + + // An object that contains details about the reason the deletion failed. + Reason *DeletionTaskFailureReasonType `type:"structure"` + + // The status of the deletion. 
+ // + // Status is a required field + Status *string `type:"string" required:"true" enum:"DeletionTaskStatusType"` +} + +// String returns the string representation +func (s GetServiceLinkedRoleDeletionStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetServiceLinkedRoleDeletionStatusOutput) GoString() string { + return s.String() +} + +// SetReason sets the Reason field's value. +func (s *GetServiceLinkedRoleDeletionStatusOutput) SetReason(v *DeletionTaskFailureReasonType) *GetServiceLinkedRoleDeletionStatusOutput { + s.Reason = v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetServiceLinkedRoleDeletionStatusOutput) SetStatus(v string) *GetServiceLinkedRoleDeletionStatusOutput { + s.Status = &v + return s +} + +type GetUserInput struct { + _ struct{} `type:"structure"` + + // The name of the user to get information about. + // + // This parameter is optional. If it is not included, it defaults to the user + // making the request. This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetUserInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetUserInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetUserInput"} + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetUserName sets the UserName field's value. +func (s *GetUserInput) SetUserName(v string) *GetUserInput { + s.UserName = &v + return s +} + +// Contains the response to a successful GetUser request. +type GetUserOutput struct { + _ struct{} `type:"structure"` + + // A structure containing details about the IAM user. + // + // User is a required field + User *User `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetUserOutput) GoString() string { + return s.String() +} + +// SetUser sets the User field's value. +func (s *GetUserOutput) SetUser(v *User) *GetUserOutput { + s.User = v + return s +} + +type GetUserPolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the policy document to get. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The name of the user who the policy is associated with. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetUserPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetUserPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetUserPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetUserPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *GetUserPolicyInput) SetPolicyName(v string) *GetUserPolicyInput { + s.PolicyName = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *GetUserPolicyInput) SetUserName(v string) *GetUserPolicyInput { + s.UserName = &v + return s +} + +// Contains the response to a successful GetUserPolicy request. +type GetUserPolicyOutput struct { + _ struct{} `type:"structure"` + + // The policy document. + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the policy. + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The user the policy is associated with. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetUserPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetUserPolicyOutput) GoString() string { + return s.String() +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *GetUserPolicyOutput) SetPolicyDocument(v string) *GetUserPolicyOutput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *GetUserPolicyOutput) SetPolicyName(v string) *GetUserPolicyOutput { + s.PolicyName = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *GetUserPolicyOutput) SetUserName(v string) *GetUserPolicyOutput { + s.UserName = &v + return s +} + +// Contains information about an IAM group entity. +// +// This data type is used as a response element in the following operations: +// +// * CreateGroup +// +// * GetGroup +// +// * ListGroups +type Group struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) specifying the group. For more information + // about ARNs and how to use them in policies, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the group was created. 
+ // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The stable and unique string identifying the group. For more information + // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // GroupId is a required field + GroupId *string `min:"16" type:"string" required:"true"` + + // The friendly name that identifies the group. + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The path to the group. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Path is a required field + Path *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Group) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Group) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *Group) SetArn(v string) *Group { + s.Arn = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *Group) SetCreateDate(v time.Time) *Group { + s.CreateDate = &v + return s +} + +// SetGroupId sets the GroupId field's value. +func (s *Group) SetGroupId(v string) *Group { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *Group) SetGroupName(v string) *Group { + s.GroupName = &v + return s +} + +// SetPath sets the Path field's value. +func (s *Group) SetPath(v string) *Group { + s.Path = &v + return s +} + +// Contains information about an IAM group, including all of the group's policies. +// +// This data type is used as a response element in the GetAccountAuthorizationDetails +// operation. +type GroupDetail struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + Arn *string `min:"20" type:"string"` + + // A list of the managed policies attached to the group. + AttachedManagedPolicies []*AttachedPolicy `type:"list"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the group was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The stable and unique string identifying the group. For more information + // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + GroupId *string `min:"16" type:"string"` + + // The friendly name that identifies the group. + GroupName *string `min:"1" type:"string"` + + // A list of the inline policies embedded in the group. + GroupPolicyList []*PolicyDetail `type:"list"` + + // The path to the group. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. 
+ Path *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GroupDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GroupDetail) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *GroupDetail) SetArn(v string) *GroupDetail { + s.Arn = &v + return s +} + +// SetAttachedManagedPolicies sets the AttachedManagedPolicies field's value. +func (s *GroupDetail) SetAttachedManagedPolicies(v []*AttachedPolicy) *GroupDetail { + s.AttachedManagedPolicies = v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *GroupDetail) SetCreateDate(v time.Time) *GroupDetail { + s.CreateDate = &v + return s +} + +// SetGroupId sets the GroupId field's value. +func (s *GroupDetail) SetGroupId(v string) *GroupDetail { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *GroupDetail) SetGroupName(v string) *GroupDetail { + s.GroupName = &v + return s +} + +// SetGroupPolicyList sets the GroupPolicyList field's value. +func (s *GroupDetail) SetGroupPolicyList(v []*PolicyDetail) *GroupDetail { + s.GroupPolicyList = v + return s +} + +// SetPath sets the Path field's value. +func (s *GroupDetail) SetPath(v string) *GroupDetail { + s.Path = &v + return s +} + +// Contains information about an instance profile. +// +// This data type is used as a response element in the following operations: +// +// * CreateInstanceProfile +// +// * GetInstanceProfile +// +// * ListInstanceProfiles +// +// * ListInstanceProfilesForRole +type InstanceProfile struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) specifying the instance profile. For more + // information about ARNs and how to use them in policies, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // The date when the instance profile was created. + // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The stable and unique string identifying the instance profile. For more information + // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // InstanceProfileId is a required field + InstanceProfileId *string `min:"16" type:"string" required:"true"` + + // The name identifying the instance profile. + // + // InstanceProfileName is a required field + InstanceProfileName *string `min:"1" type:"string" required:"true"` + + // The path to the instance profile. For more information about paths, see IAM + // Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Path is a required field + Path *string `min:"1" type:"string" required:"true"` + + // The role associated with the instance profile. + // + // Roles is a required field + Roles []*Role `type:"list" required:"true"` +} + +// String returns the string representation +func (s InstanceProfile) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceProfile) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. 
+func (s *InstanceProfile) SetArn(v string) *InstanceProfile { + s.Arn = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *InstanceProfile) SetCreateDate(v time.Time) *InstanceProfile { + s.CreateDate = &v + return s +} + +// SetInstanceProfileId sets the InstanceProfileId field's value. +func (s *InstanceProfile) SetInstanceProfileId(v string) *InstanceProfile { + s.InstanceProfileId = &v + return s +} + +// SetInstanceProfileName sets the InstanceProfileName field's value. +func (s *InstanceProfile) SetInstanceProfileName(v string) *InstanceProfile { + s.InstanceProfileName = &v + return s +} + +// SetPath sets the Path field's value. +func (s *InstanceProfile) SetPath(v string) *InstanceProfile { + s.Path = &v + return s +} + +// SetRoles sets the Roles field's value. +func (s *InstanceProfile) SetRoles(v []*Role) *InstanceProfile { + s.Roles = v + return s +} + +type ListAccessKeysInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the user. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListAccessKeysInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAccessKeysInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListAccessKeysInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAccessKeysInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListAccessKeysInput) SetMarker(v string) *ListAccessKeysInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListAccessKeysInput) SetMaxItems(v int64) *ListAccessKeysInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. 
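+//
+// Setters return the receiver, so calls can be chained when building a request
+// (a sketch; the user name and page size are placeholder values):
+//
+//    in := (&ListAccessKeysInput{}).SetUserName("example-user").SetMaxItems(25)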
+func (s *ListAccessKeysInput) SetUserName(v string) *ListAccessKeysInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListAccessKeys request. +type ListAccessKeysOutput struct { + _ struct{} `type:"structure"` + + // A list of objects containing metadata about the access keys. + // + // AccessKeyMetadata is a required field + AccessKeyMetadata []*AccessKeyMetadata `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListAccessKeysOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAccessKeysOutput) GoString() string { + return s.String() +} + +// SetAccessKeyMetadata sets the AccessKeyMetadata field's value. +func (s *ListAccessKeysOutput) SetAccessKeyMetadata(v []*AccessKeyMetadata) *ListAccessKeysOutput { + s.AccessKeyMetadata = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListAccessKeysOutput) SetIsTruncated(v bool) *ListAccessKeysOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListAccessKeysOutput) SetMarker(v string) *ListAccessKeysOutput { + s.Marker = &v + return s +} + +type ListAccountAliasesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s ListAccountAliasesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAccountAliasesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
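+//
+// Illustrative sketch: a MaxItems of 0 violates the documented minimum of 1 and
+// is rejected locally (the value is deliberately invalid):
+//
+//    in := &ListAccountAliasesInput{MaxItems: aws.Int64(0)}
+//    err := in.Validate() // reports ErrParamMinValue for MaxItems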
+func (s *ListAccountAliasesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAccountAliasesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListAccountAliasesInput) SetMarker(v string) *ListAccountAliasesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListAccountAliasesInput) SetMaxItems(v int64) *ListAccountAliasesInput { + s.MaxItems = &v + return s +} + +// Contains the response to a successful ListAccountAliases request. +type ListAccountAliasesOutput struct { + _ struct{} `type:"structure"` + + // A list of aliases associated with the account. AWS supports only one alias + // per account. + // + // AccountAliases is a required field + AccountAliases []*string `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListAccountAliasesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAccountAliasesOutput) GoString() string { + return s.String() +} + +// SetAccountAliases sets the AccountAliases field's value. +func (s *ListAccountAliasesOutput) SetAccountAliases(v []*string) *ListAccountAliasesOutput { + s.AccountAliases = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListAccountAliasesOutput) SetIsTruncated(v bool) *ListAccountAliasesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListAccountAliasesOutput) SetMarker(v string) *ListAccountAliasesOutput { + s.Marker = &v + return s +} + +type ListAttachedGroupPoliciesInput struct { + _ struct{} `type:"structure"` + + // The name (friendly name, not ARN) of the group to list attached policies + // for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. 
If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. This parameter is optional. If + // it is not included, it defaults to a slash (/), listing all policies. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `type:"string"` +} + +// String returns the string representation +func (s ListAttachedGroupPoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAttachedGroupPoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListAttachedGroupPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAttachedGroupPoliciesInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *ListAttachedGroupPoliciesInput) SetGroupName(v string) *ListAttachedGroupPoliciesInput { + s.GroupName = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListAttachedGroupPoliciesInput) SetMarker(v string) *ListAttachedGroupPoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListAttachedGroupPoliciesInput) SetMaxItems(v int64) *ListAttachedGroupPoliciesInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListAttachedGroupPoliciesInput) SetPathPrefix(v string) *ListAttachedGroupPoliciesInput { + s.PathPrefix = &v + return s +} + +// Contains the response to a successful ListAttachedGroupPolicies request. +type ListAttachedGroupPoliciesOutput struct { + _ struct{} `type:"structure"` + + // A list of the attached policies. + AttachedPolicies []*AttachedPolicy `type:"list"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. 
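+ //
+ // A manual pagination loop is a common way to use these fields. This is a
+ // sketch only; svc is assumed to be an *IAM client and the group name is a
+ // placeholder:
+ //
+ //    in := &ListAttachedGroupPoliciesInput{GroupName: aws.String("example-group")}
+ //    for {
+ //        out, err := svc.ListAttachedGroupPolicies(in)
+ //        if err != nil {
+ //            break
+ //        }
+ //        // ... use out.AttachedPolicies ...
+ //        if out.IsTruncated == nil || !*out.IsTruncated {
+ //            break
+ //        }
+ //        in.Marker = out.Marker
+ //    }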
+ IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListAttachedGroupPoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAttachedGroupPoliciesOutput) GoString() string { + return s.String() +} + +// SetAttachedPolicies sets the AttachedPolicies field's value. +func (s *ListAttachedGroupPoliciesOutput) SetAttachedPolicies(v []*AttachedPolicy) *ListAttachedGroupPoliciesOutput { + s.AttachedPolicies = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListAttachedGroupPoliciesOutput) SetIsTruncated(v bool) *ListAttachedGroupPoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListAttachedGroupPoliciesOutput) SetMarker(v string) *ListAttachedGroupPoliciesOutput { + s.Marker = &v + return s +} + +type ListAttachedRolePoliciesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. This parameter is optional. If + // it is not included, it defaults to a slash (/), listing all policies. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `type:"string"` + + // The name (friendly name, not ARN) of the role to list attached policies for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListAttachedRolePoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAttachedRolePoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
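+//
+// Illustrative sketch: RoleName is required, while PathPrefix optionally narrows
+// the results to policies under a given path (both values are placeholders):
+//
+//    in := &ListAttachedRolePoliciesInput{
+//        RoleName:   aws.String("example-role"),
+//        PathPrefix: aws.String("/division_abc/"),
+//    }
+//    err := in.Validate() // nil when the constraints above are satisfied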
+func (s *ListAttachedRolePoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAttachedRolePoliciesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListAttachedRolePoliciesInput) SetMarker(v string) *ListAttachedRolePoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListAttachedRolePoliciesInput) SetMaxItems(v int64) *ListAttachedRolePoliciesInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListAttachedRolePoliciesInput) SetPathPrefix(v string) *ListAttachedRolePoliciesInput { + s.PathPrefix = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *ListAttachedRolePoliciesInput) SetRoleName(v string) *ListAttachedRolePoliciesInput { + s.RoleName = &v + return s +} + +// Contains the response to a successful ListAttachedRolePolicies request. +type ListAttachedRolePoliciesOutput struct { + _ struct{} `type:"structure"` + + // A list of the attached policies. + AttachedPolicies []*AttachedPolicy `type:"list"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListAttachedRolePoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAttachedRolePoliciesOutput) GoString() string { + return s.String() +} + +// SetAttachedPolicies sets the AttachedPolicies field's value. +func (s *ListAttachedRolePoliciesOutput) SetAttachedPolicies(v []*AttachedPolicy) *ListAttachedRolePoliciesOutput { + s.AttachedPolicies = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListAttachedRolePoliciesOutput) SetIsTruncated(v bool) *ListAttachedRolePoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListAttachedRolePoliciesOutput) SetMarker(v string) *ListAttachedRolePoliciesOutput { + s.Marker = &v + return s +} + +type ListAttachedUserPoliciesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. 
+ Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. This parameter is optional. If + // it is not included, it defaults to a slash (/), listing all policies. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `type:"string"` + + // The name (friendly name, not ARN) of the user to list attached policies for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListAttachedUserPoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAttachedUserPoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListAttachedUserPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAttachedUserPoliciesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListAttachedUserPoliciesInput) SetMarker(v string) *ListAttachedUserPoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListAttachedUserPoliciesInput) SetMaxItems(v int64) *ListAttachedUserPoliciesInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListAttachedUserPoliciesInput) SetPathPrefix(v string) *ListAttachedUserPoliciesInput { + s.PathPrefix = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListAttachedUserPoliciesInput) SetUserName(v string) *ListAttachedUserPoliciesInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListAttachedUserPolicies request. 
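+//
+// A sketch of reading this response; svc is assumed to be an *IAM client, the
+// user name is a placeholder, and process is a hypothetical helper:
+//
+//    out, err := svc.ListAttachedUserPolicies(&ListAttachedUserPoliciesInput{
+//        UserName: aws.String("example-user"),
+//    })
+//    if err == nil {
+//        for _, p := range out.AttachedPolicies {
+//            process(aws.StringValue(p.PolicyName), aws.StringValue(p.PolicyArn))
+//        }
+//    }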
+type ListAttachedUserPoliciesOutput struct { + _ struct{} `type:"structure"` + + // A list of the attached policies. + AttachedPolicies []*AttachedPolicy `type:"list"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListAttachedUserPoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAttachedUserPoliciesOutput) GoString() string { + return s.String() +} + +// SetAttachedPolicies sets the AttachedPolicies field's value. +func (s *ListAttachedUserPoliciesOutput) SetAttachedPolicies(v []*AttachedPolicy) *ListAttachedUserPoliciesOutput { + s.AttachedPolicies = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListAttachedUserPoliciesOutput) SetIsTruncated(v bool) *ListAttachedUserPoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListAttachedUserPoliciesOutput) SetMarker(v string) *ListAttachedUserPoliciesOutput { + s.Marker = &v + return s +} + +type ListEntitiesForPolicyInput struct { + _ struct{} `type:"structure"` + + // The entity type to use for filtering the results. + // + // For example, when EntityFilter is Role, only the roles that are attached + // to the specified policy are returned. This parameter is optional. If it is + // not included, all attached entities (users, groups, and roles) are returned. + // The argument for this parameter must be one of the valid values listed below. + EntityFilter *string `type:"string" enum:"EntityType"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. This parameter is optional. If + // it is not included, it defaults to a slash (/), listing all entities. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. 
In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the IAM policy for which you want the versions. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListEntitiesForPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListEntitiesForPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListEntitiesForPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListEntitiesForPolicyInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PathPrefix != nil && len(*s.PathPrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PathPrefix", 1)) + } + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEntityFilter sets the EntityFilter field's value. +func (s *ListEntitiesForPolicyInput) SetEntityFilter(v string) *ListEntitiesForPolicyInput { + s.EntityFilter = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListEntitiesForPolicyInput) SetMarker(v string) *ListEntitiesForPolicyInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListEntitiesForPolicyInput) SetMaxItems(v int64) *ListEntitiesForPolicyInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListEntitiesForPolicyInput) SetPathPrefix(v string) *ListEntitiesForPolicyInput { + s.PathPrefix = &v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *ListEntitiesForPolicyInput) SetPolicyArn(v string) *ListEntitiesForPolicyInput { + s.PolicyArn = &v + return s +} + +// Contains the response to a successful ListEntitiesForPolicy request. +type ListEntitiesForPolicyOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of IAM groups that the policy is attached to. 
+ PolicyGroups []*PolicyGroup `type:"list"` + + // A list of IAM roles that the policy is attached to. + PolicyRoles []*PolicyRole `type:"list"` + + // A list of IAM users that the policy is attached to. + PolicyUsers []*PolicyUser `type:"list"` +} + +// String returns the string representation +func (s ListEntitiesForPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListEntitiesForPolicyOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListEntitiesForPolicyOutput) SetIsTruncated(v bool) *ListEntitiesForPolicyOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListEntitiesForPolicyOutput) SetMarker(v string) *ListEntitiesForPolicyOutput { + s.Marker = &v + return s +} + +// SetPolicyGroups sets the PolicyGroups field's value. +func (s *ListEntitiesForPolicyOutput) SetPolicyGroups(v []*PolicyGroup) *ListEntitiesForPolicyOutput { + s.PolicyGroups = v + return s +} + +// SetPolicyRoles sets the PolicyRoles field's value. +func (s *ListEntitiesForPolicyOutput) SetPolicyRoles(v []*PolicyRole) *ListEntitiesForPolicyOutput { + s.PolicyRoles = v + return s +} + +// SetPolicyUsers sets the PolicyUsers field's value. +func (s *ListEntitiesForPolicyOutput) SetPolicyUsers(v []*PolicyUser) *ListEntitiesForPolicyOutput { + s.PolicyUsers = v + return s +} + +type ListGroupPoliciesInput struct { + _ struct{} `type:"structure"` + + // The name of the group to list policies for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s ListGroupPoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupPoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
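+//
+// The SDK normally runs this check for you before sending the request, but it
+// can also be called directly; a small sketch with illustrative values only:
+//
+//    in := &iam.ListGroupPoliciesInput{GroupName: aws.String("")}
+//    if err := in.Validate(); err != nil {
+//        // err reports GroupName as shorter than the 1-character minimum
+//    }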
+func (s *ListGroupPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGroupPoliciesInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *ListGroupPoliciesInput) SetGroupName(v string) *ListGroupPoliciesInput { + s.GroupName = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListGroupPoliciesInput) SetMarker(v string) *ListGroupPoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListGroupPoliciesInput) SetMaxItems(v int64) *ListGroupPoliciesInput { + s.MaxItems = &v + return s +} + +// Contains the response to a successful ListGroupPolicies request. +type ListGroupPoliciesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of policy names. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyNames is a required field + PolicyNames []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListGroupPoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupPoliciesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListGroupPoliciesOutput) SetIsTruncated(v bool) *ListGroupPoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListGroupPoliciesOutput) SetMarker(v string) *ListGroupPoliciesOutput { + s.Marker = &v + return s +} + +// SetPolicyNames sets the PolicyNames field's value. +func (s *ListGroupPoliciesOutput) SetPolicyNames(v []*string) *ListGroupPoliciesOutput { + s.PolicyNames = v + return s +} + +type ListGroupsForUserInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. 
+ Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the user to list groups for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListGroupsForUserInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupsForUserInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListGroupsForUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGroupsForUserInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListGroupsForUserInput) SetMarker(v string) *ListGroupsForUserInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListGroupsForUserInput) SetMaxItems(v int64) *ListGroupsForUserInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListGroupsForUserInput) SetUserName(v string) *ListGroupsForUserInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListGroupsForUser request. +type ListGroupsForUserOutput struct { + _ struct{} `type:"structure"` + + // A list of groups. + // + // Groups is a required field + Groups []*Group `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. 
+ Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListGroupsForUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupsForUserOutput) GoString() string { + return s.String() +} + +// SetGroups sets the Groups field's value. +func (s *ListGroupsForUserOutput) SetGroups(v []*Group) *ListGroupsForUserOutput { + s.Groups = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListGroupsForUserOutput) SetIsTruncated(v bool) *ListGroupsForUserOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListGroupsForUserOutput) SetMarker(v string) *ListGroupsForUserOutput { + s.Marker = &v + return s +} + +type ListGroupsInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. For example, the prefix /division_abc/subdivision_xyz/ + // gets all groups whose path starts with /division_abc/subdivision_xyz/. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/), listing all groups. This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGroupsInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PathPrefix != nil && len(*s.PathPrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PathPrefix", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. 
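+//
+// Like the other generated Set* helpers, the setters return the receiver, so
+// request fields can be chained when building an input (values below are
+// illustrative only):
+//
+//    in := (&iam.ListGroupsInput{}).
+//        SetPathPrefix("/division_abc/").
+//        SetMaxItems(50)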
+func (s *ListGroupsInput) SetMarker(v string) *ListGroupsInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListGroupsInput) SetMaxItems(v int64) *ListGroupsInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListGroupsInput) SetPathPrefix(v string) *ListGroupsInput { + s.PathPrefix = &v + return s +} + +// Contains the response to a successful ListGroups request. +type ListGroupsOutput struct { + _ struct{} `type:"structure"` + + // A list of groups. + // + // Groups is a required field + Groups []*Group `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupsOutput) GoString() string { + return s.String() +} + +// SetGroups sets the Groups field's value. +func (s *ListGroupsOutput) SetGroups(v []*Group) *ListGroupsOutput { + s.Groups = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListGroupsOutput) SetIsTruncated(v bool) *ListGroupsOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListGroupsOutput) SetMarker(v string) *ListGroupsOutput { + s.Marker = &v + return s +} + +type ListInstanceProfilesForRoleInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the role to list instance profiles for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListInstanceProfilesForRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceProfilesForRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListInstanceProfilesForRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInstanceProfilesForRoleInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListInstanceProfilesForRoleInput) SetMarker(v string) *ListInstanceProfilesForRoleInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListInstanceProfilesForRoleInput) SetMaxItems(v int64) *ListInstanceProfilesForRoleInput { + s.MaxItems = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *ListInstanceProfilesForRoleInput) SetRoleName(v string) *ListInstanceProfilesForRoleInput { + s.RoleName = &v + return s +} + +// Contains the response to a successful ListInstanceProfilesForRole request. +type ListInstanceProfilesForRoleOutput struct { + _ struct{} `type:"structure"` + + // A list of instance profiles. + // + // InstanceProfiles is a required field + InstanceProfiles []*InstanceProfile `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListInstanceProfilesForRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceProfilesForRoleOutput) GoString() string { + return s.String() +} + +// SetInstanceProfiles sets the InstanceProfiles field's value. +func (s *ListInstanceProfilesForRoleOutput) SetInstanceProfiles(v []*InstanceProfile) *ListInstanceProfilesForRoleOutput { + s.InstanceProfiles = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListInstanceProfilesForRoleOutput) SetIsTruncated(v bool) *ListInstanceProfilesForRoleOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. 
+func (s *ListInstanceProfilesForRoleOutput) SetMarker(v string) *ListInstanceProfilesForRoleOutput { + s.Marker = &v + return s +} + +type ListInstanceProfilesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. For example, the prefix /application_abc/component_xyz/ + // gets all instance profiles whose path starts with /application_abc/component_xyz/. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/), listing all instance profiles. This parameter allows (per its regex + // pattern (http://wikipedia.org/wiki/regex)) a string of characters consisting + // of either a forward slash (/) by itself or a string that must begin and end + // with forward slashes. In addition, it can contain any ASCII character from + // the ! (\u0021) through the DEL character (\u007F), including most punctuation + // characters, digits, and upper and lowercased letters. + PathPrefix *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListInstanceProfilesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceProfilesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListInstanceProfilesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInstanceProfilesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PathPrefix != nil && len(*s.PathPrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PathPrefix", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListInstanceProfilesInput) SetMarker(v string) *ListInstanceProfilesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListInstanceProfilesInput) SetMaxItems(v int64) *ListInstanceProfilesInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListInstanceProfilesInput) SetPathPrefix(v string) *ListInstanceProfilesInput { + s.PathPrefix = &v + return s +} + +// Contains the response to a successful ListInstanceProfiles request. +type ListInstanceProfilesOutput struct { + _ struct{} `type:"structure"` + + // A list of instance profiles. 
+ // + // InstanceProfiles is a required field + InstanceProfiles []*InstanceProfile `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListInstanceProfilesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceProfilesOutput) GoString() string { + return s.String() +} + +// SetInstanceProfiles sets the InstanceProfiles field's value. +func (s *ListInstanceProfilesOutput) SetInstanceProfiles(v []*InstanceProfile) *ListInstanceProfilesOutput { + s.InstanceProfiles = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListInstanceProfilesOutput) SetIsTruncated(v bool) *ListInstanceProfilesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListInstanceProfilesOutput) SetMarker(v string) *ListInstanceProfilesOutput { + s.Marker = &v + return s +} + +type ListMFADevicesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the user whose MFA devices you want to list. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListMFADevicesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMFADevicesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListMFADevicesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListMFADevicesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListMFADevicesInput) SetMarker(v string) *ListMFADevicesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListMFADevicesInput) SetMaxItems(v int64) *ListMFADevicesInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListMFADevicesInput) SetUserName(v string) *ListMFADevicesInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListMFADevices request. +type ListMFADevicesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // A list of MFA devices. + // + // MFADevices is a required field + MFADevices []*MFADevice `type:"list" required:"true"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListMFADevicesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMFADevicesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListMFADevicesOutput) SetIsTruncated(v bool) *ListMFADevicesOutput { + s.IsTruncated = &v + return s +} + +// SetMFADevices sets the MFADevices field's value. +func (s *ListMFADevicesOutput) SetMFADevices(v []*MFADevice) *ListMFADevicesOutput { + s.MFADevices = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListMFADevicesOutput) SetMarker(v string) *ListMFADevicesOutput { + s.Marker = &v + return s +} + +type ListOpenIDConnectProvidersInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ListOpenIDConnectProvidersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListOpenIDConnectProvidersInput) GoString() string { + return s.String() +} + +// Contains the response to a successful ListOpenIDConnectProviders request. +type ListOpenIDConnectProvidersOutput struct { + _ struct{} `type:"structure"` + + // The list of IAM OIDC provider resource objects defined in the AWS account. 
+ OpenIDConnectProviderList []*OpenIDConnectProviderListEntry `type:"list"` +} + +// String returns the string representation +func (s ListOpenIDConnectProvidersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListOpenIDConnectProvidersOutput) GoString() string { + return s.String() +} + +// SetOpenIDConnectProviderList sets the OpenIDConnectProviderList field's value. +func (s *ListOpenIDConnectProvidersOutput) SetOpenIDConnectProviderList(v []*OpenIDConnectProviderListEntry) *ListOpenIDConnectProvidersOutput { + s.OpenIDConnectProviderList = v + return s +} + +type ListPoliciesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // A flag to filter the results to only the attached policies. + // + // When OnlyAttached is true, the returned list contains only the policies that + // are attached to an IAM user, group, or role. When OnlyAttached is false, + // or when the parameter is not included, all policies are returned. + OnlyAttached *bool `type:"boolean"` + + // The path prefix for filtering the results. This parameter is optional. If + // it is not included, it defaults to a slash (/), listing all policies. This + // parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `type:"string"` + + // The scope to use for filtering the results. + // + // To list only AWS managed policies, set Scope to AWS. To list only the customer + // managed policies in your AWS account, set Scope to Local. + // + // This parameter is optional. If it is not included, or if it is set to All, + // all policies are returned. + Scope *string `type:"string" enum:"policyScopeType"` +} + +// String returns the string representation +func (s ListPoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
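+//
+// ListPoliciesInput has no required fields, so Validate only enforces the
+// Marker and MaxItems minimums. For illustration, a request restricted to
+// customer managed policies that are currently attached (constant name as
+// generated by aws-sdk-go; the literal enum value is "Local"):
+//
+//    in := &iam.ListPoliciesInput{
+//        Scope:        aws.String(iam.PolicyScopeTypeLocal),
+//        OnlyAttached: aws.Bool(true),
+//    }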
+func (s *ListPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPoliciesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListPoliciesInput) SetMarker(v string) *ListPoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListPoliciesInput) SetMaxItems(v int64) *ListPoliciesInput { + s.MaxItems = &v + return s +} + +// SetOnlyAttached sets the OnlyAttached field's value. +func (s *ListPoliciesInput) SetOnlyAttached(v bool) *ListPoliciesInput { + s.OnlyAttached = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListPoliciesInput) SetPathPrefix(v string) *ListPoliciesInput { + s.PathPrefix = &v + return s +} + +// SetScope sets the Scope field's value. +func (s *ListPoliciesInput) SetScope(v string) *ListPoliciesInput { + s.Scope = &v + return s +} + +// Contains the response to a successful ListPolicies request. +type ListPoliciesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of policies. + Policies []*Policy `type:"list"` +} + +// String returns the string representation +func (s ListPoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPoliciesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListPoliciesOutput) SetIsTruncated(v bool) *ListPoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListPoliciesOutput) SetMarker(v string) *ListPoliciesOutput { + s.Marker = &v + return s +} + +// SetPolicies sets the Policies field's value. +func (s *ListPoliciesOutput) SetPolicies(v []*Policy) *ListPoliciesOutput { + s.Policies = v + return s +} + +type ListPolicyVersionsInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. 
Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The Amazon Resource Name (ARN) of the IAM policy for which you want the versions. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListPolicyVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPolicyVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListPolicyVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPolicyVersionsInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListPolicyVersionsInput) SetMarker(v string) *ListPolicyVersionsInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListPolicyVersionsInput) SetMaxItems(v int64) *ListPolicyVersionsInput { + s.MaxItems = &v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *ListPolicyVersionsInput) SetPolicyArn(v string) *ListPolicyVersionsInput { + s.PolicyArn = &v + return s +} + +// Contains the response to a successful ListPolicyVersions request. +type ListPolicyVersionsOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of policy versions. + // + // For more information about managed policy versions, see Versioning for Managed + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the IAM User Guide. 
+ Versions []*PolicyVersion `type:"list"` +} + +// String returns the string representation +func (s ListPolicyVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPolicyVersionsOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListPolicyVersionsOutput) SetIsTruncated(v bool) *ListPolicyVersionsOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListPolicyVersionsOutput) SetMarker(v string) *ListPolicyVersionsOutput { + s.Marker = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *ListPolicyVersionsOutput) SetVersions(v []*PolicyVersion) *ListPolicyVersionsOutput { + s.Versions = v + return s +} + +type ListRolePoliciesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the role to list policies for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListRolePoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListRolePoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListRolePoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListRolePoliciesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListRolePoliciesInput) SetMarker(v string) *ListRolePoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. 
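+//
+// MaxItems caps the size of a single page, not the total result set. If this
+// vendored copy also includes the generated paginator helpers, all pages can
+// be drained with a callback instead of a manual Marker loop (sketch only):
+//
+//    err := svc.ListRolePoliciesPages(
+//        &iam.ListRolePoliciesInput{RoleName: aws.String("example-role")},
+//        func(page *iam.ListRolePoliciesOutput, lastPage bool) bool {
+//            for _, name := range page.PolicyNames {
+//                fmt.Println(aws.StringValue(name))
+//            }
+//            return true // continue to the next page
+//        })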
+func (s *ListRolePoliciesInput) SetMaxItems(v int64) *ListRolePoliciesInput { + s.MaxItems = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *ListRolePoliciesInput) SetRoleName(v string) *ListRolePoliciesInput { + s.RoleName = &v + return s +} + +// Contains the response to a successful ListRolePolicies request. +type ListRolePoliciesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of policy names. + // + // PolicyNames is a required field + PolicyNames []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListRolePoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListRolePoliciesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListRolePoliciesOutput) SetIsTruncated(v bool) *ListRolePoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListRolePoliciesOutput) SetMarker(v string) *ListRolePoliciesOutput { + s.Marker = &v + return s +} + +// SetPolicyNames sets the PolicyNames field's value. +func (s *ListRolePoliciesOutput) SetPolicyNames(v []*string) *ListRolePoliciesOutput { + s.PolicyNames = v + return s +} + +type ListRolesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. For example, the prefix /application_abc/component_xyz/ + // gets all roles whose path starts with /application_abc/component_xyz/. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/), listing all roles. This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. 
In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + PathPrefix *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListRolesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListRolesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListRolesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListRolesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PathPrefix != nil && len(*s.PathPrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PathPrefix", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListRolesInput) SetMarker(v string) *ListRolesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListRolesInput) SetMaxItems(v int64) *ListRolesInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListRolesInput) SetPathPrefix(v string) *ListRolesInput { + s.PathPrefix = &v + return s +} + +// Contains the response to a successful ListRoles request. +type ListRolesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of roles. + // + // Roles is a required field + Roles []*Role `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListRolesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListRolesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListRolesOutput) SetIsTruncated(v bool) *ListRolesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListRolesOutput) SetMarker(v string) *ListRolesOutput { + s.Marker = &v + return s +} + +// SetRoles sets the Roles field's value. +func (s *ListRolesOutput) SetRoles(v []*Role) *ListRolesOutput { + s.Roles = v + return s +} + +type ListSAMLProvidersInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ListSAMLProvidersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSAMLProvidersInput) GoString() string { + return s.String() +} + +// Contains the response to a successful ListSAMLProviders request. 
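+//
+// Unlike the paginated List* calls above, ListSAMLProviders takes an empty
+// input and returns the full list in one response; a minimal sketch (assumes
+// an *iam.IAM client named svc):
+//
+//    out, err := svc.ListSAMLProviders(&iam.ListSAMLProvidersInput{})
+//    if err != nil {
+//        // handle err
+//    }
+//    for _, p := range out.SAMLProviderList {
+//        fmt.Println(aws.StringValue(p.Arn))
+//    }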
+type ListSAMLProvidersOutput struct { + _ struct{} `type:"structure"` + + // The list of SAML provider resource objects defined in IAM for this AWS account. + SAMLProviderList []*SAMLProviderListEntry `type:"list"` +} + +// String returns the string representation +func (s ListSAMLProvidersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSAMLProvidersOutput) GoString() string { + return s.String() +} + +// SetSAMLProviderList sets the SAMLProviderList field's value. +func (s *ListSAMLProvidersOutput) SetSAMLProviderList(v []*SAMLProviderListEntry) *ListSAMLProvidersOutput { + s.SAMLProviderList = v + return s +} + +type ListSSHPublicKeysInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the IAM user to list SSH public keys for. If none is specified, + // the UserName field is determined implicitly based on the AWS access key used + // to sign the request. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListSSHPublicKeysInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSSHPublicKeysInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListSSHPublicKeysInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListSSHPublicKeysInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListSSHPublicKeysInput) SetMarker(v string) *ListSSHPublicKeysInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListSSHPublicKeysInput) SetMaxItems(v int64) *ListSSHPublicKeysInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. 
+func (s *ListSSHPublicKeysInput) SetUserName(v string) *ListSSHPublicKeysInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListSSHPublicKeys request. +type ListSSHPublicKeysOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of the SSH public keys assigned to IAM user. + SSHPublicKeys []*SSHPublicKeyMetadata `type:"list"` +} + +// String returns the string representation +func (s ListSSHPublicKeysOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSSHPublicKeysOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListSSHPublicKeysOutput) SetIsTruncated(v bool) *ListSSHPublicKeysOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListSSHPublicKeysOutput) SetMarker(v string) *ListSSHPublicKeysOutput { + s.Marker = &v + return s +} + +// SetSSHPublicKeys sets the SSHPublicKeys field's value. +func (s *ListSSHPublicKeysOutput) SetSSHPublicKeys(v []*SSHPublicKeyMetadata) *ListSSHPublicKeysOutput { + s.SSHPublicKeys = v + return s +} + +type ListServerCertificatesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. For example: /company/servercerts + // would get all server certificates for which the path starts with /company/servercerts. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/), listing all server certificates. This parameter allows (per its regex + // pattern (http://wikipedia.org/wiki/regex)) a string of characters consisting + // of either a forward slash (/) by itself or a string that must begin and end + // with forward slashes. In addition, it can contain any ASCII character from + // the ! 
(\u0021) through the DEL character (\u007F), including most punctuation + // characters, digits, and upper and lowercased letters. + PathPrefix *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListServerCertificatesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServerCertificatesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListServerCertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListServerCertificatesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PathPrefix != nil && len(*s.PathPrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PathPrefix", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListServerCertificatesInput) SetMarker(v string) *ListServerCertificatesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListServerCertificatesInput) SetMaxItems(v int64) *ListServerCertificatesInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListServerCertificatesInput) SetPathPrefix(v string) *ListServerCertificatesInput { + s.PathPrefix = &v + return s +} + +// Contains the response to a successful ListServerCertificates request. +type ListServerCertificatesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of server certificates. + // + // ServerCertificateMetadataList is a required field + ServerCertificateMetadataList []*ServerCertificateMetadata `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListServerCertificatesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServerCertificatesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListServerCertificatesOutput) SetIsTruncated(v bool) *ListServerCertificatesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListServerCertificatesOutput) SetMarker(v string) *ListServerCertificatesOutput { + s.Marker = &v + return s +} + +// SetServerCertificateMetadataList sets the ServerCertificateMetadataList field's value. 
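+// Illustrative note (not part of the generated SDK): the SetXxx methods above are the generated
+// fluent setters; they can be chained to build a request, and Validate runs the same client-side
+// checks the SDK applies before sending. A small sketch (PathPrefix value is a placeholder):
+//
+//	input := (&iam.ListServerCertificatesInput{}).
+//		SetPathPrefix("/company/servercerts").
+//		SetMaxItems(50)
+//	if err := input.Validate(); err != nil {
+//		// MinLen/MinValue violations surface here before any API call is made
+//		log.Fatal(err)
+//	}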
+func (s *ListServerCertificatesOutput) SetServerCertificateMetadataList(v []*ServerCertificateMetadata) *ListServerCertificatesOutput { + s.ServerCertificateMetadataList = v + return s +} + +type ListServiceSpecificCredentialsInput struct { + _ struct{} `type:"structure"` + + // Filters the returned results to only those for the specified AWS service. + // If not specified, then AWS returns service-specific credentials for all services. + ServiceName *string `type:"string"` + + // The name of the user whose service-specific credentials you want information + // about. If this value is not specified, then the operation assumes the user + // whose credentials are used to call the operation. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListServiceSpecificCredentialsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServiceSpecificCredentialsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListServiceSpecificCredentialsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListServiceSpecificCredentialsInput"} + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServiceName sets the ServiceName field's value. +func (s *ListServiceSpecificCredentialsInput) SetServiceName(v string) *ListServiceSpecificCredentialsInput { + s.ServiceName = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListServiceSpecificCredentialsInput) SetUserName(v string) *ListServiceSpecificCredentialsInput { + s.UserName = &v + return s +} + +type ListServiceSpecificCredentialsOutput struct { + _ struct{} `type:"structure"` + + // A list of structures that each contain details about a service-specific credential. + ServiceSpecificCredentials []*ServiceSpecificCredentialMetadata `type:"list"` +} + +// String returns the string representation +func (s ListServiceSpecificCredentialsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServiceSpecificCredentialsOutput) GoString() string { + return s.String() +} + +// SetServiceSpecificCredentials sets the ServiceSpecificCredentials field's value. +func (s *ListServiceSpecificCredentialsOutput) SetServiceSpecificCredentials(v []*ServiceSpecificCredentialMetadata) *ListServiceSpecificCredentialsOutput { + s.ServiceSpecificCredentials = v + return s +} + +type ListSigningCertificatesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. 
If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the IAM user whose signing certificates you want to examine. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListSigningCertificatesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSigningCertificatesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListSigningCertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListSigningCertificatesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListSigningCertificatesInput) SetMarker(v string) *ListSigningCertificatesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListSigningCertificatesInput) SetMaxItems(v int64) *ListSigningCertificatesInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListSigningCertificatesInput) SetUserName(v string) *ListSigningCertificatesInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListSigningCertificates request. +type ListSigningCertificatesOutput struct { + _ struct{} `type:"structure"` + + // A list of the user's signing certificate information. + // + // Certificates is a required field + Certificates []*SigningCertificate `type:"list" required:"true"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. 
+ Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListSigningCertificatesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSigningCertificatesOutput) GoString() string { + return s.String() +} + +// SetCertificates sets the Certificates field's value. +func (s *ListSigningCertificatesOutput) SetCertificates(v []*SigningCertificate) *ListSigningCertificatesOutput { + s.Certificates = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListSigningCertificatesOutput) SetIsTruncated(v bool) *ListSigningCertificatesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListSigningCertificatesOutput) SetMarker(v string) *ListSigningCertificatesOutput { + s.Marker = &v + return s +} + +type ListUserPoliciesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the user to list policies for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListUserPoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListUserPoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListUserPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListUserPoliciesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListUserPoliciesInput) SetMarker(v string) *ListUserPoliciesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. 
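+// Illustrative note (not part of the generated SDK): instead of handling Marker/IsTruncated by
+// hand, the generated ...Pages helpers (assumed available in this SDK version) walk every page
+// and invoke a callback; returning false stops early. Sketch for ListUserPolicies, with sess a
+// configured *session.Session:
+//
+//	svc := iam.New(sess)
+//	err := svc.ListUserPoliciesPages(
+//		&iam.ListUserPoliciesInput{UserName: aws.String("example-user")},
+//		func(page *iam.ListUserPoliciesOutput, lastPage bool) bool {
+//			for _, name := range page.PolicyNames {
+//				fmt.Println(aws.StringValue(name))
+//			}
+//			return true // keep paginating
+//		})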
+func (s *ListUserPoliciesInput) SetMaxItems(v int64) *ListUserPoliciesInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListUserPoliciesInput) SetUserName(v string) *ListUserPoliciesInput { + s.UserName = &v + return s +} + +// Contains the response to a successful ListUserPolicies request. +type ListUserPoliciesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of policy names. + // + // PolicyNames is a required field + PolicyNames []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListUserPoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListUserPoliciesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListUserPoliciesOutput) SetIsTruncated(v bool) *ListUserPoliciesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListUserPoliciesOutput) SetMarker(v string) *ListUserPoliciesOutput { + s.Marker = &v + return s +} + +// SetPolicyNames sets the PolicyNames field's value. +func (s *ListUserPoliciesOutput) SetPolicyNames(v []*string) *ListUserPoliciesOutput { + s.PolicyNames = v + return s +} + +type ListUsersInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The path prefix for filtering the results. For example: /division_abc/subdivision_xyz/, + // which would get all user names whose path starts with /division_abc/subdivision_xyz/. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/), listing all user names. This parameter allows (per its regex pattern + // (http://wikipedia.org/wiki/regex)) a string of characters consisting of either + // a forward slash (/) by itself or a string that must begin and end with forward + // slashes. 
In addition, it can contain any ASCII character from the ! (\u0021) + // through the DEL character (\u007F), including most punctuation characters, + // digits, and upper and lowercased letters. + PathPrefix *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListUsersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListUsersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListUsersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListUsersInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PathPrefix != nil && len(*s.PathPrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PathPrefix", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListUsersInput) SetMarker(v string) *ListUsersInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListUsersInput) SetMaxItems(v int64) *ListUsersInput { + s.MaxItems = &v + return s +} + +// SetPathPrefix sets the PathPrefix field's value. +func (s *ListUsersInput) SetPathPrefix(v string) *ListUsersInput { + s.PathPrefix = &v + return s +} + +// Contains the response to a successful ListUsers request. +type ListUsersOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of users. + // + // Users is a required field + Users []*User `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListUsersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListUsersOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListUsersOutput) SetIsTruncated(v bool) *ListUsersOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListUsersOutput) SetMarker(v string) *ListUsersOutput { + s.Marker = &v + return s +} + +// SetUsers sets the Users field's value. +func (s *ListUsersOutput) SetUsers(v []*User) *ListUsersOutput { + s.Users = v + return s +} + +type ListVirtualMFADevicesInput struct { + _ struct{} `type:"structure"` + + // The status (Unassigned or Assigned) of the devices to list. If you do not + // specify an AssignmentStatus, the operation defaults to Any which lists both + // assigned and unassigned virtual MFA devices. 
+ AssignmentStatus *string `type:"string" enum:"assignmentStatusType"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s ListVirtualMFADevicesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVirtualMFADevicesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListVirtualMFADevicesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListVirtualMFADevicesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssignmentStatus sets the AssignmentStatus field's value. +func (s *ListVirtualMFADevicesInput) SetAssignmentStatus(v string) *ListVirtualMFADevicesInput { + s.AssignmentStatus = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListVirtualMFADevicesInput) SetMarker(v string) *ListVirtualMFADevicesInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListVirtualMFADevicesInput) SetMaxItems(v int64) *ListVirtualMFADevicesInput { + s.MaxItems = &v + return s +} + +// Contains the response to a successful ListVirtualMFADevices request. +type ListVirtualMFADevicesOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // The list of virtual MFA devices in the current account that match the AssignmentStatus + // value that was passed in the request. 
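+// Illustrative note (not part of the generated SDK): AssignmentStatus takes the values of the
+// assignmentStatusType enum ("Assigned", "Unassigned", "Any"); the SDK normally generates
+// matching constants such as iam.AssignmentStatusTypeAssigned (assumed here). Sketch:
+//
+//	out, err := iam.New(sess).ListVirtualMFADevices(&iam.ListVirtualMFADevicesInput{
+//		AssignmentStatus: aws.String(iam.AssignmentStatusTypeAssigned),
+//	})
+//	if err == nil {
+//		fmt.Println("assigned virtual MFA devices:", len(out.VirtualMFADevices))
+//	}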
+ // + // VirtualMFADevices is a required field + VirtualMFADevices []*VirtualMFADevice `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListVirtualMFADevicesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVirtualMFADevicesOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListVirtualMFADevicesOutput) SetIsTruncated(v bool) *ListVirtualMFADevicesOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListVirtualMFADevicesOutput) SetMarker(v string) *ListVirtualMFADevicesOutput { + s.Marker = &v + return s +} + +// SetVirtualMFADevices sets the VirtualMFADevices field's value. +func (s *ListVirtualMFADevicesOutput) SetVirtualMFADevices(v []*VirtualMFADevice) *ListVirtualMFADevicesOutput { + s.VirtualMFADevices = v + return s +} + +// Contains the user name and password create date for a user. +// +// This data type is used as a response element in the CreateLoginProfile and +// GetLoginProfile operations. +type LoginProfile struct { + _ struct{} `type:"structure"` + + // The date when the password for the user was created. + // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // Specifies whether the user is required to set a new password on next sign-in. + PasswordResetRequired *bool `type:"boolean"` + + // The name of the user, which can be used for signing in to the AWS Management + // Console. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s LoginProfile) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoginProfile) GoString() string { + return s.String() +} + +// SetCreateDate sets the CreateDate field's value. +func (s *LoginProfile) SetCreateDate(v time.Time) *LoginProfile { + s.CreateDate = &v + return s +} + +// SetPasswordResetRequired sets the PasswordResetRequired field's value. +func (s *LoginProfile) SetPasswordResetRequired(v bool) *LoginProfile { + s.PasswordResetRequired = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *LoginProfile) SetUserName(v string) *LoginProfile { + s.UserName = &v + return s +} + +// Contains information about an MFA device. +// +// This data type is used as a response element in the ListMFADevices operation. +type MFADevice struct { + _ struct{} `type:"structure"` + + // The date when the MFA device was enabled for the user. + // + // EnableDate is a required field + EnableDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The serial number that uniquely identifies the MFA device. For virtual MFA + // devices, the serial number is the device ARN. + // + // SerialNumber is a required field + SerialNumber *string `min:"9" type:"string" required:"true"` + + // The user with whom the MFA device is associated. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s MFADevice) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MFADevice) GoString() string { + return s.String() +} + +// SetEnableDate sets the EnableDate field's value. 
+func (s *MFADevice) SetEnableDate(v time.Time) *MFADevice { + s.EnableDate = &v + return s +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *MFADevice) SetSerialNumber(v string) *MFADevice { + s.SerialNumber = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *MFADevice) SetUserName(v string) *MFADevice { + s.UserName = &v + return s +} + +// Contains information about a managed policy, including the policy's ARN, +// versions, and the number of principal entities (users, groups, and roles) +// that the policy is attached to. +// +// This data type is used as a response element in the GetAccountAuthorizationDetails +// operation. +// +// For more information about managed policies, see Managed Policies and Inline +// Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type ManagedPolicyDetail struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + Arn *string `min:"20" type:"string"` + + // The number of principal entities (users, groups, and roles) that the policy + // is attached to. + AttachmentCount *int64 `type:"integer"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the policy was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The identifier for the version of the policy that is set as the default (operative) + // version. + // + // For more information about policy versions, see Versioning for Managed Policies + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the Using IAM guide. + DefaultVersionId *string `type:"string"` + + // A friendly description of the policy. + Description *string `type:"string"` + + // Specifies whether the policy can be attached to an IAM user, group, or role. + IsAttachable *bool `type:"boolean"` + + // The path to the policy. + // + // For more information about paths, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + Path *string `type:"string"` + + // The stable and unique string identifying the policy. + // + // For more information about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + PolicyId *string `min:"16" type:"string"` + + // The friendly name (not ARN) identifying the policy. + PolicyName *string `min:"1" type:"string"` + + // A list containing information about the versions of the policy. + PolicyVersionList []*PolicyVersion `type:"list"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the policy was last updated. + // + // When a policy has only one version, this field contains the date and time + // when the policy was created. When a policy has more than one version, this + // field contains the date and time when the most recent policy version was + // created. 
+ UpdateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s ManagedPolicyDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ManagedPolicyDetail) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *ManagedPolicyDetail) SetArn(v string) *ManagedPolicyDetail { + s.Arn = &v + return s +} + +// SetAttachmentCount sets the AttachmentCount field's value. +func (s *ManagedPolicyDetail) SetAttachmentCount(v int64) *ManagedPolicyDetail { + s.AttachmentCount = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *ManagedPolicyDetail) SetCreateDate(v time.Time) *ManagedPolicyDetail { + s.CreateDate = &v + return s +} + +// SetDefaultVersionId sets the DefaultVersionId field's value. +func (s *ManagedPolicyDetail) SetDefaultVersionId(v string) *ManagedPolicyDetail { + s.DefaultVersionId = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ManagedPolicyDetail) SetDescription(v string) *ManagedPolicyDetail { + s.Description = &v + return s +} + +// SetIsAttachable sets the IsAttachable field's value. +func (s *ManagedPolicyDetail) SetIsAttachable(v bool) *ManagedPolicyDetail { + s.IsAttachable = &v + return s +} + +// SetPath sets the Path field's value. +func (s *ManagedPolicyDetail) SetPath(v string) *ManagedPolicyDetail { + s.Path = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *ManagedPolicyDetail) SetPolicyId(v string) *ManagedPolicyDetail { + s.PolicyId = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *ManagedPolicyDetail) SetPolicyName(v string) *ManagedPolicyDetail { + s.PolicyName = &v + return s +} + +// SetPolicyVersionList sets the PolicyVersionList field's value. +func (s *ManagedPolicyDetail) SetPolicyVersionList(v []*PolicyVersion) *ManagedPolicyDetail { + s.PolicyVersionList = v + return s +} + +// SetUpdateDate sets the UpdateDate field's value. +func (s *ManagedPolicyDetail) SetUpdateDate(v time.Time) *ManagedPolicyDetail { + s.UpdateDate = &v + return s +} + +// Contains the Amazon Resource Name (ARN) for an IAM OpenID Connect provider. +type OpenIDConnectProviderListEntry struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + Arn *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s OpenIDConnectProviderListEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OpenIDConnectProviderListEntry) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *OpenIDConnectProviderListEntry) SetArn(v string) *OpenIDConnectProviderListEntry { + s.Arn = &v + return s +} + +// Contains information about AWS Organizations's effect on a policy simulation. +type OrganizationsDecisionDetail struct { + _ struct{} `type:"structure"` + + // Specifies whether the simulated operation is allowed by the AWS Organizations + // service control policies that impact the simulated user's account. 
+ AllowedByOrganizations *bool `type:"boolean"` +} + +// String returns the string representation +func (s OrganizationsDecisionDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OrganizationsDecisionDetail) GoString() string { + return s.String() +} + +// SetAllowedByOrganizations sets the AllowedByOrganizations field's value. +func (s *OrganizationsDecisionDetail) SetAllowedByOrganizations(v bool) *OrganizationsDecisionDetail { + s.AllowedByOrganizations = &v + return s +} + +// Contains information about the account password policy. +// +// This data type is used as a response element in the GetAccountPasswordPolicy +// operation. +type PasswordPolicy struct { + _ struct{} `type:"structure"` + + // Specifies whether IAM users are allowed to change their own password. + AllowUsersToChangePassword *bool `type:"boolean"` + + // Indicates whether passwords in the account expire. Returns true if MaxPasswordAge + // contains a value greater than 0. Returns false if MaxPasswordAge is 0 or + // not present. + ExpirePasswords *bool `type:"boolean"` + + // Specifies whether IAM users are prevented from setting a new password after + // their password has expired. + HardExpiry *bool `type:"boolean"` + + // The number of days that an IAM user password is valid. + MaxPasswordAge *int64 `min:"1" type:"integer"` + + // Minimum length to require for IAM user passwords. + MinimumPasswordLength *int64 `min:"6" type:"integer"` + + // Specifies the number of previous passwords that IAM users are prevented from + // reusing. + PasswordReusePrevention *int64 `min:"1" type:"integer"` + + // Specifies whether to require lowercase characters for IAM user passwords. + RequireLowercaseCharacters *bool `type:"boolean"` + + // Specifies whether to require numbers for IAM user passwords. + RequireNumbers *bool `type:"boolean"` + + // Specifies whether to require symbols for IAM user passwords. + RequireSymbols *bool `type:"boolean"` + + // Specifies whether to require uppercase characters for IAM user passwords. + RequireUppercaseCharacters *bool `type:"boolean"` +} + +// String returns the string representation +func (s PasswordPolicy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PasswordPolicy) GoString() string { + return s.String() +} + +// SetAllowUsersToChangePassword sets the AllowUsersToChangePassword field's value. +func (s *PasswordPolicy) SetAllowUsersToChangePassword(v bool) *PasswordPolicy { + s.AllowUsersToChangePassword = &v + return s +} + +// SetExpirePasswords sets the ExpirePasswords field's value. +func (s *PasswordPolicy) SetExpirePasswords(v bool) *PasswordPolicy { + s.ExpirePasswords = &v + return s +} + +// SetHardExpiry sets the HardExpiry field's value. +func (s *PasswordPolicy) SetHardExpiry(v bool) *PasswordPolicy { + s.HardExpiry = &v + return s +} + +// SetMaxPasswordAge sets the MaxPasswordAge field's value. +func (s *PasswordPolicy) SetMaxPasswordAge(v int64) *PasswordPolicy { + s.MaxPasswordAge = &v + return s +} + +// SetMinimumPasswordLength sets the MinimumPasswordLength field's value. +func (s *PasswordPolicy) SetMinimumPasswordLength(v int64) *PasswordPolicy { + s.MinimumPasswordLength = &v + return s +} + +// SetPasswordReusePrevention sets the PasswordReusePrevention field's value. 
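+// Illustrative note (not part of the generated SDK): PasswordPolicy is returned by
+// GetAccountPasswordPolicy; a minimal sketch of reading a few of the fields above, with sess a
+// configured *session.Session:
+//
+//	out, err := iam.New(sess).GetAccountPasswordPolicy(&iam.GetAccountPasswordPolicyInput{})
+//	if err != nil {
+//		return err
+//	}
+//	p := out.PasswordPolicy
+//	fmt.Printf("min length %d, expire=%t, symbols=%t\n",
+//		aws.Int64Value(p.MinimumPasswordLength),
+//		aws.BoolValue(p.ExpirePasswords),
+//		aws.BoolValue(p.RequireSymbols))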
+func (s *PasswordPolicy) SetPasswordReusePrevention(v int64) *PasswordPolicy { + s.PasswordReusePrevention = &v + return s +} + +// SetRequireLowercaseCharacters sets the RequireLowercaseCharacters field's value. +func (s *PasswordPolicy) SetRequireLowercaseCharacters(v bool) *PasswordPolicy { + s.RequireLowercaseCharacters = &v + return s +} + +// SetRequireNumbers sets the RequireNumbers field's value. +func (s *PasswordPolicy) SetRequireNumbers(v bool) *PasswordPolicy { + s.RequireNumbers = &v + return s +} + +// SetRequireSymbols sets the RequireSymbols field's value. +func (s *PasswordPolicy) SetRequireSymbols(v bool) *PasswordPolicy { + s.RequireSymbols = &v + return s +} + +// SetRequireUppercaseCharacters sets the RequireUppercaseCharacters field's value. +func (s *PasswordPolicy) SetRequireUppercaseCharacters(v bool) *PasswordPolicy { + s.RequireUppercaseCharacters = &v + return s +} + +// Contains information about a managed policy. +// +// This data type is used as a response element in the CreatePolicy, GetPolicy, +// and ListPolicies operations. +// +// For more information about managed policies, refer to Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type Policy struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + Arn *string `min:"20" type:"string"` + + // The number of entities (users, groups, and roles) that the policy is attached + // to. + AttachmentCount *int64 `type:"integer"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the policy was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The identifier for the version of the policy that is set as the default version. + DefaultVersionId *string `type:"string"` + + // A friendly description of the policy. + // + // This element is included in the response to the GetPolicy operation. It is + // not included in the response to the ListPolicies operation. + Description *string `type:"string"` + + // Specifies whether the policy can be attached to an IAM user, group, or role. + IsAttachable *bool `type:"boolean"` + + // The path to the policy. + // + // For more information about paths, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + Path *string `type:"string"` + + // The stable and unique string identifying the policy. + // + // For more information about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + PolicyId *string `min:"16" type:"string"` + + // The friendly name (not ARN) identifying the policy. + PolicyName *string `min:"1" type:"string"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the policy was last updated. + // + // When a policy has only one version, this field contains the date and time + // when the policy was created. When a policy has more than one version, this + // field contains the date and time when the most recent policy version was + // created. 
+ UpdateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s Policy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Policy) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *Policy) SetArn(v string) *Policy { + s.Arn = &v + return s +} + +// SetAttachmentCount sets the AttachmentCount field's value. +func (s *Policy) SetAttachmentCount(v int64) *Policy { + s.AttachmentCount = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *Policy) SetCreateDate(v time.Time) *Policy { + s.CreateDate = &v + return s +} + +// SetDefaultVersionId sets the DefaultVersionId field's value. +func (s *Policy) SetDefaultVersionId(v string) *Policy { + s.DefaultVersionId = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Policy) SetDescription(v string) *Policy { + s.Description = &v + return s +} + +// SetIsAttachable sets the IsAttachable field's value. +func (s *Policy) SetIsAttachable(v bool) *Policy { + s.IsAttachable = &v + return s +} + +// SetPath sets the Path field's value. +func (s *Policy) SetPath(v string) *Policy { + s.Path = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *Policy) SetPolicyId(v string) *Policy { + s.PolicyId = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *Policy) SetPolicyName(v string) *Policy { + s.PolicyName = &v + return s +} + +// SetUpdateDate sets the UpdateDate field's value. +func (s *Policy) SetUpdateDate(v time.Time) *Policy { + s.UpdateDate = &v + return s +} + +// Contains information about an IAM policy, including the policy document. +// +// This data type is used as a response element in the GetAccountAuthorizationDetails +// operation. +type PolicyDetail struct { + _ struct{} `type:"structure"` + + // The policy document. + PolicyDocument *string `min:"1" type:"string"` + + // The name of the policy. + PolicyName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PolicyDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyDetail) GoString() string { + return s.String() +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *PolicyDetail) SetPolicyDocument(v string) *PolicyDetail { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PolicyDetail) SetPolicyName(v string) *PolicyDetail { + s.PolicyName = &v + return s +} + +// Contains information about a group that a managed policy is attached to. +// +// This data type is used as a response element in the ListEntitiesForPolicy +// operation. +// +// For more information about managed policies, refer to Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type PolicyGroup struct { + _ struct{} `type:"structure"` + + // The stable and unique string identifying the group. For more information + // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) + // in the IAM User Guide. + GroupId *string `min:"16" type:"string"` + + // The name (friendly name, not ARN) identifying the group. 
+ GroupName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PolicyGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyGroup) GoString() string { + return s.String() +} + +// SetGroupId sets the GroupId field's value. +func (s *PolicyGroup) SetGroupId(v string) *PolicyGroup { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *PolicyGroup) SetGroupName(v string) *PolicyGroup { + s.GroupName = &v + return s +} + +// Contains information about a role that a managed policy is attached to. +// +// This data type is used as a response element in the ListEntitiesForPolicy +// operation. +// +// For more information about managed policies, refer to Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type PolicyRole struct { + _ struct{} `type:"structure"` + + // The stable and unique string identifying the role. For more information about + // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) + // in the IAM User Guide. + RoleId *string `min:"16" type:"string"` + + // The name (friendly name, not ARN) identifying the role. + RoleName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PolicyRole) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyRole) GoString() string { + return s.String() +} + +// SetRoleId sets the RoleId field's value. +func (s *PolicyRole) SetRoleId(v string) *PolicyRole { + s.RoleId = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *PolicyRole) SetRoleName(v string) *PolicyRole { + s.RoleName = &v + return s +} + +// Contains information about a user that a managed policy is attached to. +// +// This data type is used as a response element in the ListEntitiesForPolicy +// operation. +// +// For more information about managed policies, refer to Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type PolicyUser struct { + _ struct{} `type:"structure"` + + // The stable and unique string identifying the user. For more information about + // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) + // in the IAM User Guide. + UserId *string `min:"16" type:"string"` + + // The name (friendly name, not ARN) identifying the user. + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PolicyUser) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyUser) GoString() string { + return s.String() +} + +// SetUserId sets the UserId field's value. +func (s *PolicyUser) SetUserId(v string) *PolicyUser { + s.UserId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *PolicyUser) SetUserName(v string) *PolicyUser { + s.UserName = &v + return s +} + +// Contains information about a version of a managed policy. +// +// This data type is used as a response element in the CreatePolicyVersion, +// GetPolicyVersion, ListPolicyVersions, and GetAccountAuthorizationDetails +// operations. 
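+// Illustrative note (not part of the generated SDK): PolicyGroup, PolicyRole, and PolicyUser
+// above come back from ListEntitiesForPolicy, which reports where a managed policy is attached.
+// A minimal sketch (the policy ARN is a placeholder):
+//
+//	out, err := iam.New(sess).ListEntitiesForPolicy(&iam.ListEntitiesForPolicyInput{
+//		PolicyArn: aws.String("arn:aws:iam::123456789012:policy/example"),
+//	})
+//	if err != nil {
+//		return err
+//	}
+//	fmt.Printf("attached to %d groups, %d roles, %d users\n",
+//		len(out.PolicyGroups), len(out.PolicyRoles), len(out.PolicyUsers))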
+// +// For more information about managed policies, refer to Managed Policies and +// Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) +// in the Using IAM guide. +type PolicyVersion struct { + _ struct{} `type:"structure"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the policy version was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The policy document. + // + // The policy document is returned in the response to the GetPolicyVersion and + // GetAccountAuthorizationDetails operations. It is not returned in the response + // to the CreatePolicyVersion or ListPolicyVersions operations. + // + // The policy document returned in this structure is URL-encoded compliant with + // RFC 3986 (https://tools.ietf.org/html/rfc3986). You can use a URL decoding + // method to convert the policy back to plain JSON text. For example, if you + // use Java, you can use the decode method of the java.net.URLDecoder utility + // class in the Java SDK. Other languages and SDKs provide similar functionality. + Document *string `min:"1" type:"string"` + + // Specifies whether the policy version is set as the policy's default version. + IsDefaultVersion *bool `type:"boolean"` + + // The identifier for the policy version. + // + // Policy version identifiers always begin with v (always lowercase). When a + // policy is created, the first policy version is v1. + VersionId *string `type:"string"` +} + +// String returns the string representation +func (s PolicyVersion) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyVersion) GoString() string { + return s.String() +} + +// SetCreateDate sets the CreateDate field's value. +func (s *PolicyVersion) SetCreateDate(v time.Time) *PolicyVersion { + s.CreateDate = &v + return s +} + +// SetDocument sets the Document field's value. +func (s *PolicyVersion) SetDocument(v string) *PolicyVersion { + s.Document = &v + return s +} + +// SetIsDefaultVersion sets the IsDefaultVersion field's value. +func (s *PolicyVersion) SetIsDefaultVersion(v bool) *PolicyVersion { + s.IsDefaultVersion = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *PolicyVersion) SetVersionId(v string) *PolicyVersion { + s.VersionId = &v + return s +} + +// Contains the row and column of a location of a Statement element in a policy +// document. +// +// This data type is used as a member of the Statement type. +type Position struct { + _ struct{} `type:"structure"` + + // The column in the line containing the specified position in the document. + Column *int64 `type:"integer"` + + // The line containing the specified position in the document. + Line *int64 `type:"integer"` +} + +// String returns the string representation +func (s Position) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Position) GoString() string { + return s.String() +} + +// SetColumn sets the Column field's value. +func (s *Position) SetColumn(v int64) *Position { + s.Column = &v + return s +} + +// SetLine sets the Line field's value. +func (s *Position) SetLine(v int64) *Position { + s.Line = &v + return s +} + +type PutGroupPolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the group to associate the policy with. 
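+// Illustrative note (not part of the generated SDK): as described for PolicyVersion.Document
+// above, the returned document is URL-encoded; net/url can decode it back to plain JSON. Sketch,
+// assuming ver is a *iam.PolicyVersion obtained from GetPolicyVersion:
+//
+//	doc, err := url.QueryUnescape(aws.StringValue(ver.Document))
+//	if err != nil {
+//		return err
+//	}
+//	fmt.Println(doc) // plain-text JSON policy document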
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The policy document. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the policy document. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutGroupPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutGroupPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutGroupPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutGroupPolicyInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *PutGroupPolicyInput) SetGroupName(v string) *PutGroupPolicyInput { + s.GroupName = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *PutGroupPolicyInput) SetPolicyDocument(v string) *PutGroupPolicyInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PutGroupPolicyInput) SetPolicyName(v string) *PutGroupPolicyInput { + s.PolicyName = &v + return s +} + +type PutGroupPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutGroupPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutGroupPolicyOutput) GoString() string { + return s.String() +} + +type PutRolePolicyInput struct { + _ struct{} `type:"structure"` + + // The policy document. 
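// Editor's sketch (caller-side Go, not part of this vendored SDK file): putting an
// inline policy on a group with PutGroupPolicy, supplying the required GroupName,
// PolicyName, and PolicyDocument fields described above. The group name, policy
// name, and policy JSON are hypothetical placeholders; imports are the same as in
// the earlier sketch.
func examplePutGroupPolicy() {
	svc := iam.New(session.Must(session.NewSession()))

	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:GetObject"],"Resource":"*"}]}`

	_, err := svc.PutGroupPolicy(&iam.PutGroupPolicyInput{
		GroupName:      aws.String("Developers"), // hypothetical group
		PolicyName:     aws.String("S3ReadOnly"), // hypothetical policy name
		PolicyDocument: aws.String(policy),
	})
	if err != nil {
		log.Fatal(err)
	}
}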
+ // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the policy document. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The name of the role to associate the policy with. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutRolePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRolePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutRolePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutRolePolicyInput"} + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *PutRolePolicyInput) SetPolicyDocument(v string) *PutRolePolicyInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PutRolePolicyInput) SetPolicyName(v string) *PutRolePolicyInput { + s.PolicyName = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *PutRolePolicyInput) SetRoleName(v string) *PutRolePolicyInput { + s.RoleName = &v + return s +} + +type PutRolePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutRolePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRolePolicyOutput) GoString() string { + return s.String() +} + +type PutUserPolicyInput struct { + _ struct{} `type:"structure"` + + // The policy document. 
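// Editor's sketch (caller-side Go, not part of this vendored SDK file): the same
// kind of inline-policy call for a role, built with the fluent Set* methods that
// this file generates for PutRolePolicyInput rather than a struct literal. The
// role name, policy name, and document are hypothetical placeholders.
func examplePutRolePolicy() {
	svc := iam.New(session.Must(session.NewSession()))

	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"sqs:SendMessage","Resource":"*"}]}`

	input := new(iam.PutRolePolicyInput).
		SetRoleName("app-task-role").
		SetPolicyName("SqsSend").
		SetPolicyDocument(policy)

	if _, err := svc.PutRolePolicy(input); err != nil {
		log.Fatal(err)
	}
}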
+ // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the policy document. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // The name of the user to associate the policy with. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutUserPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutUserPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutUserPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutUserPolicyInput"} + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *PutUserPolicyInput) SetPolicyDocument(v string) *PutUserPolicyInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PutUserPolicyInput) SetPolicyName(v string) *PutUserPolicyInput { + s.PolicyName = &v + return s +} + +// SetUserName sets the UserName field's value. 
+func (s *PutUserPolicyInput) SetUserName(v string) *PutUserPolicyInput { + s.UserName = &v + return s +} + +type PutUserPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutUserPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutUserPolicyOutput) GoString() string { + return s.String() +} + +type RemoveClientIDFromOpenIDConnectProviderInput struct { + _ struct{} `type:"structure"` + + // The client ID (also known as audience) to remove from the IAM OIDC provider + // resource. For more information about client IDs, see CreateOpenIDConnectProvider. + // + // ClientID is a required field + ClientID *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM OIDC provider resource to remove + // the client ID from. You can get a list of OIDC provider ARNs by using the + // ListOpenIDConnectProviders operation. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // OpenIDConnectProviderArn is a required field + OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s RemoveClientIDFromOpenIDConnectProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveClientIDFromOpenIDConnectProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveClientIDFromOpenIDConnectProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveClientIDFromOpenIDConnectProviderInput"} + if s.ClientID == nil { + invalidParams.Add(request.NewErrParamRequired("ClientID")) + } + if s.ClientID != nil && len(*s.ClientID) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientID", 1)) + } + if s.OpenIDConnectProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("OpenIDConnectProviderArn")) + } + if s.OpenIDConnectProviderArn != nil && len(*s.OpenIDConnectProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("OpenIDConnectProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientID sets the ClientID field's value. +func (s *RemoveClientIDFromOpenIDConnectProviderInput) SetClientID(v string) *RemoveClientIDFromOpenIDConnectProviderInput { + s.ClientID = &v + return s +} + +// SetOpenIDConnectProviderArn sets the OpenIDConnectProviderArn field's value. +func (s *RemoveClientIDFromOpenIDConnectProviderInput) SetOpenIDConnectProviderArn(v string) *RemoveClientIDFromOpenIDConnectProviderInput { + s.OpenIDConnectProviderArn = &v + return s +} + +type RemoveClientIDFromOpenIDConnectProviderOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RemoveClientIDFromOpenIDConnectProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveClientIDFromOpenIDConnectProviderOutput) GoString() string { + return s.String() +} + +type RemoveRoleFromInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the instance profile to update. 
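// Editor's sketch (caller-side Go, not part of this vendored SDK file): removing a
// client ID (audience) from an OIDC provider, per the input described above. The
// provider ARN and client ID are hypothetical placeholders.
func exampleRemoveClientID() {
	svc := iam.New(session.Must(session.NewSession()))

	_, err := svc.RemoveClientIDFromOpenIDConnectProvider(&iam.RemoveClientIDFromOpenIDConnectProviderInput{
		OpenIDConnectProviderArn: aws.String("arn:aws:iam::123456789012:oidc-provider/oidc.example.com"), // hypothetical
		ClientID:                 aws.String("my-application-id"),                                        // hypothetical
	})
	if err != nil {
		log.Fatal(err)
	}
}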
+ // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // InstanceProfileName is a required field + InstanceProfileName *string `min:"1" type:"string" required:"true"` + + // The name of the role to remove. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RemoveRoleFromInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveRoleFromInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveRoleFromInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveRoleFromInstanceProfileInput"} + if s.InstanceProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceProfileName")) + } + if s.InstanceProfileName != nil && len(*s.InstanceProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceProfileName", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceProfileName sets the InstanceProfileName field's value. +func (s *RemoveRoleFromInstanceProfileInput) SetInstanceProfileName(v string) *RemoveRoleFromInstanceProfileInput { + s.InstanceProfileName = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *RemoveRoleFromInstanceProfileInput) SetRoleName(v string) *RemoveRoleFromInstanceProfileInput { + s.RoleName = &v + return s +} + +type RemoveRoleFromInstanceProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RemoveRoleFromInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveRoleFromInstanceProfileOutput) GoString() string { + return s.String() +} + +type RemoveUserFromGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the group to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The name of the user to remove. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RemoveUserFromGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveUserFromGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveUserFromGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveUserFromGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *RemoveUserFromGroupInput) SetGroupName(v string) *RemoveUserFromGroupInput { + s.GroupName = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *RemoveUserFromGroupInput) SetUserName(v string) *RemoveUserFromGroupInput { + s.UserName = &v + return s +} + +type RemoveUserFromGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RemoveUserFromGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveUserFromGroupOutput) GoString() string { + return s.String() +} + +type ResetServiceSpecificCredentialInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the service-specific credential. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // ServiceSpecificCredentialId is a required field + ServiceSpecificCredentialId *string `min:"20" type:"string" required:"true"` + + // The name of the IAM user associated with the service-specific credential. + // If this value is not specified, then the operation assumes the user whose + // credentials are used to call the operation. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ResetServiceSpecificCredentialInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetServiceSpecificCredentialInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
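// Editor's sketch (caller-side Go, not part of this vendored SDK file): the
// Validate methods generated on these input types are also run by the SDK before a
// request is sent, so missing required parameters fail fast without an API call.
// The group name is a hypothetical placeholder.
func exampleValidateInput() {
	// GroupName is set, but the required UserName is not.
	input := &iam.RemoveUserFromGroupInput{
		GroupName: aws.String("Developers"), // hypothetical group
	}
	if err := input.Validate(); err != nil {
		fmt.Println("invalid input:", err) // reports the missing UserName parameter
	}
}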
+func (s *ResetServiceSpecificCredentialInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetServiceSpecificCredentialInput"} + if s.ServiceSpecificCredentialId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceSpecificCredentialId")) + } + if s.ServiceSpecificCredentialId != nil && len(*s.ServiceSpecificCredentialId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("ServiceSpecificCredentialId", 20)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. +func (s *ResetServiceSpecificCredentialInput) SetServiceSpecificCredentialId(v string) *ResetServiceSpecificCredentialInput { + s.ServiceSpecificCredentialId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ResetServiceSpecificCredentialInput) SetUserName(v string) *ResetServiceSpecificCredentialInput { + s.UserName = &v + return s +} + +type ResetServiceSpecificCredentialOutput struct { + _ struct{} `type:"structure"` + + // A structure with details about the updated service-specific credential, including + // the new password. + // + // This is the only time that you can access the password. You cannot recover + // the password later, but you can reset it again. + ServiceSpecificCredential *ServiceSpecificCredential `type:"structure"` +} + +// String returns the string representation +func (s ResetServiceSpecificCredentialOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetServiceSpecificCredentialOutput) GoString() string { + return s.String() +} + +// SetServiceSpecificCredential sets the ServiceSpecificCredential field's value. +func (s *ResetServiceSpecificCredentialOutput) SetServiceSpecificCredential(v *ServiceSpecificCredential) *ResetServiceSpecificCredentialOutput { + s.ServiceSpecificCredential = v + return s +} + +// Contains the result of the simulation of a single API operation call on a +// single resource. +// +// This data type is used by a member of the EvaluationResult data type. +type ResourceSpecificResult struct { + _ struct{} `type:"structure"` + + // Additional details about the results of the evaluation decision. When there + // are both IAM policies and resource policies, this parameter explains how + // each set of policies contributes to the final evaluation decision. When simulating + // cross-account access to a resource, both the resource-based policy and the + // caller's IAM policy must grant access. + EvalDecisionDetails map[string]*string `type:"map"` + + // The result of the simulation of the simulated API operation on the resource + // specified in EvalResourceName. + // + // EvalResourceDecision is a required field + EvalResourceDecision *string `type:"string" required:"true" enum:"PolicyEvaluationDecisionType"` + + // The name of the simulated resource, in Amazon Resource Name (ARN) format. + // + // EvalResourceName is a required field + EvalResourceName *string `min:"1" type:"string" required:"true"` + + // A list of the statements in the input policies that determine the result + // for this part of the simulation. 
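// Editor's sketch (caller-side Go, not part of this vendored SDK file): resetting a
// service-specific credential and capturing the new password which, as the output
// documentation above notes, is only available in this response. The credential ID
// is a hypothetical placeholder.
func exampleResetServiceSpecificCredential() {
	svc := iam.New(session.Must(session.NewSession()))

	out, err := svc.ResetServiceSpecificCredential(&iam.ResetServiceSpecificCredentialInput{
		ServiceSpecificCredentialId: aws.String("ACCAEXAMPLE123EXAMPLE"), // hypothetical ID
	})
	if err != nil {
		log.Fatal(err)
	}

	// Store this immediately; it cannot be retrieved later, only reset again.
	newPassword := aws.StringValue(out.ServiceSpecificCredential.ServicePassword)
	_ = newPassword
}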
Remember that even if multiple statements + // allow the operation on the resource, if any statement denies that operation, + // then the explicit deny overrides any allow, and the deny statement is the + // only entry included in the result. + MatchedStatements []*Statement `type:"list"` + + // A list of context keys that are required by the included input policies but + // that were not provided by one of the input parameters. This list is used + // when a list of ARNs is included in the ResourceArns parameter instead of + // "*". If you do not specify individual resources, by setting ResourceArns + // to "*" or by not including the ResourceArns parameter, then any missing context + // values are instead included under the EvaluationResults section. To discover + // the context keys used by a set of policies, you can call GetContextKeysForCustomPolicy + // or GetContextKeysForPrincipalPolicy. + MissingContextValues []*string `type:"list"` +} + +// String returns the string representation +func (s ResourceSpecificResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceSpecificResult) GoString() string { + return s.String() +} + +// SetEvalDecisionDetails sets the EvalDecisionDetails field's value. +func (s *ResourceSpecificResult) SetEvalDecisionDetails(v map[string]*string) *ResourceSpecificResult { + s.EvalDecisionDetails = v + return s +} + +// SetEvalResourceDecision sets the EvalResourceDecision field's value. +func (s *ResourceSpecificResult) SetEvalResourceDecision(v string) *ResourceSpecificResult { + s.EvalResourceDecision = &v + return s +} + +// SetEvalResourceName sets the EvalResourceName field's value. +func (s *ResourceSpecificResult) SetEvalResourceName(v string) *ResourceSpecificResult { + s.EvalResourceName = &v + return s +} + +// SetMatchedStatements sets the MatchedStatements field's value. +func (s *ResourceSpecificResult) SetMatchedStatements(v []*Statement) *ResourceSpecificResult { + s.MatchedStatements = v + return s +} + +// SetMissingContextValues sets the MissingContextValues field's value. +func (s *ResourceSpecificResult) SetMissingContextValues(v []*string) *ResourceSpecificResult { + s.MissingContextValues = v + return s +} + +type ResyncMFADeviceInput struct { + _ struct{} `type:"structure"` + + // An authentication code emitted by the device. + // + // The format for this parameter is a sequence of six digits. + // + // AuthenticationCode1 is a required field + AuthenticationCode1 *string `min:"6" type:"string" required:"true"` + + // A subsequent authentication code emitted by the device. + // + // The format for this parameter is a sequence of six digits. + // + // AuthenticationCode2 is a required field + AuthenticationCode2 *string `min:"6" type:"string" required:"true"` + + // Serial number that uniquely identifies the MFA device. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // SerialNumber is a required field + SerialNumber *string `min:"9" type:"string" required:"true"` + + // The name of the user whose MFA device you want to resynchronize. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ResyncMFADeviceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResyncMFADeviceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResyncMFADeviceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResyncMFADeviceInput"} + if s.AuthenticationCode1 == nil { + invalidParams.Add(request.NewErrParamRequired("AuthenticationCode1")) + } + if s.AuthenticationCode1 != nil && len(*s.AuthenticationCode1) < 6 { + invalidParams.Add(request.NewErrParamMinLen("AuthenticationCode1", 6)) + } + if s.AuthenticationCode2 == nil { + invalidParams.Add(request.NewErrParamRequired("AuthenticationCode2")) + } + if s.AuthenticationCode2 != nil && len(*s.AuthenticationCode2) < 6 { + invalidParams.Add(request.NewErrParamMinLen("AuthenticationCode2", 6)) + } + if s.SerialNumber == nil { + invalidParams.Add(request.NewErrParamRequired("SerialNumber")) + } + if s.SerialNumber != nil && len(*s.SerialNumber) < 9 { + invalidParams.Add(request.NewErrParamMinLen("SerialNumber", 9)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthenticationCode1 sets the AuthenticationCode1 field's value. +func (s *ResyncMFADeviceInput) SetAuthenticationCode1(v string) *ResyncMFADeviceInput { + s.AuthenticationCode1 = &v + return s +} + +// SetAuthenticationCode2 sets the AuthenticationCode2 field's value. +func (s *ResyncMFADeviceInput) SetAuthenticationCode2(v string) *ResyncMFADeviceInput { + s.AuthenticationCode2 = &v + return s +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *ResyncMFADeviceInput) SetSerialNumber(v string) *ResyncMFADeviceInput { + s.SerialNumber = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ResyncMFADeviceInput) SetUserName(v string) *ResyncMFADeviceInput { + s.UserName = &v + return s +} + +type ResyncMFADeviceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ResyncMFADeviceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResyncMFADeviceOutput) GoString() string { + return s.String() +} + +// Contains information about an IAM role. This structure is returned as a response +// element in several API operations that interact with roles. +type Role struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) specifying the role. For more information + // about ARNs and how to use them in policies, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide guide. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // The policy that grants an entity permission to assume the role. + AssumeRolePolicyDocument *string `min:"1" type:"string"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the role was created. 
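// Editor's sketch (caller-side Go, not part of this vendored SDK file):
// resynchronizing an MFA device with two consecutive six-digit codes, matching the
// field constraints above. The user name, serial number, and codes are hypothetical
// placeholders.
func exampleResyncMFADevice() {
	svc := iam.New(session.Must(session.NewSession()))

	_, err := svc.ResyncMFADevice(&iam.ResyncMFADeviceInput{
		UserName:            aws.String("alice"),                               // hypothetical user
		SerialNumber:        aws.String("arn:aws:iam::123456789012:mfa/alice"), // hypothetical device
		AuthenticationCode1: aws.String("123456"),
		AuthenticationCode2: aws.String("789012"),
	})
	if err != nil {
		log.Fatal(err)
	}
}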
+ // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // A description of the role that you provide. + Description *string `type:"string"` + + // The maximum session duration (in seconds) for the specified role. Anyone + // who uses the AWS CLI or API to assume the role can specify the duration using + // the optional DurationSeconds API parameter or duration-seconds CLI parameter. + MaxSessionDuration *int64 `min:"3600" type:"integer"` + + // The path to the role. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Path is a required field + Path *string `min:"1" type:"string" required:"true"` + + // The stable and unique string identifying the role. For more information about + // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // RoleId is a required field + RoleId *string `min:"16" type:"string" required:"true"` + + // The friendly name that identifies the role. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Role) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Role) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *Role) SetArn(v string) *Role { + s.Arn = &v + return s +} + +// SetAssumeRolePolicyDocument sets the AssumeRolePolicyDocument field's value. +func (s *Role) SetAssumeRolePolicyDocument(v string) *Role { + s.AssumeRolePolicyDocument = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *Role) SetCreateDate(v time.Time) *Role { + s.CreateDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Role) SetDescription(v string) *Role { + s.Description = &v + return s +} + +// SetMaxSessionDuration sets the MaxSessionDuration field's value. +func (s *Role) SetMaxSessionDuration(v int64) *Role { + s.MaxSessionDuration = &v + return s +} + +// SetPath sets the Path field's value. +func (s *Role) SetPath(v string) *Role { + s.Path = &v + return s +} + +// SetRoleId sets the RoleId field's value. +func (s *Role) SetRoleId(v string) *Role { + s.RoleId = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *Role) SetRoleName(v string) *Role { + s.RoleName = &v + return s +} + +// Contains information about an IAM role, including all of the role's policies. +// +// This data type is used as a response element in the GetAccountAuthorizationDetails +// operation. +type RoleDetail struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + Arn *string `min:"20" type:"string"` + + // The trust policy that grants permission to assume the role. + AssumeRolePolicyDocument *string `min:"1" type:"string"` + + // A list of managed policies attached to the role. These policies are the role's + // access (permissions) policies. 
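// Editor's sketch (caller-side Go, not part of this vendored SDK file): reading the
// Role structure above through GetRole, one of the operations that return it. The
// GetRole input and output shapes are not shown in this excerpt, so treat them as
// assumed; the role name is a hypothetical placeholder.
func exampleGetRole() {
	svc := iam.New(session.Must(session.NewSession()))

	out, err := svc.GetRole(&iam.GetRoleInput{
		RoleName: aws.String("app-task-role"), // hypothetical role
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(aws.StringValue(out.Role.Arn))
	fmt.Println("max session:", aws.Int64Value(out.Role.MaxSessionDuration), "seconds")
}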
+ AttachedManagedPolicies []*AttachedPolicy `type:"list"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the role was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // A list of instance profiles that contain this role. + InstanceProfileList []*InstanceProfile `type:"list"` + + // The path to the role. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + Path *string `min:"1" type:"string"` + + // The stable and unique string identifying the role. For more information about + // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + RoleId *string `min:"16" type:"string"` + + // The friendly name that identifies the role. + RoleName *string `min:"1" type:"string"` + + // A list of inline policies embedded in the role. These policies are the role's + // access (permissions) policies. + RolePolicyList []*PolicyDetail `type:"list"` +} + +// String returns the string representation +func (s RoleDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RoleDetail) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *RoleDetail) SetArn(v string) *RoleDetail { + s.Arn = &v + return s +} + +// SetAssumeRolePolicyDocument sets the AssumeRolePolicyDocument field's value. +func (s *RoleDetail) SetAssumeRolePolicyDocument(v string) *RoleDetail { + s.AssumeRolePolicyDocument = &v + return s +} + +// SetAttachedManagedPolicies sets the AttachedManagedPolicies field's value. +func (s *RoleDetail) SetAttachedManagedPolicies(v []*AttachedPolicy) *RoleDetail { + s.AttachedManagedPolicies = v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *RoleDetail) SetCreateDate(v time.Time) *RoleDetail { + s.CreateDate = &v + return s +} + +// SetInstanceProfileList sets the InstanceProfileList field's value. +func (s *RoleDetail) SetInstanceProfileList(v []*InstanceProfile) *RoleDetail { + s.InstanceProfileList = v + return s +} + +// SetPath sets the Path field's value. +func (s *RoleDetail) SetPath(v string) *RoleDetail { + s.Path = &v + return s +} + +// SetRoleId sets the RoleId field's value. +func (s *RoleDetail) SetRoleId(v string) *RoleDetail { + s.RoleId = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *RoleDetail) SetRoleName(v string) *RoleDetail { + s.RoleName = &v + return s +} + +// SetRolePolicyList sets the RolePolicyList field's value. +func (s *RoleDetail) SetRolePolicyList(v []*PolicyDetail) *RoleDetail { + s.RolePolicyList = v + return s +} + +// An object that contains details about how a service-linked role is used, +// if that information is returned by the service. +// +// This data type is used as a response element in the GetServiceLinkedRoleDeletionStatus +// operation. +type RoleUsageType struct { + _ struct{} `type:"structure"` + + // The name of the region where the service-linked role is being used. + Region *string `min:"1" type:"string"` + + // The name of the resource that is using the service-linked role. 
+ Resources []*string `type:"list"` +} + +// String returns the string representation +func (s RoleUsageType) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RoleUsageType) GoString() string { + return s.String() +} + +// SetRegion sets the Region field's value. +func (s *RoleUsageType) SetRegion(v string) *RoleUsageType { + s.Region = &v + return s +} + +// SetResources sets the Resources field's value. +func (s *RoleUsageType) SetResources(v []*string) *RoleUsageType { + s.Resources = v + return s +} + +// Contains the list of SAML providers for this account. +type SAMLProviderListEntry struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the SAML provider. + Arn *string `min:"20" type:"string"` + + // The date and time when the SAML provider was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The expiration date and time for the SAML provider. + ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s SAMLProviderListEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SAMLProviderListEntry) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *SAMLProviderListEntry) SetArn(v string) *SAMLProviderListEntry { + s.Arn = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *SAMLProviderListEntry) SetCreateDate(v time.Time) *SAMLProviderListEntry { + s.CreateDate = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *SAMLProviderListEntry) SetValidUntil(v time.Time) *SAMLProviderListEntry { + s.ValidUntil = &v + return s +} + +// Contains information about an SSH public key. +// +// This data type is used as a response element in the GetSSHPublicKey and UploadSSHPublicKey +// operations. +type SSHPublicKey struct { + _ struct{} `type:"structure"` + + // The MD5 message digest of the SSH public key. + // + // Fingerprint is a required field + Fingerprint *string `min:"48" type:"string" required:"true"` + + // The SSH public key. + // + // SSHPublicKeyBody is a required field + SSHPublicKeyBody *string `min:"1" type:"string" required:"true"` + + // The unique identifier for the SSH public key. + // + // SSHPublicKeyId is a required field + SSHPublicKeyId *string `min:"20" type:"string" required:"true"` + + // The status of the SSH public key. Active means that the key can be used for + // authentication with an AWS CodeCommit repository. Inactive means that the + // key cannot be used. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the SSH public key was uploaded. + UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The name of the IAM user associated with the SSH public key. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s SSHPublicKey) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSHPublicKey) GoString() string { + return s.String() +} + +// SetFingerprint sets the Fingerprint field's value. 
+func (s *SSHPublicKey) SetFingerprint(v string) *SSHPublicKey { + s.Fingerprint = &v + return s +} + +// SetSSHPublicKeyBody sets the SSHPublicKeyBody field's value. +func (s *SSHPublicKey) SetSSHPublicKeyBody(v string) *SSHPublicKey { + s.SSHPublicKeyBody = &v + return s +} + +// SetSSHPublicKeyId sets the SSHPublicKeyId field's value. +func (s *SSHPublicKey) SetSSHPublicKeyId(v string) *SSHPublicKey { + s.SSHPublicKeyId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *SSHPublicKey) SetStatus(v string) *SSHPublicKey { + s.Status = &v + return s +} + +// SetUploadDate sets the UploadDate field's value. +func (s *SSHPublicKey) SetUploadDate(v time.Time) *SSHPublicKey { + s.UploadDate = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *SSHPublicKey) SetUserName(v string) *SSHPublicKey { + s.UserName = &v + return s +} + +// Contains information about an SSH public key, without the key's body or fingerprint. +// +// This data type is used as a response element in the ListSSHPublicKeys operation. +type SSHPublicKeyMetadata struct { + _ struct{} `type:"structure"` + + // The unique identifier for the SSH public key. + // + // SSHPublicKeyId is a required field + SSHPublicKeyId *string `min:"20" type:"string" required:"true"` + + // The status of the SSH public key. Active means that the key can be used for + // authentication with an AWS CodeCommit repository. Inactive means that the + // key cannot be used. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the SSH public key was uploaded. + // + // UploadDate is a required field + UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The name of the IAM user associated with the SSH public key. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s SSHPublicKeyMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSHPublicKeyMetadata) GoString() string { + return s.String() +} + +// SetSSHPublicKeyId sets the SSHPublicKeyId field's value. +func (s *SSHPublicKeyMetadata) SetSSHPublicKeyId(v string) *SSHPublicKeyMetadata { + s.SSHPublicKeyId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *SSHPublicKeyMetadata) SetStatus(v string) *SSHPublicKeyMetadata { + s.Status = &v + return s +} + +// SetUploadDate sets the UploadDate field's value. +func (s *SSHPublicKeyMetadata) SetUploadDate(v time.Time) *SSHPublicKeyMetadata { + s.UploadDate = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *SSHPublicKeyMetadata) SetUserName(v string) *SSHPublicKeyMetadata { + s.UserName = &v + return s +} + +// Contains information about a server certificate. +// +// This data type is used as a response element in the GetServerCertificate +// operation. +type ServerCertificate struct { + _ struct{} `type:"structure"` + + // The contents of the public key certificate. + // + // CertificateBody is a required field + CertificateBody *string `min:"1" type:"string" required:"true"` + + // The contents of the public key certificate chain. + CertificateChain *string `min:"1" type:"string"` + + // The meta information of the server certificate, such as its name, path, ID, + // and ARN. 
+ // + // ServerCertificateMetadata is a required field + ServerCertificateMetadata *ServerCertificateMetadata `type:"structure" required:"true"` +} + +// String returns the string representation +func (s ServerCertificate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerCertificate) GoString() string { + return s.String() +} + +// SetCertificateBody sets the CertificateBody field's value. +func (s *ServerCertificate) SetCertificateBody(v string) *ServerCertificate { + s.CertificateBody = &v + return s +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *ServerCertificate) SetCertificateChain(v string) *ServerCertificate { + s.CertificateChain = &v + return s +} + +// SetServerCertificateMetadata sets the ServerCertificateMetadata field's value. +func (s *ServerCertificate) SetServerCertificateMetadata(v *ServerCertificateMetadata) *ServerCertificate { + s.ServerCertificateMetadata = v + return s +} + +// Contains information about a server certificate without its certificate body, +// certificate chain, and private key. +// +// This data type is used as a response element in the UploadServerCertificate +// and ListServerCertificates operations. +type ServerCertificateMetadata struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) specifying the server certificate. For more + // information about ARNs and how to use them in policies, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // The date on which the certificate is set to expire. + Expiration *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The path to the server certificate. For more information about paths, see + // IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Path is a required field + Path *string `min:"1" type:"string" required:"true"` + + // The stable and unique string identifying the server certificate. For more + // information about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // ServerCertificateId is a required field + ServerCertificateId *string `min:"16" type:"string" required:"true"` + + // The name that identifies the server certificate. + // + // ServerCertificateName is a required field + ServerCertificateName *string `min:"1" type:"string" required:"true"` + + // The date when the server certificate was uploaded. + UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s ServerCertificateMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerCertificateMetadata) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *ServerCertificateMetadata) SetArn(v string) *ServerCertificateMetadata { + s.Arn = &v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *ServerCertificateMetadata) SetExpiration(v time.Time) *ServerCertificateMetadata { + s.Expiration = &v + return s +} + +// SetPath sets the Path field's value. 
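// Editor's sketch (caller-side Go, not part of this vendored SDK file): fetching a
// server certificate and reading the ServerCertificateMetadata described above
// (ARN and expiration). The GetServerCertificate input shape is assumed from the
// operation named in the comments; the certificate name is hypothetical.
func exampleGetServerCertificate() {
	svc := iam.New(session.Must(session.NewSession()))

	out, err := svc.GetServerCertificate(&iam.GetServerCertificateInput{
		ServerCertificateName: aws.String("prod-example-com"), // hypothetical name
	})
	if err != nil {
		log.Fatal(err)
	}

	meta := out.ServerCertificate.ServerCertificateMetadata
	fmt.Println(aws.StringValue(meta.Arn))
	fmt.Println("expires:", aws.TimeValue(meta.Expiration))
}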
+func (s *ServerCertificateMetadata) SetPath(v string) *ServerCertificateMetadata { + s.Path = &v + return s +} + +// SetServerCertificateId sets the ServerCertificateId field's value. +func (s *ServerCertificateMetadata) SetServerCertificateId(v string) *ServerCertificateMetadata { + s.ServerCertificateId = &v + return s +} + +// SetServerCertificateName sets the ServerCertificateName field's value. +func (s *ServerCertificateMetadata) SetServerCertificateName(v string) *ServerCertificateMetadata { + s.ServerCertificateName = &v + return s +} + +// SetUploadDate sets the UploadDate field's value. +func (s *ServerCertificateMetadata) SetUploadDate(v time.Time) *ServerCertificateMetadata { + s.UploadDate = &v + return s +} + +// Contains the details of a service-specific credential. +type ServiceSpecificCredential struct { + _ struct{} `type:"structure"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the service-specific credential were created. + // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The name of the service associated with the service-specific credential. + // + // ServiceName is a required field + ServiceName *string `type:"string" required:"true"` + + // The generated password for the service-specific credential. + // + // ServicePassword is a required field + ServicePassword *string `type:"string" required:"true"` + + // The unique identifier for the service-specific credential. + // + // ServiceSpecificCredentialId is a required field + ServiceSpecificCredentialId *string `min:"20" type:"string" required:"true"` + + // The generated user name for the service-specific credential. This value is + // generated by combining the IAM user's name combined with the ID number of + // the AWS account, as in jane-at-123456789012, for example. This value cannot + // be configured by the user. + // + // ServiceUserName is a required field + ServiceUserName *string `min:"17" type:"string" required:"true"` + + // The status of the service-specific credential. Active means that the key + // is valid for API calls, while Inactive means it is not. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user associated with the service-specific credential. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ServiceSpecificCredential) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceSpecificCredential) GoString() string { + return s.String() +} + +// SetCreateDate sets the CreateDate field's value. +func (s *ServiceSpecificCredential) SetCreateDate(v time.Time) *ServiceSpecificCredential { + s.CreateDate = &v + return s +} + +// SetServiceName sets the ServiceName field's value. +func (s *ServiceSpecificCredential) SetServiceName(v string) *ServiceSpecificCredential { + s.ServiceName = &v + return s +} + +// SetServicePassword sets the ServicePassword field's value. +func (s *ServiceSpecificCredential) SetServicePassword(v string) *ServiceSpecificCredential { + s.ServicePassword = &v + return s +} + +// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. 
+func (s *ServiceSpecificCredential) SetServiceSpecificCredentialId(v string) *ServiceSpecificCredential { + s.ServiceSpecificCredentialId = &v + return s +} + +// SetServiceUserName sets the ServiceUserName field's value. +func (s *ServiceSpecificCredential) SetServiceUserName(v string) *ServiceSpecificCredential { + s.ServiceUserName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ServiceSpecificCredential) SetStatus(v string) *ServiceSpecificCredential { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ServiceSpecificCredential) SetUserName(v string) *ServiceSpecificCredential { + s.UserName = &v + return s +} + +// Contains additional details about a service-specific credential. +type ServiceSpecificCredentialMetadata struct { + _ struct{} `type:"structure"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the service-specific credential were created. + // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The name of the service associated with the service-specific credential. + // + // ServiceName is a required field + ServiceName *string `type:"string" required:"true"` + + // The unique identifier for the service-specific credential. + // + // ServiceSpecificCredentialId is a required field + ServiceSpecificCredentialId *string `min:"20" type:"string" required:"true"` + + // The generated user name for the service-specific credential. + // + // ServiceUserName is a required field + ServiceUserName *string `min:"17" type:"string" required:"true"` + + // The status of the service-specific credential. Active means that the key + // is valid for API calls, while Inactive means it is not. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user associated with the service-specific credential. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ServiceSpecificCredentialMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceSpecificCredentialMetadata) GoString() string { + return s.String() +} + +// SetCreateDate sets the CreateDate field's value. +func (s *ServiceSpecificCredentialMetadata) SetCreateDate(v time.Time) *ServiceSpecificCredentialMetadata { + s.CreateDate = &v + return s +} + +// SetServiceName sets the ServiceName field's value. +func (s *ServiceSpecificCredentialMetadata) SetServiceName(v string) *ServiceSpecificCredentialMetadata { + s.ServiceName = &v + return s +} + +// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. +func (s *ServiceSpecificCredentialMetadata) SetServiceSpecificCredentialId(v string) *ServiceSpecificCredentialMetadata { + s.ServiceSpecificCredentialId = &v + return s +} + +// SetServiceUserName sets the ServiceUserName field's value. +func (s *ServiceSpecificCredentialMetadata) SetServiceUserName(v string) *ServiceSpecificCredentialMetadata { + s.ServiceUserName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ServiceSpecificCredentialMetadata) SetStatus(v string) *ServiceSpecificCredentialMetadata { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. 
+func (s *ServiceSpecificCredentialMetadata) SetUserName(v string) *ServiceSpecificCredentialMetadata { + s.UserName = &v + return s +} + +type SetDefaultPolicyVersionInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy whose default version you + // want to set. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The version of the policy to set as the default (operative) version. + // + // For more information about managed policy versions, see Versioning for Managed + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the IAM User Guide. + // + // VersionId is a required field + VersionId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s SetDefaultPolicyVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetDefaultPolicyVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetDefaultPolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetDefaultPolicyVersionInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.VersionId == nil { + invalidParams.Add(request.NewErrParamRequired("VersionId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *SetDefaultPolicyVersionInput) SetPolicyArn(v string) *SetDefaultPolicyVersionInput { + s.PolicyArn = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *SetDefaultPolicyVersionInput) SetVersionId(v string) *SetDefaultPolicyVersionInput { + s.VersionId = &v + return s +} + +type SetDefaultPolicyVersionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SetDefaultPolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetDefaultPolicyVersionOutput) GoString() string { + return s.String() +} + +// Contains information about an X.509 signing certificate. +// +// This data type is used as a response element in the UploadSigningCertificate +// and ListSigningCertificates operations. +type SigningCertificate struct { + _ struct{} `type:"structure"` + + // The contents of the signing certificate. + // + // CertificateBody is a required field + CertificateBody *string `min:"1" type:"string" required:"true"` + + // The ID for the signing certificate. + // + // CertificateId is a required field + CertificateId *string `min:"24" type:"string" required:"true"` + + // The status of the signing certificate. Active means that the key is valid + // for API calls, while Inactive means it is not. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The date when the signing certificate was uploaded. 
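// Editor's sketch (caller-side Go, not part of this vendored SDK file): promoting an
// existing managed policy version to the default (operative) version with
// SetDefaultPolicyVersion, per the input described above. The policy ARN and
// version ID are hypothetical placeholders.
func exampleSetDefaultPolicyVersion() {
	svc := iam.New(session.Must(session.NewSession()))

	_, err := svc.SetDefaultPolicyVersion(&iam.SetDefaultPolicyVersionInput{
		PolicyArn: aws.String("arn:aws:iam::123456789012:policy/ExamplePolicy"), // hypothetical ARN
		VersionId: aws.String("v2"),                                             // hypothetical version
	})
	if err != nil {
		log.Fatal(err)
	}
}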
+ UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The name of the user the signing certificate is associated with. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s SigningCertificate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SigningCertificate) GoString() string { + return s.String() +} + +// SetCertificateBody sets the CertificateBody field's value. +func (s *SigningCertificate) SetCertificateBody(v string) *SigningCertificate { + s.CertificateBody = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. +func (s *SigningCertificate) SetCertificateId(v string) *SigningCertificate { + s.CertificateId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *SigningCertificate) SetStatus(v string) *SigningCertificate { + s.Status = &v + return s +} + +// SetUploadDate sets the UploadDate field's value. +func (s *SigningCertificate) SetUploadDate(v time.Time) *SigningCertificate { + s.UploadDate = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *SigningCertificate) SetUserName(v string) *SigningCertificate { + s.UserName = &v + return s +} + +type SimulateCustomPolicyInput struct { + _ struct{} `type:"structure"` + + // A list of names of API operations to evaluate in the simulation. Each operation + // is evaluated against each resource. Each operation must include the service + // identifier, such as iam:CreateUser. + // + // ActionNames is a required field + ActionNames []*string `type:"list" required:"true"` + + // The ARN of the IAM user that you want to use as the simulated caller of the + // API operations. CallerArn is required if you include a ResourcePolicy so + // that the policy's Principal element has a value to use in evaluating the + // policy. + // + // You can specify only the ARN of an IAM user. You cannot specify the ARN of + // an assumed role, federated user, or a service principal. + CallerArn *string `min:"1" type:"string"` + + // A list of context keys and corresponding values for the simulation to use. + // Whenever a context key is evaluated in one of the simulated IAM permission + // policies, the corresponding value is supplied. + ContextEntries []*ContextEntry `type:"list"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // A list of policy documents to include in the simulation. Each document is + // specified as a string containing the complete, valid JSON text of an IAM + // policy. 
Do not include any resource-based policies in this parameter. Any + // resource-based policy must be submitted with the ResourcePolicy parameter. + // The policies cannot be "scope-down" policies, such as you could include in + // a call to GetFederationToken (http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetFederationToken.html) + // or one of the AssumeRole (http://docs.aws.amazon.com/IAM/latest/APIReference/API_AssumeRole.html) + // API operations. In other words, do not use policies designed to restrict + // what a user can do while using the temporary credentials. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyInputList is a required field + PolicyInputList []*string `type:"list" required:"true"` + + // A list of ARNs of AWS resources to include in the simulation. If this parameter + // is not provided then the value defaults to * (all resources). Each API in + // the ActionNames parameter is evaluated for each resource in this list. The + // simulation determines the access result (allowed or denied) of each combination + // and reports it in the response. + // + // The simulation does not automatically retrieve policies for the specified + // resources. If you want to include a resource policy in the simulation, then + // you must include the policy as a string in the ResourcePolicy parameter. + // + // If you include a ResourcePolicy, then it must be applicable to all of the + // resources included in the simulation or you receive an invalid input error. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + ResourceArns []*string `type:"list"` + + // Specifies the type of simulation to run. Different API operations that support + // resource-based policies require different combinations of resources. By specifying + // the type of simulation to run, you enable the policy simulator to enforce + // the presence of the required resources to ensure reliable simulation results. + // If your simulation does not match one of the following scenarios, then you + // can omit this parameter. The following list shows each of the supported scenario + // values and the resources that you must define to run the simulation. + // + // Each of the EC2 scenarios requires that you specify instance, image, and + // security-group resources. If your scenario includes an EBS volume, then you + // must specify that volume as a resource. If the EC2 scenario includes VPC, + // then you must supply the network-interface resource. If it includes an IP + // subnet, then you must specify the subnet resource. For more information on + // the EC2 scenario options, see Supported Platforms (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) + // in the Amazon EC2 User Guide. 
+ // + // * EC2-Classic-InstanceStore + // + // instance, image, security-group + // + // * EC2-Classic-EBS + // + // instance, image, security-group, volume + // + // * EC2-VPC-InstanceStore + // + // instance, image, security-group, network-interface + // + // * EC2-VPC-InstanceStore-Subnet + // + // instance, image, security-group, network-interface, subnet + // + // * EC2-VPC-EBS + // + // instance, image, security-group, network-interface, volume + // + // * EC2-VPC-EBS-Subnet + // + // instance, image, security-group, network-interface, subnet, volume + ResourceHandlingOption *string `min:"1" type:"string"` + + // An AWS account ID that specifies the owner of any simulated resource that + // does not identify its owner in the resource ARN, such as an S3 bucket or + // object. If ResourceOwner is specified, it is also used as the account owner + // of any ResourcePolicy included in the simulation. If the ResourceOwner parameter + // is not specified, then the owner of the resources and the resource policy + // defaults to the account of the identity provided in CallerArn. This parameter + // is required only if you specify a resource-based policy and account that + // owns the resource is different from the account that owns the simulated calling + // user CallerArn. + ResourceOwner *string `min:"1" type:"string"` + + // A resource-based policy to include in the simulation provided as a string. + // Each resource in the simulation is treated as if it had this policy attached. + // You can include only one resource-based policy in a simulation. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + ResourcePolicy *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s SimulateCustomPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SimulateCustomPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *SimulateCustomPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SimulateCustomPolicyInput"} + if s.ActionNames == nil { + invalidParams.Add(request.NewErrParamRequired("ActionNames")) + } + if s.CallerArn != nil && len(*s.CallerArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CallerArn", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PolicyInputList == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyInputList")) + } + if s.ResourceHandlingOption != nil && len(*s.ResourceHandlingOption) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceHandlingOption", 1)) + } + if s.ResourceOwner != nil && len(*s.ResourceOwner) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceOwner", 1)) + } + if s.ResourcePolicy != nil && len(*s.ResourcePolicy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourcePolicy", 1)) + } + if s.ContextEntries != nil { + for i, v := range s.ContextEntries { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ContextEntries", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActionNames sets the ActionNames field's value. +func (s *SimulateCustomPolicyInput) SetActionNames(v []*string) *SimulateCustomPolicyInput { + s.ActionNames = v + return s +} + +// SetCallerArn sets the CallerArn field's value. +func (s *SimulateCustomPolicyInput) SetCallerArn(v string) *SimulateCustomPolicyInput { + s.CallerArn = &v + return s +} + +// SetContextEntries sets the ContextEntries field's value. +func (s *SimulateCustomPolicyInput) SetContextEntries(v []*ContextEntry) *SimulateCustomPolicyInput { + s.ContextEntries = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *SimulateCustomPolicyInput) SetMarker(v string) *SimulateCustomPolicyInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *SimulateCustomPolicyInput) SetMaxItems(v int64) *SimulateCustomPolicyInput { + s.MaxItems = &v + return s +} + +// SetPolicyInputList sets the PolicyInputList field's value. +func (s *SimulateCustomPolicyInput) SetPolicyInputList(v []*string) *SimulateCustomPolicyInput { + s.PolicyInputList = v + return s +} + +// SetResourceArns sets the ResourceArns field's value. +func (s *SimulateCustomPolicyInput) SetResourceArns(v []*string) *SimulateCustomPolicyInput { + s.ResourceArns = v + return s +} + +// SetResourceHandlingOption sets the ResourceHandlingOption field's value. +func (s *SimulateCustomPolicyInput) SetResourceHandlingOption(v string) *SimulateCustomPolicyInput { + s.ResourceHandlingOption = &v + return s +} + +// SetResourceOwner sets the ResourceOwner field's value. +func (s *SimulateCustomPolicyInput) SetResourceOwner(v string) *SimulateCustomPolicyInput { + s.ResourceOwner = &v + return s +} + +// SetResourcePolicy sets the ResourcePolicy field's value. +func (s *SimulateCustomPolicyInput) SetResourcePolicy(v string) *SimulateCustomPolicyInput { + s.ResourcePolicy = &v + return s +} + +// Contains the response to a successful SimulatePrincipalPolicy or SimulateCustomPolicy +// request. +type SimulatePolicyResponse struct { + _ struct{} `type:"structure"` + + // The results of the simulation. 
+ EvaluationResults []*EvaluationResult `type:"list"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s SimulatePolicyResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SimulatePolicyResponse) GoString() string { + return s.String() +} + +// SetEvaluationResults sets the EvaluationResults field's value. +func (s *SimulatePolicyResponse) SetEvaluationResults(v []*EvaluationResult) *SimulatePolicyResponse { + s.EvaluationResults = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *SimulatePolicyResponse) SetIsTruncated(v bool) *SimulatePolicyResponse { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *SimulatePolicyResponse) SetMarker(v string) *SimulatePolicyResponse { + s.Marker = &v + return s +} + +type SimulatePrincipalPolicyInput struct { + _ struct{} `type:"structure"` + + // A list of names of API operations to evaluate in the simulation. Each operation + // is evaluated for each resource. Each operation must include the service identifier, + // such as iam:CreateUser. + // + // ActionNames is a required field + ActionNames []*string `type:"list" required:"true"` + + // The ARN of the IAM user that you want to specify as the simulated caller + // of the API operations. If you do not specify a CallerArn, it defaults to + // the ARN of the user that you specify in PolicySourceArn, if you specified + // a user. If you include both a PolicySourceArn (for example, arn:aws:iam::123456789012:user/David) + // and a CallerArn (for example, arn:aws:iam::123456789012:user/Bob), the result + // is that you simulate calling the API operations as Bob, as if Bob had David's + // policies. + // + // You can specify only the ARN of an IAM user. You cannot specify the ARN of + // an assumed role, federated user, or a service principal. + // + // CallerArn is required if you include a ResourcePolicy and the PolicySourceArn + // is not the ARN for an IAM user. This is required so that the resource-based + // policy's Principal element has a value to use in evaluating the policy. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + CallerArn *string `min:"1" type:"string"` + + // A list of context keys and corresponding values for the simulation to use. + // Whenever a context key is evaluated in one of the simulated IAM permission + // policies, the corresponding value is supplied. + ContextEntries []*ContextEntry `type:"list"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. 
Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // An optional list of additional policy documents to include in the simulation. + // Each document is specified as a string containing the complete, valid JSON + // text of an IAM policy. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + PolicyInputList []*string `type:"list"` + + // The Amazon Resource Name (ARN) of a user, group, or role whose policies you + // want to include in the simulation. If you specify a user, group, or role, + // the simulation includes all policies that are associated with that entity. + // If you specify a user, the simulation also includes all policies that are + // attached to any groups the user belongs to. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicySourceArn is a required field + PolicySourceArn *string `min:"20" type:"string" required:"true"` + + // A list of ARNs of AWS resources to include in the simulation. If this parameter + // is not provided, then the value defaults to * (all resources). Each API in + // the ActionNames parameter is evaluated for each resource in this list. The + // simulation determines the access result (allowed or denied) of each combination + // and reports it in the response. + // + // The simulation does not automatically retrieve policies for the specified + // resources. If you want to include a resource policy in the simulation, then + // you must include the policy as a string in the ResourcePolicy parameter. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + ResourceArns []*string `type:"list"` + + // Specifies the type of simulation to run. Different API operations that support + // resource-based policies require different combinations of resources. By specifying + // the type of simulation to run, you enable the policy simulator to enforce + // the presence of the required resources to ensure reliable simulation results. + // If your simulation does not match one of the following scenarios, then you + // can omit this parameter. 
The following list shows each of the supported scenario + // values and the resources that you must define to run the simulation. + // + // Each of the EC2 scenarios requires that you specify instance, image, and + // security-group resources. If your scenario includes an EBS volume, then you + // must specify that volume as a resource. If the EC2 scenario includes VPC, + // then you must supply the network-interface resource. If it includes an IP + // subnet, then you must specify the subnet resource. For more information on + // the EC2 scenario options, see Supported Platforms (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) + // in the Amazon EC2 User Guide. + // + // * EC2-Classic-InstanceStore + // + // instance, image, security-group + // + // * EC2-Classic-EBS + // + // instance, image, security-group, volume + // + // * EC2-VPC-InstanceStore + // + // instance, image, security-group, network-interface + // + // * EC2-VPC-InstanceStore-Subnet + // + // instance, image, security-group, network-interface, subnet + // + // * EC2-VPC-EBS + // + // instance, image, security-group, network-interface, volume + // + // * EC2-VPC-EBS-Subnet + // + // instance, image, security-group, network-interface, subnet, volume + ResourceHandlingOption *string `min:"1" type:"string"` + + // An AWS account ID that specifies the owner of any simulated resource that + // does not identify its owner in the resource ARN, such as an S3 bucket or + // object. If ResourceOwner is specified, it is also used as the account owner + // of any ResourcePolicy included in the simulation. If the ResourceOwner parameter + // is not specified, then the owner of the resources and the resource policy + // defaults to the account of the identity provided in CallerArn. This parameter + // is required only if you specify a resource-based policy and account that + // owns the resource is different from the account that owns the simulated calling + // user CallerArn. + ResourceOwner *string `min:"1" type:"string"` + + // A resource-based policy to include in the simulation provided as a string. + // Each resource in the simulation is treated as if it had this policy attached. + // You can include only one resource-based policy in a simulation. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + ResourcePolicy *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s SimulatePrincipalPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SimulatePrincipalPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *SimulatePrincipalPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SimulatePrincipalPolicyInput"} + if s.ActionNames == nil { + invalidParams.Add(request.NewErrParamRequired("ActionNames")) + } + if s.CallerArn != nil && len(*s.CallerArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CallerArn", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PolicySourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicySourceArn")) + } + if s.PolicySourceArn != nil && len(*s.PolicySourceArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicySourceArn", 20)) + } + if s.ResourceHandlingOption != nil && len(*s.ResourceHandlingOption) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceHandlingOption", 1)) + } + if s.ResourceOwner != nil && len(*s.ResourceOwner) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceOwner", 1)) + } + if s.ResourcePolicy != nil && len(*s.ResourcePolicy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourcePolicy", 1)) + } + if s.ContextEntries != nil { + for i, v := range s.ContextEntries { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ContextEntries", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActionNames sets the ActionNames field's value. +func (s *SimulatePrincipalPolicyInput) SetActionNames(v []*string) *SimulatePrincipalPolicyInput { + s.ActionNames = v + return s +} + +// SetCallerArn sets the CallerArn field's value. +func (s *SimulatePrincipalPolicyInput) SetCallerArn(v string) *SimulatePrincipalPolicyInput { + s.CallerArn = &v + return s +} + +// SetContextEntries sets the ContextEntries field's value. +func (s *SimulatePrincipalPolicyInput) SetContextEntries(v []*ContextEntry) *SimulatePrincipalPolicyInput { + s.ContextEntries = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *SimulatePrincipalPolicyInput) SetMarker(v string) *SimulatePrincipalPolicyInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *SimulatePrincipalPolicyInput) SetMaxItems(v int64) *SimulatePrincipalPolicyInput { + s.MaxItems = &v + return s +} + +// SetPolicyInputList sets the PolicyInputList field's value. +func (s *SimulatePrincipalPolicyInput) SetPolicyInputList(v []*string) *SimulatePrincipalPolicyInput { + s.PolicyInputList = v + return s +} + +// SetPolicySourceArn sets the PolicySourceArn field's value. +func (s *SimulatePrincipalPolicyInput) SetPolicySourceArn(v string) *SimulatePrincipalPolicyInput { + s.PolicySourceArn = &v + return s +} + +// SetResourceArns sets the ResourceArns field's value. +func (s *SimulatePrincipalPolicyInput) SetResourceArns(v []*string) *SimulatePrincipalPolicyInput { + s.ResourceArns = v + return s +} + +// SetResourceHandlingOption sets the ResourceHandlingOption field's value. +func (s *SimulatePrincipalPolicyInput) SetResourceHandlingOption(v string) *SimulatePrincipalPolicyInput { + s.ResourceHandlingOption = &v + return s +} + +// SetResourceOwner sets the ResourceOwner field's value. 
+func (s *SimulatePrincipalPolicyInput) SetResourceOwner(v string) *SimulatePrincipalPolicyInput { + s.ResourceOwner = &v + return s +} + +// SetResourcePolicy sets the ResourcePolicy field's value. +func (s *SimulatePrincipalPolicyInput) SetResourcePolicy(v string) *SimulatePrincipalPolicyInput { + s.ResourcePolicy = &v + return s +} + +// Contains a reference to a Statement element in a policy document that determines +// the result of the simulation. +// +// This data type is used by the MatchedStatements member of the EvaluationResult +// type. +type Statement struct { + _ struct{} `type:"structure"` + + // The row and column of the end of a Statement in an IAM policy. + EndPosition *Position `type:"structure"` + + // The identifier of the policy that was provided as an input. + SourcePolicyId *string `type:"string"` + + // The type of the policy. + SourcePolicyType *string `type:"string" enum:"PolicySourceType"` + + // The row and column of the beginning of the Statement in an IAM policy. + StartPosition *Position `type:"structure"` +} + +// String returns the string representation +func (s Statement) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Statement) GoString() string { + return s.String() +} + +// SetEndPosition sets the EndPosition field's value. +func (s *Statement) SetEndPosition(v *Position) *Statement { + s.EndPosition = v + return s +} + +// SetSourcePolicyId sets the SourcePolicyId field's value. +func (s *Statement) SetSourcePolicyId(v string) *Statement { + s.SourcePolicyId = &v + return s +} + +// SetSourcePolicyType sets the SourcePolicyType field's value. +func (s *Statement) SetSourcePolicyType(v string) *Statement { + s.SourcePolicyType = &v + return s +} + +// SetStartPosition sets the StartPosition field's value. +func (s *Statement) SetStartPosition(v *Position) *Statement { + s.StartPosition = v + return s +} + +type UpdateAccessKeyInput struct { + _ struct{} `type:"structure"` + + // The access key ID of the secret access key you want to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // AccessKeyId is a required field + AccessKeyId *string `min:"16" type:"string" required:"true"` + + // The status you want to assign to the secret access key. Active means that + // the key can be used for API calls to AWS, while Inactive means that the key + // cannot be used. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the user whose key you want to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateAccessKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAccessKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateAccessKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAccessKeyInput"} + if s.AccessKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("AccessKeyId")) + } + if s.AccessKeyId != nil && len(*s.AccessKeyId) < 16 { + invalidParams.Add(request.NewErrParamMinLen("AccessKeyId", 16)) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *UpdateAccessKeyInput) SetAccessKeyId(v string) *UpdateAccessKeyInput { + s.AccessKeyId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *UpdateAccessKeyInput) SetStatus(v string) *UpdateAccessKeyInput { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UpdateAccessKeyInput) SetUserName(v string) *UpdateAccessKeyInput { + s.UserName = &v + return s +} + +type UpdateAccessKeyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateAccessKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAccessKeyOutput) GoString() string { + return s.String() +} + +type UpdateAccountPasswordPolicyInput struct { + _ struct{} `type:"structure"` + + // Allows all IAM users in your account to use the AWS Management Console to + // change their own passwords. For more information, see Letting IAM Users Change + // Their Own Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/HowToPwdIAMUser.html) + // in the IAM User Guide. + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that IAM users in the account do + // not automatically have permissions to change their own password. + AllowUsersToChangePassword *bool `type:"boolean"` + + // Prevents IAM users from setting a new password after their password has expired. + // The IAM user cannot be accessed until an administrator resets the password. + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that IAM users can change their + // passwords after they expire and continue to sign in as the user. + HardExpiry *bool `type:"boolean"` + + // The number of days that an IAM user password is valid. + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of 0. The result is that IAM user passwords never expire. + MaxPasswordAge *int64 `min:"1" type:"integer"` + + // The minimum number of characters allowed in an IAM user password. + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of 6. + MinimumPasswordLength *int64 `min:"6" type:"integer"` + + // Specifies the number of previous passwords that IAM users are prevented from + // reusing. + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of 0. The result is that IAM users are not prevented from + // reusing previous passwords. + PasswordReusePrevention *int64 `min:"1" type:"integer"` + + // Specifies whether IAM user passwords must contain at least one lowercase + // character from the ISO basic Latin alphabet (a to z). 
+ // + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one lowercase character. + RequireLowercaseCharacters *bool `type:"boolean"` + + // Specifies whether IAM user passwords must contain at least one numeric character + // (0 to 9). + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one numeric character. + RequireNumbers *bool `type:"boolean"` + + // Specifies whether IAM user passwords must contain at least one of the following + // non-alphanumeric characters: + // + // ! @ # $ % ^ & * ( ) _ + - = [ ] { } | ' + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one symbol character. + RequireSymbols *bool `type:"boolean"` + + // Specifies whether IAM user passwords must contain at least one uppercase + // character from the ISO basic Latin alphabet (A to Z). + // + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one uppercase character. + RequireUppercaseCharacters *bool `type:"boolean"` +} + +// String returns the string representation +func (s UpdateAccountPasswordPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAccountPasswordPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateAccountPasswordPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAccountPasswordPolicyInput"} + if s.MaxPasswordAge != nil && *s.MaxPasswordAge < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxPasswordAge", 1)) + } + if s.MinimumPasswordLength != nil && *s.MinimumPasswordLength < 6 { + invalidParams.Add(request.NewErrParamMinValue("MinimumPasswordLength", 6)) + } + if s.PasswordReusePrevention != nil && *s.PasswordReusePrevention < 1 { + invalidParams.Add(request.NewErrParamMinValue("PasswordReusePrevention", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowUsersToChangePassword sets the AllowUsersToChangePassword field's value. +func (s *UpdateAccountPasswordPolicyInput) SetAllowUsersToChangePassword(v bool) *UpdateAccountPasswordPolicyInput { + s.AllowUsersToChangePassword = &v + return s +} + +// SetHardExpiry sets the HardExpiry field's value. +func (s *UpdateAccountPasswordPolicyInput) SetHardExpiry(v bool) *UpdateAccountPasswordPolicyInput { + s.HardExpiry = &v + return s +} + +// SetMaxPasswordAge sets the MaxPasswordAge field's value. +func (s *UpdateAccountPasswordPolicyInput) SetMaxPasswordAge(v int64) *UpdateAccountPasswordPolicyInput { + s.MaxPasswordAge = &v + return s +} + +// SetMinimumPasswordLength sets the MinimumPasswordLength field's value. +func (s *UpdateAccountPasswordPolicyInput) SetMinimumPasswordLength(v int64) *UpdateAccountPasswordPolicyInput { + s.MinimumPasswordLength = &v + return s +} + +// SetPasswordReusePrevention sets the PasswordReusePrevention field's value. 
+func (s *UpdateAccountPasswordPolicyInput) SetPasswordReusePrevention(v int64) *UpdateAccountPasswordPolicyInput { + s.PasswordReusePrevention = &v + return s +} + +// SetRequireLowercaseCharacters sets the RequireLowercaseCharacters field's value. +func (s *UpdateAccountPasswordPolicyInput) SetRequireLowercaseCharacters(v bool) *UpdateAccountPasswordPolicyInput { + s.RequireLowercaseCharacters = &v + return s +} + +// SetRequireNumbers sets the RequireNumbers field's value. +func (s *UpdateAccountPasswordPolicyInput) SetRequireNumbers(v bool) *UpdateAccountPasswordPolicyInput { + s.RequireNumbers = &v + return s +} + +// SetRequireSymbols sets the RequireSymbols field's value. +func (s *UpdateAccountPasswordPolicyInput) SetRequireSymbols(v bool) *UpdateAccountPasswordPolicyInput { + s.RequireSymbols = &v + return s +} + +// SetRequireUppercaseCharacters sets the RequireUppercaseCharacters field's value. +func (s *UpdateAccountPasswordPolicyInput) SetRequireUppercaseCharacters(v bool) *UpdateAccountPasswordPolicyInput { + s.RequireUppercaseCharacters = &v + return s +} + +type UpdateAccountPasswordPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateAccountPasswordPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAccountPasswordPolicyOutput) GoString() string { + return s.String() +} + +type UpdateAssumeRolePolicyInput struct { + _ struct{} `type:"structure"` + + // The policy that grants an entity permission to assume the role. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyDocument is a required field + PolicyDocument *string `min:"1" type:"string" required:"true"` + + // The name of the role to update with the new policy. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateAssumeRolePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAssumeRolePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateAssumeRolePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAssumeRolePolicyInput"} + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *UpdateAssumeRolePolicyInput) SetPolicyDocument(v string) *UpdateAssumeRolePolicyInput { + s.PolicyDocument = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *UpdateAssumeRolePolicyInput) SetRoleName(v string) *UpdateAssumeRolePolicyInput { + s.RoleName = &v + return s +} + +type UpdateAssumeRolePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateAssumeRolePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAssumeRolePolicyOutput) GoString() string { + return s.String() +} + +type UpdateGroupInput struct { + _ struct{} `type:"structure"` + + // Name of the IAM group to update. If you're changing the name of the group, + // this is the original name. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // New name for the IAM group. Only include this if changing the group's name. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + NewGroupName *string `min:"1" type:"string"` + + // New path for the IAM group. Only include this if changing the group's path. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + NewPath *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.NewGroupName != nil && len(*s.NewGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewGroupName", 1)) + } + if s.NewPath != nil && len(*s.NewPath) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewPath", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *UpdateGroupInput) SetGroupName(v string) *UpdateGroupInput { + s.GroupName = &v + return s +} + +// SetNewGroupName sets the NewGroupName field's value. +func (s *UpdateGroupInput) SetNewGroupName(v string) *UpdateGroupInput { + s.NewGroupName = &v + return s +} + +// SetNewPath sets the NewPath field's value. +func (s *UpdateGroupInput) SetNewPath(v string) *UpdateGroupInput { + s.NewPath = &v + return s +} + +type UpdateGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGroupOutput) GoString() string { + return s.String() +} + +type UpdateLoginProfileInput struct { + _ struct{} `type:"structure"` + + // The new password for the specified IAM user. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // However, the format can be further restricted by the account administrator + // by setting a password policy on the AWS account. For more information, see + // UpdateAccountPasswordPolicy. + Password *string `min:"1" type:"string"` + + // Allows this new password to be used only once by requiring the specified + // IAM user to set a new password on next sign-in. + PasswordResetRequired *bool `type:"boolean"` + + // The name of the user whose password you want to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateLoginProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateLoginProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateLoginProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateLoginProfileInput"} + if s.Password != nil && len(*s.Password) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Password", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPassword sets the Password field's value. +func (s *UpdateLoginProfileInput) SetPassword(v string) *UpdateLoginProfileInput { + s.Password = &v + return s +} + +// SetPasswordResetRequired sets the PasswordResetRequired field's value. +func (s *UpdateLoginProfileInput) SetPasswordResetRequired(v bool) *UpdateLoginProfileInput { + s.PasswordResetRequired = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UpdateLoginProfileInput) SetUserName(v string) *UpdateLoginProfileInput { + s.UserName = &v + return s +} + +type UpdateLoginProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateLoginProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateLoginProfileOutput) GoString() string { + return s.String() +} + +type UpdateOpenIDConnectProviderThumbprintInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM OIDC provider resource object for + // which you want to update the thumbprint. You can get a list of OIDC provider + // ARNs by using the ListOpenIDConnectProviders operation. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // OpenIDConnectProviderArn is a required field + OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` + + // A list of certificate thumbprints that are associated with the specified + // IAM OpenID Connect provider. For more information, see CreateOpenIDConnectProvider. + // + // ThumbprintList is a required field + ThumbprintList []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s UpdateOpenIDConnectProviderThumbprintInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateOpenIDConnectProviderThumbprintInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateOpenIDConnectProviderThumbprintInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateOpenIDConnectProviderThumbprintInput"} + if s.OpenIDConnectProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("OpenIDConnectProviderArn")) + } + if s.OpenIDConnectProviderArn != nil && len(*s.OpenIDConnectProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("OpenIDConnectProviderArn", 20)) + } + if s.ThumbprintList == nil { + invalidParams.Add(request.NewErrParamRequired("ThumbprintList")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOpenIDConnectProviderArn sets the OpenIDConnectProviderArn field's value. 
+func (s *UpdateOpenIDConnectProviderThumbprintInput) SetOpenIDConnectProviderArn(v string) *UpdateOpenIDConnectProviderThumbprintInput { + s.OpenIDConnectProviderArn = &v + return s +} + +// SetThumbprintList sets the ThumbprintList field's value. +func (s *UpdateOpenIDConnectProviderThumbprintInput) SetThumbprintList(v []*string) *UpdateOpenIDConnectProviderThumbprintInput { + s.ThumbprintList = v + return s +} + +type UpdateOpenIDConnectProviderThumbprintOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateOpenIDConnectProviderThumbprintOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateOpenIDConnectProviderThumbprintOutput) GoString() string { + return s.String() +} + +type UpdateRoleDescriptionInput struct { + _ struct{} `type:"structure"` + + // The new description that you want to apply to the specified role. + // + // Description is a required field + Description *string `type:"string" required:"true"` + + // The name of the role that you want to modify. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateRoleDescriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRoleDescriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRoleDescriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRoleDescriptionInput"} + if s.Description == nil { + invalidParams.Add(request.NewErrParamRequired("Description")) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateRoleDescriptionInput) SetDescription(v string) *UpdateRoleDescriptionInput { + s.Description = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *UpdateRoleDescriptionInput) SetRoleName(v string) *UpdateRoleDescriptionInput { + s.RoleName = &v + return s +} + +type UpdateRoleDescriptionOutput struct { + _ struct{} `type:"structure"` + + // A structure that contains details about the modified role. + Role *Role `type:"structure"` +} + +// String returns the string representation +func (s UpdateRoleDescriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRoleDescriptionOutput) GoString() string { + return s.String() +} + +// SetRole sets the Role field's value. +func (s *UpdateRoleDescriptionOutput) SetRole(v *Role) *UpdateRoleDescriptionOutput { + s.Role = v + return s +} + +type UpdateRoleInput struct { + _ struct{} `type:"structure"` + + // The new description that you want to apply to the specified role. + Description *string `type:"string"` + + // The maximum session duration (in seconds) that you want to set for the specified + // role. If you do not specify a value for this setting, the default maximum + // of one hour is applied. This setting can have a value from 1 hour to 12 hours. 
+ // + // Anyone who assumes the role from the AWS CLI or API can use the DurationSeconds + // API parameter or the duration-seconds CLI parameter to request a longer session. + // The MaxSessionDuration setting determines the maximum duration that can be + // requested using the DurationSeconds parameter. If users don't specify a value + // for the DurationSeconds parameter, their security credentials are valid for + // one hour by default. This applies when you use the AssumeRole* API operations + // or the assume-role* CLI operations but does not apply when you use those + // operations to create a console URL. For more information, see Using IAM Roles + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) in the + // IAM User Guide. + MaxSessionDuration *int64 `min:"3600" type:"integer"` + + // The name of the role that you want to modify. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRoleInput"} + if s.MaxSessionDuration != nil && *s.MaxSessionDuration < 3600 { + invalidParams.Add(request.NewErrParamMinValue("MaxSessionDuration", 3600)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateRoleInput) SetDescription(v string) *UpdateRoleInput { + s.Description = &v + return s +} + +// SetMaxSessionDuration sets the MaxSessionDuration field's value. +func (s *UpdateRoleInput) SetMaxSessionDuration(v int64) *UpdateRoleInput { + s.MaxSessionDuration = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *UpdateRoleInput) SetRoleName(v string) *UpdateRoleInput { + s.RoleName = &v + return s +} + +type UpdateRoleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRoleOutput) GoString() string { + return s.String() +} + +type UpdateSAMLProviderInput struct { + _ struct{} `type:"structure"` + + // An XML document generated by an identity provider (IdP) that supports SAML + // 2.0. The document includes the issuer's name, expiration information, and + // keys that can be used to validate the SAML authentication response (assertions) + // that are received from the IdP. You must generate the metadata document using + // the identity management software that is used as your organization's IdP. + // + // SAMLMetadataDocument is a required field + SAMLMetadataDocument *string `min:"1000" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the SAML provider to update. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. 
+ // + // SAMLProviderArn is a required field + SAMLProviderArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateSAMLProviderInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSAMLProviderInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSAMLProviderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSAMLProviderInput"} + if s.SAMLMetadataDocument == nil { + invalidParams.Add(request.NewErrParamRequired("SAMLMetadataDocument")) + } + if s.SAMLMetadataDocument != nil && len(*s.SAMLMetadataDocument) < 1000 { + invalidParams.Add(request.NewErrParamMinLen("SAMLMetadataDocument", 1000)) + } + if s.SAMLProviderArn == nil { + invalidParams.Add(request.NewErrParamRequired("SAMLProviderArn")) + } + if s.SAMLProviderArn != nil && len(*s.SAMLProviderArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("SAMLProviderArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSAMLMetadataDocument sets the SAMLMetadataDocument field's value. +func (s *UpdateSAMLProviderInput) SetSAMLMetadataDocument(v string) *UpdateSAMLProviderInput { + s.SAMLMetadataDocument = &v + return s +} + +// SetSAMLProviderArn sets the SAMLProviderArn field's value. +func (s *UpdateSAMLProviderInput) SetSAMLProviderArn(v string) *UpdateSAMLProviderInput { + s.SAMLProviderArn = &v + return s +} + +// Contains the response to a successful UpdateSAMLProvider request. +type UpdateSAMLProviderOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the SAML provider that was updated. + SAMLProviderArn *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s UpdateSAMLProviderOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSAMLProviderOutput) GoString() string { + return s.String() +} + +// SetSAMLProviderArn sets the SAMLProviderArn field's value. +func (s *UpdateSAMLProviderOutput) SetSAMLProviderArn(v string) *UpdateSAMLProviderOutput { + s.SAMLProviderArn = &v + return s +} + +type UpdateSSHPublicKeyInput struct { + _ struct{} `type:"structure"` + + // The unique identifier for the SSH public key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // SSHPublicKeyId is a required field + SSHPublicKeyId *string `min:"20" type:"string" required:"true"` + + // The status to assign to the SSH public key. Active means that the key can + // be used for authentication with an AWS CodeCommit repository. Inactive means + // that the key cannot be used. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user associated with the SSH public key. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateSSHPublicKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSSHPublicKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSSHPublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSSHPublicKeyInput"} + if s.SSHPublicKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("SSHPublicKeyId")) + } + if s.SSHPublicKeyId != nil && len(*s.SSHPublicKeyId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("SSHPublicKeyId", 20)) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSSHPublicKeyId sets the SSHPublicKeyId field's value. +func (s *UpdateSSHPublicKeyInput) SetSSHPublicKeyId(v string) *UpdateSSHPublicKeyInput { + s.SSHPublicKeyId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *UpdateSSHPublicKeyInput) SetStatus(v string) *UpdateSSHPublicKeyInput { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UpdateSSHPublicKeyInput) SetUserName(v string) *UpdateSSHPublicKeyInput { + s.UserName = &v + return s +} + +type UpdateSSHPublicKeyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateSSHPublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSSHPublicKeyOutput) GoString() string { + return s.String() +} + +type UpdateServerCertificateInput struct { + _ struct{} `type:"structure"` + + // The new path for the server certificate. Include this only if you are updating + // the server certificate's path. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + NewPath *string `min:"1" type:"string"` + + // The new name for the server certificate. Include this only if you are updating + // the server certificate's name. The name of the certificate cannot contain + // any spaces. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + NewServerCertificateName *string `min:"1" type:"string"` + + // The name of the server certificate that you want to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // ServerCertificateName is a required field + ServerCertificateName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateServerCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateServerCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateServerCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateServerCertificateInput"} + if s.NewPath != nil && len(*s.NewPath) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewPath", 1)) + } + if s.NewServerCertificateName != nil && len(*s.NewServerCertificateName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewServerCertificateName", 1)) + } + if s.ServerCertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("ServerCertificateName")) + } + if s.ServerCertificateName != nil && len(*s.ServerCertificateName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServerCertificateName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNewPath sets the NewPath field's value. +func (s *UpdateServerCertificateInput) SetNewPath(v string) *UpdateServerCertificateInput { + s.NewPath = &v + return s +} + +// SetNewServerCertificateName sets the NewServerCertificateName field's value. +func (s *UpdateServerCertificateInput) SetNewServerCertificateName(v string) *UpdateServerCertificateInput { + s.NewServerCertificateName = &v + return s +} + +// SetServerCertificateName sets the ServerCertificateName field's value. +func (s *UpdateServerCertificateInput) SetServerCertificateName(v string) *UpdateServerCertificateInput { + s.ServerCertificateName = &v + return s +} + +type UpdateServerCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateServerCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateServerCertificateOutput) GoString() string { + return s.String() +} + +type UpdateServiceSpecificCredentialInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the service-specific credential. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // ServiceSpecificCredentialId is a required field + ServiceSpecificCredentialId *string `min:"20" type:"string" required:"true"` + + // The status to be assigned to the service-specific credential. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user associated with the service-specific credential. + // If you do not specify this value, then the operation assumes the user whose + // credentials are used to call the operation. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateServiceSpecificCredentialInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateServiceSpecificCredentialInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateServiceSpecificCredentialInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateServiceSpecificCredentialInput"} + if s.ServiceSpecificCredentialId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceSpecificCredentialId")) + } + if s.ServiceSpecificCredentialId != nil && len(*s.ServiceSpecificCredentialId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("ServiceSpecificCredentialId", 20)) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. +func (s *UpdateServiceSpecificCredentialInput) SetServiceSpecificCredentialId(v string) *UpdateServiceSpecificCredentialInput { + s.ServiceSpecificCredentialId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *UpdateServiceSpecificCredentialInput) SetStatus(v string) *UpdateServiceSpecificCredentialInput { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UpdateServiceSpecificCredentialInput) SetUserName(v string) *UpdateServiceSpecificCredentialInput { + s.UserName = &v + return s +} + +type UpdateServiceSpecificCredentialOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateServiceSpecificCredentialOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateServiceSpecificCredentialOutput) GoString() string { + return s.String() +} + +type UpdateSigningCertificateInput struct { + _ struct{} `type:"structure"` + + // The ID of the signing certificate you want to update. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that can consist of any upper or lowercased letter + // or digit. + // + // CertificateId is a required field + CertificateId *string `min:"24" type:"string" required:"true"` + + // The status you want to assign to the certificate. Active means that the certificate + // can be used for API calls to AWS Inactive means that the certificate cannot + // be used. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user the signing certificate belongs to. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateSigningCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSigningCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSigningCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSigningCertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 24 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 24)) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *UpdateSigningCertificateInput) SetCertificateId(v string) *UpdateSigningCertificateInput { + s.CertificateId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *UpdateSigningCertificateInput) SetStatus(v string) *UpdateSigningCertificateInput { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UpdateSigningCertificateInput) SetUserName(v string) *UpdateSigningCertificateInput { + s.UserName = &v + return s +} + +type UpdateSigningCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateSigningCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSigningCertificateOutput) GoString() string { + return s.String() +} + +type UpdateUserInput struct { + _ struct{} `type:"structure"` + + // New path for the IAM user. Include this parameter only if you're changing + // the user's path. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + NewPath *string `min:"1" type:"string"` + + // New name for the user. Include this parameter only if you're changing the + // user's name. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + NewUserName *string `min:"1" type:"string"` + + // Name of the user to update. If you're changing the name of the user, this + // is the original user name. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateUserInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateUserInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateUserInput"} + if s.NewPath != nil && len(*s.NewPath) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewPath", 1)) + } + if s.NewUserName != nil && len(*s.NewUserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewUserName", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNewPath sets the NewPath field's value. +func (s *UpdateUserInput) SetNewPath(v string) *UpdateUserInput { + s.NewPath = &v + return s +} + +// SetNewUserName sets the NewUserName field's value. +func (s *UpdateUserInput) SetNewUserName(v string) *UpdateUserInput { + s.NewUserName = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UpdateUserInput) SetUserName(v string) *UpdateUserInput { + s.UserName = &v + return s +} + +type UpdateUserOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateUserOutput) GoString() string { + return s.String() +} + +type UploadSSHPublicKeyInput struct { + _ struct{} `type:"structure"` + + // The SSH public key. The public key must be encoded in ssh-rsa format or PEM + // format. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // SSHPublicKeyBody is a required field + SSHPublicKeyBody *string `min:"1" type:"string" required:"true"` + + // The name of the IAM user to associate the SSH public key with. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UploadSSHPublicKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadSSHPublicKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
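+//
+// Validation is normally performed by the SDK before a request is sent, but it
+// can also be run directly. A minimal sketch (illustrative only; the user name
+// and key material are assumed values, not part of the generated API):
+//
+//    input := &iam.UploadSSHPublicKeyInput{
+//        UserName:         aws.String("example-user"),
+//        SSHPublicKeyBody: aws.String(publicKeyMaterial),
+//    }
+//    if err := input.Validate(); err != nil {
+//        // handle invalid parameters before calling UploadSSHPublicKey
+//    }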
+func (s *UploadSSHPublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UploadSSHPublicKeyInput"} + if s.SSHPublicKeyBody == nil { + invalidParams.Add(request.NewErrParamRequired("SSHPublicKeyBody")) + } + if s.SSHPublicKeyBody != nil && len(*s.SSHPublicKeyBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SSHPublicKeyBody", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSSHPublicKeyBody sets the SSHPublicKeyBody field's value. +func (s *UploadSSHPublicKeyInput) SetSSHPublicKeyBody(v string) *UploadSSHPublicKeyInput { + s.SSHPublicKeyBody = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UploadSSHPublicKeyInput) SetUserName(v string) *UploadSSHPublicKeyInput { + s.UserName = &v + return s +} + +// Contains the response to a successful UploadSSHPublicKey request. +type UploadSSHPublicKeyOutput struct { + _ struct{} `type:"structure"` + + // Contains information about the SSH public key. + SSHPublicKey *SSHPublicKey `type:"structure"` +} + +// String returns the string representation +func (s UploadSSHPublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadSSHPublicKeyOutput) GoString() string { + return s.String() +} + +// SetSSHPublicKey sets the SSHPublicKey field's value. +func (s *UploadSSHPublicKeyOutput) SetSSHPublicKey(v *SSHPublicKey) *UploadSSHPublicKeyOutput { + s.SSHPublicKey = v + return s +} + +type UploadServerCertificateInput struct { + _ struct{} `type:"structure"` + + // The contents of the public key certificate in PEM-encoded format. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // CertificateBody is a required field + CertificateBody *string `min:"1" type:"string" required:"true"` + + // The contents of the certificate chain. This is typically a concatenation + // of the PEM-encoded public key certificates of the chain. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + CertificateChain *string `min:"1" type:"string"` + + // The path for the server certificate. For more information about paths, see + // IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the IAM User Guide. + // + // This parameter is optional. If it is not included, it defaults to a slash + // (/). 
This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of either a forward slash (/) by itself + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. + // + // If you are uploading a server certificate specifically for use with Amazon + // CloudFront distributions, you must specify a path using the path parameter. + // The path must begin with /cloudfront and must include a trailing slash (for + // example, /cloudfront/test/). + Path *string `min:"1" type:"string"` + + // The contents of the private key in PEM-encoded format. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PrivateKey is a required field + PrivateKey *string `min:"1" type:"string" required:"true"` + + // The name for the server certificate. Do not include the path in this value. + // The name of the certificate cannot contain any spaces. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // ServerCertificateName is a required field + ServerCertificateName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UploadServerCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadServerCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UploadServerCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UploadServerCertificateInput"} + if s.CertificateBody == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateBody")) + } + if s.CertificateBody != nil && len(*s.CertificateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CertificateBody", 1)) + } + if s.CertificateChain != nil && len(*s.CertificateChain) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CertificateChain", 1)) + } + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + if s.PrivateKey == nil { + invalidParams.Add(request.NewErrParamRequired("PrivateKey")) + } + if s.PrivateKey != nil && len(*s.PrivateKey) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PrivateKey", 1)) + } + if s.ServerCertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("ServerCertificateName")) + } + if s.ServerCertificateName != nil && len(*s.ServerCertificateName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServerCertificateName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateBody sets the CertificateBody field's value. 
+func (s *UploadServerCertificateInput) SetCertificateBody(v string) *UploadServerCertificateInput { + s.CertificateBody = &v + return s +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *UploadServerCertificateInput) SetCertificateChain(v string) *UploadServerCertificateInput { + s.CertificateChain = &v + return s +} + +// SetPath sets the Path field's value. +func (s *UploadServerCertificateInput) SetPath(v string) *UploadServerCertificateInput { + s.Path = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *UploadServerCertificateInput) SetPrivateKey(v string) *UploadServerCertificateInput { + s.PrivateKey = &v + return s +} + +// SetServerCertificateName sets the ServerCertificateName field's value. +func (s *UploadServerCertificateInput) SetServerCertificateName(v string) *UploadServerCertificateInput { + s.ServerCertificateName = &v + return s +} + +// Contains the response to a successful UploadServerCertificate request. +type UploadServerCertificateOutput struct { + _ struct{} `type:"structure"` + + // The meta information of the uploaded server certificate without its certificate + // body, certificate chain, and private key. + ServerCertificateMetadata *ServerCertificateMetadata `type:"structure"` +} + +// String returns the string representation +func (s UploadServerCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadServerCertificateOutput) GoString() string { + return s.String() +} + +// SetServerCertificateMetadata sets the ServerCertificateMetadata field's value. +func (s *UploadServerCertificateOutput) SetServerCertificateMetadata(v *ServerCertificateMetadata) *UploadServerCertificateOutput { + s.ServerCertificateMetadata = v + return s +} + +type UploadSigningCertificateInput struct { + _ struct{} `type:"structure"` + + // The contents of the signing certificate. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // CertificateBody is a required field + CertificateBody *string `min:"1" type:"string" required:"true"` + + // The name of the user the signing certificate is for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + UserName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UploadSigningCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadSigningCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UploadSigningCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UploadSigningCertificateInput"} + if s.CertificateBody == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateBody")) + } + if s.CertificateBody != nil && len(*s.CertificateBody) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CertificateBody", 1)) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateBody sets the CertificateBody field's value. +func (s *UploadSigningCertificateInput) SetCertificateBody(v string) *UploadSigningCertificateInput { + s.CertificateBody = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UploadSigningCertificateInput) SetUserName(v string) *UploadSigningCertificateInput { + s.UserName = &v + return s +} + +// Contains the response to a successful UploadSigningCertificate request. +type UploadSigningCertificateOutput struct { + _ struct{} `type:"structure"` + + // Information about the certificate. + // + // Certificate is a required field + Certificate *SigningCertificate `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UploadSigningCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadSigningCertificateOutput) GoString() string { + return s.String() +} + +// SetCertificate sets the Certificate field's value. +func (s *UploadSigningCertificateOutput) SetCertificate(v *SigningCertificate) *UploadSigningCertificateOutput { + s.Certificate = v + return s +} + +// Contains information about an IAM user entity. +// +// This data type is used as a response element in the following operations: +// +// * CreateUser +// +// * GetUser +// +// * ListUsers +type User struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that identifies the user. For more information + // about ARNs and how to use ARNs in policies, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the user was created. + // + // CreateDate is a required field + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the user's password was last used to sign in to an AWS website. For + // a list of AWS websites that capture a user's last sign-in time, see the Credential + // Reports (http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html) + // topic in the Using IAM guide. If a password is used more than once in a five-minute + // span, only the first use is returned in this field. If the field is null + // (no value) then it indicates that they never signed in with a password. This + // can be because: + // + // * The user never had a password. + // + // * A password exists but has not been used since IAM started tracking this + // information on October 20th, 2014. + // + // A null does not mean that the user never had a password. 
Also, if the user + // does not currently have a password, but had one in the past, then this field + // contains the date and time the most recent password was used. + // + // This value is returned only in the GetUser and ListUsers operations. + PasswordLastUsed *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The path to the user. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // Path is a required field + Path *string `min:"1" type:"string" required:"true"` + + // The stable and unique string identifying the user. For more information about + // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + // + // UserId is a required field + UserId *string `min:"16" type:"string" required:"true"` + + // The friendly name identifying the user. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s User) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s User) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *User) SetArn(v string) *User { + s.Arn = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *User) SetCreateDate(v time.Time) *User { + s.CreateDate = &v + return s +} + +// SetPasswordLastUsed sets the PasswordLastUsed field's value. +func (s *User) SetPasswordLastUsed(v time.Time) *User { + s.PasswordLastUsed = &v + return s +} + +// SetPath sets the Path field's value. +func (s *User) SetPath(v string) *User { + s.Path = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *User) SetUserId(v string) *User { + s.UserId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *User) SetUserName(v string) *User { + s.UserName = &v + return s +} + +// Contains information about an IAM user, including all the user's policies +// and all the IAM groups the user is in. +// +// This data type is used as a response element in the GetAccountAuthorizationDetails +// operation. +type UserDetail struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN). ARNs are unique identifiers for AWS resources. + // + // For more information about ARNs, go to Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + Arn *string `min:"20" type:"string"` + + // A list of the managed policies attached to the user. + AttachedManagedPolicies []*AttachedPolicy `type:"list"` + + // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), + // when the user was created. + CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // A list of IAM groups that the user is in. + GroupList []*string `type:"list"` + + // The path to the user. For more information about paths, see IAM Identifiers + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. + Path *string `min:"1" type:"string"` + + // The stable and unique string identifying the user. For more information about + // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) + // in the Using IAM guide. 
+ UserId *string `min:"16" type:"string"` + + // The friendly name identifying the user. + UserName *string `min:"1" type:"string"` + + // A list of the inline policies embedded in the user. + UserPolicyList []*PolicyDetail `type:"list"` +} + +// String returns the string representation +func (s UserDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UserDetail) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *UserDetail) SetArn(v string) *UserDetail { + s.Arn = &v + return s +} + +// SetAttachedManagedPolicies sets the AttachedManagedPolicies field's value. +func (s *UserDetail) SetAttachedManagedPolicies(v []*AttachedPolicy) *UserDetail { + s.AttachedManagedPolicies = v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *UserDetail) SetCreateDate(v time.Time) *UserDetail { + s.CreateDate = &v + return s +} + +// SetGroupList sets the GroupList field's value. +func (s *UserDetail) SetGroupList(v []*string) *UserDetail { + s.GroupList = v + return s +} + +// SetPath sets the Path field's value. +func (s *UserDetail) SetPath(v string) *UserDetail { + s.Path = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *UserDetail) SetUserId(v string) *UserDetail { + s.UserId = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *UserDetail) SetUserName(v string) *UserDetail { + s.UserName = &v + return s +} + +// SetUserPolicyList sets the UserPolicyList field's value. +func (s *UserDetail) SetUserPolicyList(v []*PolicyDetail) *UserDetail { + s.UserPolicyList = v + return s +} + +// Contains information about a virtual MFA device. +type VirtualMFADevice struct { + _ struct{} `type:"structure"` + + // The Base32 seed defined as specified in RFC3548 (https://tools.ietf.org/html/rfc3548.txt). + // The Base32StringSeed is Base64-encoded. + // + // Base32StringSeed is automatically base64 encoded/decoded by the SDK. + Base32StringSeed []byte `type:"blob"` + + // The date and time on which the virtual MFA device was enabled. + EnableDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // A QR code PNG image that encodes otpauth://totp/$virtualMFADeviceName@$AccountName?secret=$Base32String + // where $virtualMFADeviceName is one of the create call arguments, AccountName + // is the user name if set (otherwise, the account ID otherwise), and Base32String + // is the seed in Base32 format. The Base32String value is Base64-encoded. + // + // QRCodePNG is automatically base64 encoded/decoded by the SDK. + QRCodePNG []byte `type:"blob"` + + // The serial number associated with VirtualMFADevice. + // + // SerialNumber is a required field + SerialNumber *string `min:"9" type:"string" required:"true"` + + // The IAM user associated with this virtual MFA device. + User *User `type:"structure"` +} + +// String returns the string representation +func (s VirtualMFADevice) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VirtualMFADevice) GoString() string { + return s.String() +} + +// SetBase32StringSeed sets the Base32StringSeed field's value. +func (s *VirtualMFADevice) SetBase32StringSeed(v []byte) *VirtualMFADevice { + s.Base32StringSeed = v + return s +} + +// SetEnableDate sets the EnableDate field's value. 
+func (s *VirtualMFADevice) SetEnableDate(v time.Time) *VirtualMFADevice { + s.EnableDate = &v + return s +} + +// SetQRCodePNG sets the QRCodePNG field's value. +func (s *VirtualMFADevice) SetQRCodePNG(v []byte) *VirtualMFADevice { + s.QRCodePNG = v + return s +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *VirtualMFADevice) SetSerialNumber(v string) *VirtualMFADevice { + s.SerialNumber = &v + return s +} + +// SetUser sets the User field's value. +func (s *VirtualMFADevice) SetUser(v *User) *VirtualMFADevice { + s.User = v + return s +} + +const ( + // ContextKeyTypeEnumString is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumString = "string" + + // ContextKeyTypeEnumStringList is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumStringList = "stringList" + + // ContextKeyTypeEnumNumeric is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumNumeric = "numeric" + + // ContextKeyTypeEnumNumericList is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumNumericList = "numericList" + + // ContextKeyTypeEnumBoolean is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumBoolean = "boolean" + + // ContextKeyTypeEnumBooleanList is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumBooleanList = "booleanList" + + // ContextKeyTypeEnumIp is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumIp = "ip" + + // ContextKeyTypeEnumIpList is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumIpList = "ipList" + + // ContextKeyTypeEnumBinary is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumBinary = "binary" + + // ContextKeyTypeEnumBinaryList is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumBinaryList = "binaryList" + + // ContextKeyTypeEnumDate is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumDate = "date" + + // ContextKeyTypeEnumDateList is a ContextKeyTypeEnum enum value + ContextKeyTypeEnumDateList = "dateList" +) + +const ( + // DeletionTaskStatusTypeSucceeded is a DeletionTaskStatusType enum value + DeletionTaskStatusTypeSucceeded = "SUCCEEDED" + + // DeletionTaskStatusTypeInProgress is a DeletionTaskStatusType enum value + DeletionTaskStatusTypeInProgress = "IN_PROGRESS" + + // DeletionTaskStatusTypeFailed is a DeletionTaskStatusType enum value + DeletionTaskStatusTypeFailed = "FAILED" + + // DeletionTaskStatusTypeNotStarted is a DeletionTaskStatusType enum value + DeletionTaskStatusTypeNotStarted = "NOT_STARTED" +) + +const ( + // EntityTypeUser is a EntityType enum value + EntityTypeUser = "User" + + // EntityTypeRole is a EntityType enum value + EntityTypeRole = "Role" + + // EntityTypeGroup is a EntityType enum value + EntityTypeGroup = "Group" + + // EntityTypeLocalManagedPolicy is a EntityType enum value + EntityTypeLocalManagedPolicy = "LocalManagedPolicy" + + // EntityTypeAwsmanagedPolicy is a EntityType enum value + EntityTypeAwsmanagedPolicy = "AWSManagedPolicy" +) + +const ( + // PolicyEvaluationDecisionTypeAllowed is a PolicyEvaluationDecisionType enum value + PolicyEvaluationDecisionTypeAllowed = "allowed" + + // PolicyEvaluationDecisionTypeExplicitDeny is a PolicyEvaluationDecisionType enum value + PolicyEvaluationDecisionTypeExplicitDeny = "explicitDeny" + + // PolicyEvaluationDecisionTypeImplicitDeny is a PolicyEvaluationDecisionType enum value + PolicyEvaluationDecisionTypeImplicitDeny = "implicitDeny" +) + +const ( + // PolicySourceTypeUser is a PolicySourceType enum value + PolicySourceTypeUser = "user" + + // PolicySourceTypeGroup is a PolicySourceType enum value + PolicySourceTypeGroup = "group" + + // PolicySourceTypeRole 
is a PolicySourceType enum value + PolicySourceTypeRole = "role" + + // PolicySourceTypeAwsManaged is a PolicySourceType enum value + PolicySourceTypeAwsManaged = "aws-managed" + + // PolicySourceTypeUserManaged is a PolicySourceType enum value + PolicySourceTypeUserManaged = "user-managed" + + // PolicySourceTypeResource is a PolicySourceType enum value + PolicySourceTypeResource = "resource" + + // PolicySourceTypeNone is a PolicySourceType enum value + PolicySourceTypeNone = "none" +) + +const ( + // ReportFormatTypeTextCsv is a ReportFormatType enum value + ReportFormatTypeTextCsv = "text/csv" +) + +const ( + // ReportStateTypeStarted is a ReportStateType enum value + ReportStateTypeStarted = "STARTED" + + // ReportStateTypeInprogress is a ReportStateType enum value + ReportStateTypeInprogress = "INPROGRESS" + + // ReportStateTypeComplete is a ReportStateType enum value + ReportStateTypeComplete = "COMPLETE" +) + +const ( + // AssignmentStatusTypeAssigned is a assignmentStatusType enum value + AssignmentStatusTypeAssigned = "Assigned" + + // AssignmentStatusTypeUnassigned is a assignmentStatusType enum value + AssignmentStatusTypeUnassigned = "Unassigned" + + // AssignmentStatusTypeAny is a assignmentStatusType enum value + AssignmentStatusTypeAny = "Any" +) + +const ( + // EncodingTypeSsh is a encodingType enum value + EncodingTypeSsh = "SSH" + + // EncodingTypePem is a encodingType enum value + EncodingTypePem = "PEM" +) + +const ( + // PolicyScopeTypeAll is a policyScopeType enum value + PolicyScopeTypeAll = "All" + + // PolicyScopeTypeAws is a policyScopeType enum value + PolicyScopeTypeAws = "AWS" + + // PolicyScopeTypeLocal is a policyScopeType enum value + PolicyScopeTypeLocal = "Local" +) + +const ( + // StatusTypeActive is a statusType enum value + StatusTypeActive = "Active" + + // StatusTypeInactive is a statusType enum value + StatusTypeInactive = "Inactive" +) + +const ( + // SummaryKeyTypeUsers is a summaryKeyType enum value + SummaryKeyTypeUsers = "Users" + + // SummaryKeyTypeUsersQuota is a summaryKeyType enum value + SummaryKeyTypeUsersQuota = "UsersQuota" + + // SummaryKeyTypeGroups is a summaryKeyType enum value + SummaryKeyTypeGroups = "Groups" + + // SummaryKeyTypeGroupsQuota is a summaryKeyType enum value + SummaryKeyTypeGroupsQuota = "GroupsQuota" + + // SummaryKeyTypeServerCertificates is a summaryKeyType enum value + SummaryKeyTypeServerCertificates = "ServerCertificates" + + // SummaryKeyTypeServerCertificatesQuota is a summaryKeyType enum value + SummaryKeyTypeServerCertificatesQuota = "ServerCertificatesQuota" + + // SummaryKeyTypeUserPolicySizeQuota is a summaryKeyType enum value + SummaryKeyTypeUserPolicySizeQuota = "UserPolicySizeQuota" + + // SummaryKeyTypeGroupPolicySizeQuota is a summaryKeyType enum value + SummaryKeyTypeGroupPolicySizeQuota = "GroupPolicySizeQuota" + + // SummaryKeyTypeGroupsPerUserQuota is a summaryKeyType enum value + SummaryKeyTypeGroupsPerUserQuota = "GroupsPerUserQuota" + + // SummaryKeyTypeSigningCertificatesPerUserQuota is a summaryKeyType enum value + SummaryKeyTypeSigningCertificatesPerUserQuota = "SigningCertificatesPerUserQuota" + + // SummaryKeyTypeAccessKeysPerUserQuota is a summaryKeyType enum value + SummaryKeyTypeAccessKeysPerUserQuota = "AccessKeysPerUserQuota" + + // SummaryKeyTypeMfadevices is a summaryKeyType enum value + SummaryKeyTypeMfadevices = "MFADevices" + + // SummaryKeyTypeMfadevicesInUse is a summaryKeyType enum value + SummaryKeyTypeMfadevicesInUse = "MFADevicesInUse" + + // 
SummaryKeyTypeAccountMfaenabled is a summaryKeyType enum value + SummaryKeyTypeAccountMfaenabled = "AccountMFAEnabled" + + // SummaryKeyTypeAccountAccessKeysPresent is a summaryKeyType enum value + SummaryKeyTypeAccountAccessKeysPresent = "AccountAccessKeysPresent" + + // SummaryKeyTypeAccountSigningCertificatesPresent is a summaryKeyType enum value + SummaryKeyTypeAccountSigningCertificatesPresent = "AccountSigningCertificatesPresent" + + // SummaryKeyTypeAttachedPoliciesPerGroupQuota is a summaryKeyType enum value + SummaryKeyTypeAttachedPoliciesPerGroupQuota = "AttachedPoliciesPerGroupQuota" + + // SummaryKeyTypeAttachedPoliciesPerRoleQuota is a summaryKeyType enum value + SummaryKeyTypeAttachedPoliciesPerRoleQuota = "AttachedPoliciesPerRoleQuota" + + // SummaryKeyTypeAttachedPoliciesPerUserQuota is a summaryKeyType enum value + SummaryKeyTypeAttachedPoliciesPerUserQuota = "AttachedPoliciesPerUserQuota" + + // SummaryKeyTypePolicies is a summaryKeyType enum value + SummaryKeyTypePolicies = "Policies" + + // SummaryKeyTypePoliciesQuota is a summaryKeyType enum value + SummaryKeyTypePoliciesQuota = "PoliciesQuota" + + // SummaryKeyTypePolicySizeQuota is a summaryKeyType enum value + SummaryKeyTypePolicySizeQuota = "PolicySizeQuota" + + // SummaryKeyTypePolicyVersionsInUse is a summaryKeyType enum value + SummaryKeyTypePolicyVersionsInUse = "PolicyVersionsInUse" + + // SummaryKeyTypePolicyVersionsInUseQuota is a summaryKeyType enum value + SummaryKeyTypePolicyVersionsInUseQuota = "PolicyVersionsInUseQuota" + + // SummaryKeyTypeVersionsPerPolicyQuota is a summaryKeyType enum value + SummaryKeyTypeVersionsPerPolicyQuota = "VersionsPerPolicyQuota" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/doc.go b/vendor/github.com/aws/aws-sdk-go/service/iam/doc.go new file mode 100644 index 00000000..d8766fbf --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/doc.go @@ -0,0 +1,80 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package iam provides the client and types for making API +// requests to AWS Identity and Access Management. +// +// AWS Identity and Access Management (IAM) is a web service that you can use +// to manage users and user permissions under your AWS account. This guide provides +// descriptions of IAM actions that you can call programmatically. For general +// information about IAM, see AWS Identity and Access Management (IAM) (http://aws.amazon.com/iam/). +// For the user guide for IAM, see Using IAM (http://docs.aws.amazon.com/IAM/latest/UserGuide/). +// +// AWS provides SDKs that consist of libraries and sample code for various programming +// languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs +// provide a convenient way to create programmatic access to IAM and AWS. For +// example, the SDKs take care of tasks such as cryptographically signing requests +// (see below), managing errors, and retrying requests automatically. For information +// about the AWS SDKs, including how to download and install them, see the Tools +// for Amazon Web Services (http://aws.amazon.com/tools/) page. +// +// We recommend that you use the AWS SDKs to make programmatic API calls to +// IAM. However, you can also use the IAM Query API to make direct calls to +// the IAM web service. To learn more about the IAM Query API, see Making Query +// Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in the Using IAM guide. IAM supports GET and POST requests for all actions. 
+// That is, the API does not require you to use GET for some actions and POST +// for others. However, GET requests are subject to the limitation size of a +// URL. Therefore, for operations that require larger sizes, use a POST request. +// +// Signing Requests +// +// Requests must be signed using an access key ID and a secret access key. We +// strongly recommend that you do not use your AWS account access key ID and +// secret access key for everyday work with IAM. You can use the access key +// ID and secret access key for an IAM user or you can use the AWS Security +// Token Service to generate temporary security credentials and use those to +// sign requests. +// +// To sign requests, we recommend that you use Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// If you have an existing application that uses Signature Version 2, you do +// not have to update it to use Signature Version 4. However, some operations +// now require Signature Version 4. The documentation for operations that require +// version 4 indicate this requirement. +// +// Additional Resources +// +// For more information, see the following: +// +// * AWS Security Credentials (http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html). +// This topic provides general information about the types of credentials +// used for accessing AWS. +// +// * IAM Best Practices (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html). +// This topic presents a list of suggestions for using the IAM service to +// help secure your AWS resources. +// +// * Signing AWS API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html). +// This set of topics walk you through the process of signing a request using +// an access key ID and secret access key. +// +// See https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08 for more information on this service. +// +// See iam package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/iam/ +// +// Using the Client +// +// To contact AWS Identity and Access Management with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Identity and Access Management client IAM for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/iam/#New +package iam diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go b/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go new file mode 100644 index 00000000..470e19b3 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go @@ -0,0 +1,185 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package iam + +const ( + + // ErrCodeCredentialReportExpiredException for service response error code + // "ReportExpired". + // + // The request was rejected because the most recent credential report has expired. + // To generate a new credential report, use GenerateCredentialReport. 
For more + // information about credential report expiration, see Getting Credential Reports + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html) + // in the IAM User Guide. + ErrCodeCredentialReportExpiredException = "ReportExpired" + + // ErrCodeCredentialReportNotPresentException for service response error code + // "ReportNotPresent". + // + // The request was rejected because the credential report does not exist. To + // generate a credential report, use GenerateCredentialReport. + ErrCodeCredentialReportNotPresentException = "ReportNotPresent" + + // ErrCodeCredentialReportNotReadyException for service response error code + // "ReportInProgress". + // + // The request was rejected because the credential report is still being generated. + ErrCodeCredentialReportNotReadyException = "ReportInProgress" + + // ErrCodeDeleteConflictException for service response error code + // "DeleteConflict". + // + // The request was rejected because it attempted to delete a resource that has + // attached subordinate entities. The error message describes these entities. + ErrCodeDeleteConflictException = "DeleteConflict" + + // ErrCodeDuplicateCertificateException for service response error code + // "DuplicateCertificate". + // + // The request was rejected because the same certificate is associated with + // an IAM user in the account. + ErrCodeDuplicateCertificateException = "DuplicateCertificate" + + // ErrCodeDuplicateSSHPublicKeyException for service response error code + // "DuplicateSSHPublicKey". + // + // The request was rejected because the SSH public key is already associated + // with the specified IAM user. + ErrCodeDuplicateSSHPublicKeyException = "DuplicateSSHPublicKey" + + // ErrCodeEntityAlreadyExistsException for service response error code + // "EntityAlreadyExists". + // + // The request was rejected because it attempted to create a resource that already + // exists. + ErrCodeEntityAlreadyExistsException = "EntityAlreadyExists" + + // ErrCodeEntityTemporarilyUnmodifiableException for service response error code + // "EntityTemporarilyUnmodifiable". + // + // The request was rejected because it referenced an entity that is temporarily + // unmodifiable, such as a user name that was deleted and then recreated. The + // error indicates that the request is likely to succeed if you try again after + // waiting several minutes. The error message describes the entity. + ErrCodeEntityTemporarilyUnmodifiableException = "EntityTemporarilyUnmodifiable" + + // ErrCodeInvalidAuthenticationCodeException for service response error code + // "InvalidAuthenticationCode". + // + // The request was rejected because the authentication code was not recognized. + // The error message describes the specific error. + ErrCodeInvalidAuthenticationCodeException = "InvalidAuthenticationCode" + + // ErrCodeInvalidCertificateException for service response error code + // "InvalidCertificate". + // + // The request was rejected because the certificate is invalid. + ErrCodeInvalidCertificateException = "InvalidCertificate" + + // ErrCodeInvalidInputException for service response error code + // "InvalidInput". + // + // The request was rejected because an invalid or out-of-range value was supplied + // for an input parameter. + ErrCodeInvalidInputException = "InvalidInput" + + // ErrCodeInvalidPublicKeyException for service response error code + // "InvalidPublicKey". + // + // The request was rejected because the public key is malformed or otherwise + // invalid. 
+ ErrCodeInvalidPublicKeyException = "InvalidPublicKey" + + // ErrCodeInvalidUserTypeException for service response error code + // "InvalidUserType". + // + // The request was rejected because the type of user for the transaction was + // incorrect. + ErrCodeInvalidUserTypeException = "InvalidUserType" + + // ErrCodeKeyPairMismatchException for service response error code + // "KeyPairMismatch". + // + // The request was rejected because the public key certificate and the private + // key do not match. + ErrCodeKeyPairMismatchException = "KeyPairMismatch" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceeded". + // + // The request was rejected because it attempted to create resources beyond + // the current AWS account limits. The error message describes the limit exceeded. + ErrCodeLimitExceededException = "LimitExceeded" + + // ErrCodeMalformedCertificateException for service response error code + // "MalformedCertificate". + // + // The request was rejected because the certificate was malformed or expired. + // The error message describes the specific error. + ErrCodeMalformedCertificateException = "MalformedCertificate" + + // ErrCodeMalformedPolicyDocumentException for service response error code + // "MalformedPolicyDocument". + // + // The request was rejected because the policy document was malformed. The error + // message describes the specific error. + ErrCodeMalformedPolicyDocumentException = "MalformedPolicyDocument" + + // ErrCodeNoSuchEntityException for service response error code + // "NoSuchEntity". + // + // The request was rejected because it referenced an entity that does not exist. + // The error message describes the entity. + ErrCodeNoSuchEntityException = "NoSuchEntity" + + // ErrCodePasswordPolicyViolationException for service response error code + // "PasswordPolicyViolation". + // + // The request was rejected because the provided password did not meet the requirements + // imposed by the account password policy. + ErrCodePasswordPolicyViolationException = "PasswordPolicyViolation" + + // ErrCodePolicyEvaluationException for service response error code + // "PolicyEvaluation". + // + // The request failed because a provided policy could not be successfully evaluated. + // An additional detailed message indicates the source of the failure. + ErrCodePolicyEvaluationException = "PolicyEvaluation" + + // ErrCodePolicyNotAttachableException for service response error code + // "PolicyNotAttachable". + // + // The request failed because AWS service role policies can only be attached + // to the service-linked role for that service. + ErrCodePolicyNotAttachableException = "PolicyNotAttachable" + + // ErrCodeServiceFailureException for service response error code + // "ServiceFailure". + // + // The request processing has failed because of an unknown error, exception + // or failure. + ErrCodeServiceFailureException = "ServiceFailure" + + // ErrCodeServiceNotSupportedException for service response error code + // "NotSupportedService". + // + // The specified service does not support service-specific credentials. + ErrCodeServiceNotSupportedException = "NotSupportedService" + + // ErrCodeUnmodifiableEntityException for service response error code + // "UnmodifiableEntity". + // + // The request was rejected because only the service that depends on the service-linked + // role can modify or delete the role on your behalf. The error message includes + // the name of the service that depends on this service-linked role. 
You must + // request the change through that service. + ErrCodeUnmodifiableEntityException = "UnmodifiableEntity" + + // ErrCodeUnrecognizedPublicKeyEncodingException for service response error code + // "UnrecognizedPublicKeyEncoding". + // + // The request was rejected because the public key encoding format is unsupported + // or unrecognized. + ErrCodeUnrecognizedPublicKeyEncodingException = "UnrecognizedPublicKeyEncoding" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/service.go b/vendor/github.com/aws/aws-sdk-go/service/iam/service.go new file mode 100644 index 00000000..4f798c63 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/service.go @@ -0,0 +1,93 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package iam + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/query" +) + +// IAM provides the API operation methods for making requests to +// AWS Identity and Access Management. See this package's package overview docs +// for details on the service. +// +// IAM methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type IAM struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "iam" // Service endpoint prefix API calls made to. + EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. +) + +// New creates a new instance of the IAM client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a IAM client from just a session. +// svc := iam.New(mySession) +// +// // Create a IAM client with additional configuration +// svc := iam.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *IAM { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *IAM { + svc := &IAM{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2010-05-08", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(query.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(query.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(query.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(query.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a IAM operation and runs any +// custom request initialization. 
+func (c *IAM) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/iam/waiters.go new file mode 100644 index 00000000..7a35d9e3 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/waiters.go @@ -0,0 +1,112 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package iam + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilInstanceProfileExists uses the IAM API operation +// GetInstanceProfile to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *IAM) WaitUntilInstanceProfileExists(input *GetInstanceProfileInput) error { + return c.WaitUntilInstanceProfileExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilInstanceProfileExistsWithContext is an extended version of WaitUntilInstanceProfileExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) WaitUntilInstanceProfileExistsWithContext(ctx aws.Context, input *GetInstanceProfileInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilInstanceProfileExists", + MaxAttempts: 40, + Delay: request.ConstantWaiterDelay(1 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 200, + }, + { + State: request.RetryWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 404, + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *GetInstanceProfileInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetInstanceProfileRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilUserExists uses the IAM API operation +// GetUser to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *IAM) WaitUntilUserExists(input *GetUserInput) error { + return c.WaitUntilUserExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilUserExistsWithContext is an extended version of WaitUntilUserExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
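// Illustrative sketch (editor's example, not part of the vendored file): how a
// caller might use the generated IAM client and the WaitUntilUserExists waiter
// shown above. The session setup and the user name "example-user" are
// assumptions for this example only.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	// Create the IAM client from a shared session, as in the New() doc comment.
	sess := session.Must(session.NewSession())
	svc := iam.New(sess)

	// Block (up to the waiter's 20 one-second attempts) until the user is visible.
	err := svc.WaitUntilUserExists(&iam.GetUserInput{
		UserName: aws.String("example-user"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("user exists")
}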
+func (c *IAM) WaitUntilUserExistsWithContext(ctx aws.Context, input *GetUserInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilUserExists", + MaxAttempts: 20, + Delay: request.ConstantWaiterDelay(1 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 200, + }, + { + State: request.RetryWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "NoSuchEntity", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *GetUserInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetUserRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go new file mode 100644 index 00000000..a27823fd --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go @@ -0,0 +1,21247 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package s3 + +import ( + "fmt" + "io" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/restxml" +) + +const opAbortMultipartUpload = "AbortMultipartUpload" + +// AbortMultipartUploadRequest generates a "aws/request.Request" representing the +// client's request for the AbortMultipartUpload operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AbortMultipartUpload for more information on using the AbortMultipartUpload +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AbortMultipartUploadRequest method. +// req, resp := client.AbortMultipartUploadRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload +func (c *S3) AbortMultipartUploadRequest(input *AbortMultipartUploadInput) (req *request.Request, output *AbortMultipartUploadOutput) { + op := &request.Operation{ + Name: opAbortMultipartUpload, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &AbortMultipartUploadInput{} + } + + output = &AbortMultipartUploadOutput{} + req = c.newRequest(op, input, output) + return +} + +// AbortMultipartUpload API operation for Amazon Simple Storage Service. +// +// Aborts a multipart upload. +// +// To verify that all parts have been removed, so you don't get charged for +// the part storage, you should call the List Parts operation and ensure the +// parts list is empty. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation AbortMultipartUpload for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchUpload "NoSuchUpload" +// The specified multipart upload does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload +func (c *S3) AbortMultipartUpload(input *AbortMultipartUploadInput) (*AbortMultipartUploadOutput, error) { + req, out := c.AbortMultipartUploadRequest(input) + return out, req.Send() +} + +// AbortMultipartUploadWithContext is the same as AbortMultipartUpload with the addition of +// the ability to pass a context and additional request options. +// +// See AbortMultipartUpload for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) AbortMultipartUploadWithContext(ctx aws.Context, input *AbortMultipartUploadInput, opts ...request.Option) (*AbortMultipartUploadOutput, error) { + req, out := c.AbortMultipartUploadRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCompleteMultipartUpload = "CompleteMultipartUpload" + +// CompleteMultipartUploadRequest generates a "aws/request.Request" representing the +// client's request for the CompleteMultipartUpload operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CompleteMultipartUpload for more information on using the CompleteMultipartUpload +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CompleteMultipartUploadRequest method. +// req, resp := client.CompleteMultipartUploadRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload +func (c *S3) CompleteMultipartUploadRequest(input *CompleteMultipartUploadInput) (req *request.Request, output *CompleteMultipartUploadOutput) { + op := &request.Operation{ + Name: opCompleteMultipartUpload, + HTTPMethod: "POST", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &CompleteMultipartUploadInput{} + } + + output = &CompleteMultipartUploadOutput{} + req = c.newRequest(op, input, output) + return +} + +// CompleteMultipartUpload API operation for Amazon Simple Storage Service. +// +// Completes a multipart upload by assembling previously uploaded parts. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation CompleteMultipartUpload for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload +func (c *S3) CompleteMultipartUpload(input *CompleteMultipartUploadInput) (*CompleteMultipartUploadOutput, error) { + req, out := c.CompleteMultipartUploadRequest(input) + return out, req.Send() +} + +// CompleteMultipartUploadWithContext is the same as CompleteMultipartUpload with the addition of +// the ability to pass a context and additional request options. +// +// See CompleteMultipartUpload for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) CompleteMultipartUploadWithContext(ctx aws.Context, input *CompleteMultipartUploadInput, opts ...request.Option) (*CompleteMultipartUploadOutput, error) { + req, out := c.CompleteMultipartUploadRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCopyObject = "CopyObject" + +// CopyObjectRequest generates a "aws/request.Request" representing the +// client's request for the CopyObject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CopyObject for more information on using the CopyObject +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CopyObjectRequest method. +// req, resp := client.CopyObjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObject +func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, output *CopyObjectOutput) { + op := &request.Operation{ + Name: opCopyObject, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &CopyObjectInput{} + } + + output = &CopyObjectOutput{} + req = c.newRequest(op, input, output) + return +} + +// CopyObject API operation for Amazon Simple Storage Service. +// +// Creates a copy of an object that is already stored in Amazon S3. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation CopyObject for usage and error information. +// +// Returned Error Codes: +// * ErrCodeObjectNotInActiveTierError "ObjectNotInActiveTierError" +// The source object of the COPY operation is not in the active tier and is +// only stored in Amazon Glacier. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObject +func (c *S3) CopyObject(input *CopyObjectInput) (*CopyObjectOutput, error) { + req, out := c.CopyObjectRequest(input) + return out, req.Send() +} + +// CopyObjectWithContext is the same as CopyObject with the addition of +// the ability to pass a context and additional request options. 
+// +// See CopyObject for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) CopyObjectWithContext(ctx aws.Context, input *CopyObjectInput, opts ...request.Option) (*CopyObjectOutput, error) { + req, out := c.CopyObjectRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateBucket = "CreateBucket" + +// CreateBucketRequest generates a "aws/request.Request" representing the +// client's request for the CreateBucket operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateBucket for more information on using the CreateBucket +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateBucketRequest method. +// req, resp := client.CreateBucketRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket +func (c *S3) CreateBucketRequest(input *CreateBucketInput) (req *request.Request, output *CreateBucketOutput) { + op := &request.Operation{ + Name: opCreateBucket, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}", + } + + if input == nil { + input = &CreateBucketInput{} + } + + output = &CreateBucketOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateBucket API operation for Amazon Simple Storage Service. +// +// Creates a new bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation CreateBucket for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBucketAlreadyExists "BucketAlreadyExists" +// The requested bucket name is not available. The bucket namespace is shared +// by all users of the system. Please select a different name and try again. +// +// * ErrCodeBucketAlreadyOwnedByYou "BucketAlreadyOwnedByYou" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket +func (c *S3) CreateBucket(input *CreateBucketInput) (*CreateBucketOutput, error) { + req, out := c.CreateBucketRequest(input) + return out, req.Send() +} + +// CreateBucketWithContext is the same as CreateBucket with the addition of +// the ability to pass a context and additional request options. +// +// See CreateBucket for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
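// Illustrative sketch (editor's example, not part of the vendored file): handling
// the error codes documented for CreateBucket above by type-asserting the returned
// error to awserr.Error. The bucket name is an assumption for this example only.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	_, err := svc.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String("example-bucket"),
	})
	// Inspect the service error code, as suggested by the generated comments.
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case s3.ErrCodeBucketAlreadyOwnedByYou:
			fmt.Println("bucket already owned by this account; nothing to do")
		case s3.ErrCodeBucketAlreadyExists:
			log.Fatal("bucket name is already taken by another account")
		default:
			log.Fatal(aerr)
		}
	}
}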
+func (c *S3) CreateBucketWithContext(ctx aws.Context, input *CreateBucketInput, opts ...request.Option) (*CreateBucketOutput, error) { + req, out := c.CreateBucketRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateMultipartUpload = "CreateMultipartUpload" + +// CreateMultipartUploadRequest generates a "aws/request.Request" representing the +// client's request for the CreateMultipartUpload operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateMultipartUpload for more information on using the CreateMultipartUpload +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateMultipartUploadRequest method. +// req, resp := client.CreateMultipartUploadRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUpload +func (c *S3) CreateMultipartUploadRequest(input *CreateMultipartUploadInput) (req *request.Request, output *CreateMultipartUploadOutput) { + op := &request.Operation{ + Name: opCreateMultipartUpload, + HTTPMethod: "POST", + HTTPPath: "/{Bucket}/{Key+}?uploads", + } + + if input == nil { + input = &CreateMultipartUploadInput{} + } + + output = &CreateMultipartUploadOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateMultipartUpload API operation for Amazon Simple Storage Service. +// +// Initiates a multipart upload and returns an upload ID. +// +// Note: After you initiate multipart upload and upload one or more parts, you +// must either complete or abort multipart upload in order to stop getting charged +// for storage of the uploaded parts. Only after you either complete or abort +// multipart upload, Amazon S3 frees up the parts storage and stops charging +// you for the parts storage. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation CreateMultipartUpload for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUpload +func (c *S3) CreateMultipartUpload(input *CreateMultipartUploadInput) (*CreateMultipartUploadOutput, error) { + req, out := c.CreateMultipartUploadRequest(input) + return out, req.Send() +} + +// CreateMultipartUploadWithContext is the same as CreateMultipartUpload with the addition of +// the ability to pass a context and additional request options. +// +// See CreateMultipartUpload for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) CreateMultipartUploadWithContext(ctx aws.Context, input *CreateMultipartUploadInput, opts ...request.Option) (*CreateMultipartUploadOutput, error) { + req, out := c.CreateMultipartUploadRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucket = "DeleteBucket" + +// DeleteBucketRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucket operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucket for more information on using the DeleteBucket +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketRequest method. +// req, resp := client.DeleteBucketRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket +func (c *S3) DeleteBucketRequest(input *DeleteBucketInput) (req *request.Request, output *DeleteBucketOutput) { + op := &request.Operation{ + Name: opDeleteBucket, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}", + } + + if input == nil { + input = &DeleteBucketInput{} + } + + output = &DeleteBucketOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucket API operation for Amazon Simple Storage Service. +// +// Deletes the bucket. All objects (including all object versions and Delete +// Markers) in the bucket must be deleted before the bucket itself can be deleted. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucket for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket +func (c *S3) DeleteBucket(input *DeleteBucketInput) (*DeleteBucketOutput, error) { + req, out := c.DeleteBucketRequest(input) + return out, req.Send() +} + +// DeleteBucketWithContext is the same as DeleteBucket with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucket for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketWithContext(ctx aws.Context, input *DeleteBucketInput, opts ...request.Option) (*DeleteBucketOutput, error) { + req, out := c.DeleteBucketRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteBucketAnalyticsConfiguration = "DeleteBucketAnalyticsConfiguration" + +// DeleteBucketAnalyticsConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketAnalyticsConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketAnalyticsConfiguration for more information on using the DeleteBucketAnalyticsConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketAnalyticsConfigurationRequest method. +// req, resp := client.DeleteBucketAnalyticsConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfiguration +func (c *S3) DeleteBucketAnalyticsConfigurationRequest(input *DeleteBucketAnalyticsConfigurationInput) (req *request.Request, output *DeleteBucketAnalyticsConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteBucketAnalyticsConfiguration, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?analytics", + } + + if input == nil { + input = &DeleteBucketAnalyticsConfigurationInput{} + } + + output = &DeleteBucketAnalyticsConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketAnalyticsConfiguration API operation for Amazon Simple Storage Service. +// +// Deletes an analytics configuration for the bucket (specified by the analytics +// configuration ID). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketAnalyticsConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfiguration +func (c *S3) DeleteBucketAnalyticsConfiguration(input *DeleteBucketAnalyticsConfigurationInput) (*DeleteBucketAnalyticsConfigurationOutput, error) { + req, out := c.DeleteBucketAnalyticsConfigurationRequest(input) + return out, req.Send() +} + +// DeleteBucketAnalyticsConfigurationWithContext is the same as DeleteBucketAnalyticsConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketAnalyticsConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
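// Illustrative sketch (editor's example, not part of the vendored file): the
// two-step Request/Send pattern described in the generated comments, using the
// DeleteBucketAnalyticsConfiguration operation shown above. The bucket name and
// analytics configuration ID are assumptions for this example only.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	// Build the request first (e.g. to attach custom handlers), then send it.
	req, _ := svc.DeleteBucketAnalyticsConfigurationRequest(&s3.DeleteBucketAnalyticsConfigurationInput{
		Bucket: aws.String("example-bucket"),
		Id:     aws.String("example-analytics-id"),
	})
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
}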
+func (c *S3) DeleteBucketAnalyticsConfigurationWithContext(ctx aws.Context, input *DeleteBucketAnalyticsConfigurationInput, opts ...request.Option) (*DeleteBucketAnalyticsConfigurationOutput, error) { + req, out := c.DeleteBucketAnalyticsConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucketCors = "DeleteBucketCors" + +// DeleteBucketCorsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketCors operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketCors for more information on using the DeleteBucketCors +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketCorsRequest method. +// req, resp := client.DeleteBucketCorsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCors +func (c *S3) DeleteBucketCorsRequest(input *DeleteBucketCorsInput) (req *request.Request, output *DeleteBucketCorsOutput) { + op := &request.Operation{ + Name: opDeleteBucketCors, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?cors", + } + + if input == nil { + input = &DeleteBucketCorsInput{} + } + + output = &DeleteBucketCorsOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketCors API operation for Amazon Simple Storage Service. +// +// Deletes the cors configuration information set for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketCors for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCors +func (c *S3) DeleteBucketCors(input *DeleteBucketCorsInput) (*DeleteBucketCorsOutput, error) { + req, out := c.DeleteBucketCorsRequest(input) + return out, req.Send() +} + +// DeleteBucketCorsWithContext is the same as DeleteBucketCors with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketCors for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketCorsWithContext(ctx aws.Context, input *DeleteBucketCorsInput, opts ...request.Option) (*DeleteBucketCorsOutput, error) { + req, out := c.DeleteBucketCorsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteBucketEncryption = "DeleteBucketEncryption" + +// DeleteBucketEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketEncryption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketEncryption for more information on using the DeleteBucketEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketEncryptionRequest method. +// req, resp := client.DeleteBucketEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketEncryption +func (c *S3) DeleteBucketEncryptionRequest(input *DeleteBucketEncryptionInput) (req *request.Request, output *DeleteBucketEncryptionOutput) { + op := &request.Operation{ + Name: opDeleteBucketEncryption, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?encryption", + } + + if input == nil { + input = &DeleteBucketEncryptionInput{} + } + + output = &DeleteBucketEncryptionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketEncryption API operation for Amazon Simple Storage Service. +// +// Deletes the server-side encryption configuration from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketEncryption for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketEncryption +func (c *S3) DeleteBucketEncryption(input *DeleteBucketEncryptionInput) (*DeleteBucketEncryptionOutput, error) { + req, out := c.DeleteBucketEncryptionRequest(input) + return out, req.Send() +} + +// DeleteBucketEncryptionWithContext is the same as DeleteBucketEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketEncryptionWithContext(ctx aws.Context, input *DeleteBucketEncryptionInput, opts ...request.Option) (*DeleteBucketEncryptionOutput, error) { + req, out := c.DeleteBucketEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteBucketInventoryConfiguration = "DeleteBucketInventoryConfiguration" + +// DeleteBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketInventoryConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketInventoryConfiguration for more information on using the DeleteBucketInventoryConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketInventoryConfigurationRequest method. +// req, resp := client.DeleteBucketInventoryConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfiguration +func (c *S3) DeleteBucketInventoryConfigurationRequest(input *DeleteBucketInventoryConfigurationInput) (req *request.Request, output *DeleteBucketInventoryConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteBucketInventoryConfiguration, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?inventory", + } + + if input == nil { + input = &DeleteBucketInventoryConfigurationInput{} + } + + output = &DeleteBucketInventoryConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketInventoryConfiguration API operation for Amazon Simple Storage Service. +// +// Deletes an inventory configuration (identified by the inventory ID) from +// the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketInventoryConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfiguration +func (c *S3) DeleteBucketInventoryConfiguration(input *DeleteBucketInventoryConfigurationInput) (*DeleteBucketInventoryConfigurationOutput, error) { + req, out := c.DeleteBucketInventoryConfigurationRequest(input) + return out, req.Send() +} + +// DeleteBucketInventoryConfigurationWithContext is the same as DeleteBucketInventoryConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketInventoryConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) DeleteBucketInventoryConfigurationWithContext(ctx aws.Context, input *DeleteBucketInventoryConfigurationInput, opts ...request.Option) (*DeleteBucketInventoryConfigurationOutput, error) { + req, out := c.DeleteBucketInventoryConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucketLifecycle = "DeleteBucketLifecycle" + +// DeleteBucketLifecycleRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketLifecycle operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketLifecycle for more information on using the DeleteBucketLifecycle +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketLifecycleRequest method. +// req, resp := client.DeleteBucketLifecycleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycle +func (c *S3) DeleteBucketLifecycleRequest(input *DeleteBucketLifecycleInput) (req *request.Request, output *DeleteBucketLifecycleOutput) { + op := &request.Operation{ + Name: opDeleteBucketLifecycle, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?lifecycle", + } + + if input == nil { + input = &DeleteBucketLifecycleInput{} + } + + output = &DeleteBucketLifecycleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketLifecycle API operation for Amazon Simple Storage Service. +// +// Deletes the lifecycle configuration from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketLifecycle for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycle +func (c *S3) DeleteBucketLifecycle(input *DeleteBucketLifecycleInput) (*DeleteBucketLifecycleOutput, error) { + req, out := c.DeleteBucketLifecycleRequest(input) + return out, req.Send() +} + +// DeleteBucketLifecycleWithContext is the same as DeleteBucketLifecycle with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketLifecycle for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
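// Illustrative sketch (editor's example, not part of the vendored file): passing
// a deadline to one of the *WithContext variants, here DeleteBucketLifecycleWithContext,
// whose doc comment appears just above. The bucket name and 30-second timeout are
// assumptions for this example only.
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	// Cancel the API call if it does not complete within 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	_, err := svc.DeleteBucketLifecycleWithContext(ctx, &s3.DeleteBucketLifecycleInput{
		Bucket: aws.String("example-bucket"),
	})
	if err != nil {
		log.Fatal(err)
	}
}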
+func (c *S3) DeleteBucketLifecycleWithContext(ctx aws.Context, input *DeleteBucketLifecycleInput, opts ...request.Option) (*DeleteBucketLifecycleOutput, error) { + req, out := c.DeleteBucketLifecycleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucketMetricsConfiguration = "DeleteBucketMetricsConfiguration" + +// DeleteBucketMetricsConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketMetricsConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketMetricsConfiguration for more information on using the DeleteBucketMetricsConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketMetricsConfigurationRequest method. +// req, resp := client.DeleteBucketMetricsConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfiguration +func (c *S3) DeleteBucketMetricsConfigurationRequest(input *DeleteBucketMetricsConfigurationInput) (req *request.Request, output *DeleteBucketMetricsConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteBucketMetricsConfiguration, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?metrics", + } + + if input == nil { + input = &DeleteBucketMetricsConfigurationInput{} + } + + output = &DeleteBucketMetricsConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketMetricsConfiguration API operation for Amazon Simple Storage Service. +// +// Deletes a metrics configuration (specified by the metrics configuration ID) +// from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketMetricsConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfiguration +func (c *S3) DeleteBucketMetricsConfiguration(input *DeleteBucketMetricsConfigurationInput) (*DeleteBucketMetricsConfigurationOutput, error) { + req, out := c.DeleteBucketMetricsConfigurationRequest(input) + return out, req.Send() +} + +// DeleteBucketMetricsConfigurationWithContext is the same as DeleteBucketMetricsConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketMetricsConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketMetricsConfigurationWithContext(ctx aws.Context, input *DeleteBucketMetricsConfigurationInput, opts ...request.Option) (*DeleteBucketMetricsConfigurationOutput, error) { + req, out := c.DeleteBucketMetricsConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucketPolicy = "DeleteBucketPolicy" + +// DeleteBucketPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketPolicy for more information on using the DeleteBucketPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketPolicyRequest method. +// req, resp := client.DeleteBucketPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicy +func (c *S3) DeleteBucketPolicyRequest(input *DeleteBucketPolicyInput) (req *request.Request, output *DeleteBucketPolicyOutput) { + op := &request.Operation{ + Name: opDeleteBucketPolicy, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?policy", + } + + if input == nil { + input = &DeleteBucketPolicyInput{} + } + + output = &DeleteBucketPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketPolicy API operation for Amazon Simple Storage Service. +// +// Deletes the policy from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketPolicy for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicy +func (c *S3) DeleteBucketPolicy(input *DeleteBucketPolicyInput) (*DeleteBucketPolicyOutput, error) { + req, out := c.DeleteBucketPolicyRequest(input) + return out, req.Send() +} + +// DeleteBucketPolicyWithContext is the same as DeleteBucketPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketPolicyWithContext(ctx aws.Context, input *DeleteBucketPolicyInput, opts ...request.Option) (*DeleteBucketPolicyOutput, error) { + req, out := c.DeleteBucketPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteBucketReplication = "DeleteBucketReplication" + +// DeleteBucketReplicationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketReplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketReplication for more information on using the DeleteBucketReplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketReplicationRequest method. +// req, resp := client.DeleteBucketReplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplication +func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput) (req *request.Request, output *DeleteBucketReplicationOutput) { + op := &request.Operation{ + Name: opDeleteBucketReplication, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?replication", + } + + if input == nil { + input = &DeleteBucketReplicationInput{} + } + + output = &DeleteBucketReplicationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketReplication API operation for Amazon Simple Storage Service. +// +// Deletes the replication configuration from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketReplication for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplication +func (c *S3) DeleteBucketReplication(input *DeleteBucketReplicationInput) (*DeleteBucketReplicationOutput, error) { + req, out := c.DeleteBucketReplicationRequest(input) + return out, req.Send() +} + +// DeleteBucketReplicationWithContext is the same as DeleteBucketReplication with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketReplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketReplicationWithContext(ctx aws.Context, input *DeleteBucketReplicationInput, opts ...request.Option) (*DeleteBucketReplicationOutput, error) { + req, out := c.DeleteBucketReplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucketTagging = "DeleteBucketTagging" + +// DeleteBucketTaggingRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketTagging operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketTagging for more information on using the DeleteBucketTagging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketTaggingRequest method. +// req, resp := client.DeleteBucketTaggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTagging +func (c *S3) DeleteBucketTaggingRequest(input *DeleteBucketTaggingInput) (req *request.Request, output *DeleteBucketTaggingOutput) { + op := &request.Operation{ + Name: opDeleteBucketTagging, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?tagging", + } + + if input == nil { + input = &DeleteBucketTaggingInput{} + } + + output = &DeleteBucketTaggingOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketTagging API operation for Amazon Simple Storage Service. +// +// Deletes the tags from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketTagging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTagging +func (c *S3) DeleteBucketTagging(input *DeleteBucketTaggingInput) (*DeleteBucketTaggingOutput, error) { + req, out := c.DeleteBucketTaggingRequest(input) + return out, req.Send() +} + +// DeleteBucketTaggingWithContext is the same as DeleteBucketTagging with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketTagging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketTaggingWithContext(ctx aws.Context, input *DeleteBucketTaggingInput, opts ...request.Option) (*DeleteBucketTaggingOutput, error) { + req, out := c.DeleteBucketTaggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBucketWebsite = "DeleteBucketWebsite" + +// DeleteBucketWebsiteRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketWebsite operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DeleteBucketWebsite for more information on using the DeleteBucketWebsite +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBucketWebsiteRequest method. +// req, resp := client.DeleteBucketWebsiteRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsite +func (c *S3) DeleteBucketWebsiteRequest(input *DeleteBucketWebsiteInput) (req *request.Request, output *DeleteBucketWebsiteOutput) { + op := &request.Operation{ + Name: opDeleteBucketWebsite, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?website", + } + + if input == nil { + input = &DeleteBucketWebsiteInput{} + } + + output = &DeleteBucketWebsiteOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketWebsite API operation for Amazon Simple Storage Service. +// +// This operation removes the website configuration from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketWebsite for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsite +func (c *S3) DeleteBucketWebsite(input *DeleteBucketWebsiteInput) (*DeleteBucketWebsiteOutput, error) { + req, out := c.DeleteBucketWebsiteRequest(input) + return out, req.Send() +} + +// DeleteBucketWebsiteWithContext is the same as DeleteBucketWebsite with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketWebsite for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketWebsiteWithContext(ctx aws.Context, input *DeleteBucketWebsiteInput, opts ...request.Option) (*DeleteBucketWebsiteOutput, error) { + req, out := c.DeleteBucketWebsiteRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteObject = "DeleteObject" + +// DeleteObjectRequest generates a "aws/request.Request" representing the +// client's request for the DeleteObject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteObject for more information on using the DeleteObject +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteObjectRequest method. 
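+//    // A minimal sketch of the params value used below, assuming the aws and
+//    // s3 packages are imported and client is an *s3.S3; the bucket and key
+//    // names are illustrative placeholders only.
+//    params := &s3.DeleteObjectInput{
+//        Bucket: aws.String("example-bucket"),
+//        Key:    aws.String("example-key"),
+//    }
+//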
+// req, resp := client.DeleteObjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject +func (c *S3) DeleteObjectRequest(input *DeleteObjectInput) (req *request.Request, output *DeleteObjectOutput) { + op := &request.Operation{ + Name: opDeleteObject, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &DeleteObjectInput{} + } + + output = &DeleteObjectOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteObject API operation for Amazon Simple Storage Service. +// +// Removes the null version (if there is one) of an object and inserts a delete +// marker, which becomes the latest version of the object. If there isn't a +// null version, Amazon S3 does not remove any objects. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteObject for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject +func (c *S3) DeleteObject(input *DeleteObjectInput) (*DeleteObjectOutput, error) { + req, out := c.DeleteObjectRequest(input) + return out, req.Send() +} + +// DeleteObjectWithContext is the same as DeleteObject with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteObject for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteObjectWithContext(ctx aws.Context, input *DeleteObjectInput, opts ...request.Option) (*DeleteObjectOutput, error) { + req, out := c.DeleteObjectRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteObjectTagging = "DeleteObjectTagging" + +// DeleteObjectTaggingRequest generates a "aws/request.Request" representing the +// client's request for the DeleteObjectTagging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteObjectTagging for more information on using the DeleteObjectTagging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteObjectTaggingRequest method. 
+// req, resp := client.DeleteObjectTaggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTagging +func (c *S3) DeleteObjectTaggingRequest(input *DeleteObjectTaggingInput) (req *request.Request, output *DeleteObjectTaggingOutput) { + op := &request.Operation{ + Name: opDeleteObjectTagging, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}/{Key+}?tagging", + } + + if input == nil { + input = &DeleteObjectTaggingInput{} + } + + output = &DeleteObjectTaggingOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteObjectTagging API operation for Amazon Simple Storage Service. +// +// Removes the tag-set from an existing object. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteObjectTagging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTagging +func (c *S3) DeleteObjectTagging(input *DeleteObjectTaggingInput) (*DeleteObjectTaggingOutput, error) { + req, out := c.DeleteObjectTaggingRequest(input) + return out, req.Send() +} + +// DeleteObjectTaggingWithContext is the same as DeleteObjectTagging with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteObjectTagging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteObjectTaggingWithContext(ctx aws.Context, input *DeleteObjectTaggingInput, opts ...request.Option) (*DeleteObjectTaggingOutput, error) { + req, out := c.DeleteObjectTaggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteObjects = "DeleteObjects" + +// DeleteObjectsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteObjects operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteObjects for more information on using the DeleteObjects +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteObjectsRequest method. 
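+//    // One possible shape for the params used below (a batch delete of up to
+//    // 1000 keys); the bucket and object keys are illustrative placeholders.
+//    params := &s3.DeleteObjectsInput{
+//        Bucket: aws.String("example-bucket"),
+//        Delete: &s3.Delete{
+//            Objects: []*s3.ObjectIdentifier{
+//                {Key: aws.String("example-key-1")},
+//                {Key: aws.String("example-key-2")},
+//            },
+//            Quiet: aws.Bool(true),
+//        },
+//    }
+//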
+// req, resp := client.DeleteObjectsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjects +func (c *S3) DeleteObjectsRequest(input *DeleteObjectsInput) (req *request.Request, output *DeleteObjectsOutput) { + op := &request.Operation{ + Name: opDeleteObjects, + HTTPMethod: "POST", + HTTPPath: "/{Bucket}?delete", + } + + if input == nil { + input = &DeleteObjectsInput{} + } + + output = &DeleteObjectsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteObjects API operation for Amazon Simple Storage Service. +// +// This operation enables you to delete multiple objects from a bucket using +// a single HTTP request. You may specify up to 1000 keys. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteObjects for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjects +func (c *S3) DeleteObjects(input *DeleteObjectsInput) (*DeleteObjectsOutput, error) { + req, out := c.DeleteObjectsRequest(input) + return out, req.Send() +} + +// DeleteObjectsWithContext is the same as DeleteObjects with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteObjects for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteObjectsWithContext(ctx aws.Context, input *DeleteObjectsInput, opts ...request.Option) (*DeleteObjectsOutput, error) { + req, out := c.DeleteObjectsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketAccelerateConfiguration = "GetBucketAccelerateConfiguration" + +// GetBucketAccelerateConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketAccelerateConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketAccelerateConfiguration for more information on using the GetBucketAccelerateConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketAccelerateConfigurationRequest method. 
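+//    // A minimal params sketch for the call below; "example-bucket" is a
+//    // placeholder. On success the accelerate status is available in resp.Status.
+//    params := &s3.GetBucketAccelerateConfigurationInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//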
+// req, resp := client.GetBucketAccelerateConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfiguration +func (c *S3) GetBucketAccelerateConfigurationRequest(input *GetBucketAccelerateConfigurationInput) (req *request.Request, output *GetBucketAccelerateConfigurationOutput) { + op := &request.Operation{ + Name: opGetBucketAccelerateConfiguration, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?accelerate", + } + + if input == nil { + input = &GetBucketAccelerateConfigurationInput{} + } + + output = &GetBucketAccelerateConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketAccelerateConfiguration API operation for Amazon Simple Storage Service. +// +// Returns the accelerate configuration of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketAccelerateConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfiguration +func (c *S3) GetBucketAccelerateConfiguration(input *GetBucketAccelerateConfigurationInput) (*GetBucketAccelerateConfigurationOutput, error) { + req, out := c.GetBucketAccelerateConfigurationRequest(input) + return out, req.Send() +} + +// GetBucketAccelerateConfigurationWithContext is the same as GetBucketAccelerateConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketAccelerateConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketAccelerateConfigurationWithContext(ctx aws.Context, input *GetBucketAccelerateConfigurationInput, opts ...request.Option) (*GetBucketAccelerateConfigurationOutput, error) { + req, out := c.GetBucketAccelerateConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketAcl = "GetBucketAcl" + +// GetBucketAclRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketAcl operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketAcl for more information on using the GetBucketAcl +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketAclRequest method. 
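+//    // Illustrative params for the call below ("example-bucket" is a
+//    // placeholder); the returned resp carries the bucket Owner and Grants.
+//    params := &s3.GetBucketAclInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//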
+// req, resp := client.GetBucketAclRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAcl +func (c *S3) GetBucketAclRequest(input *GetBucketAclInput) (req *request.Request, output *GetBucketAclOutput) { + op := &request.Operation{ + Name: opGetBucketAcl, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?acl", + } + + if input == nil { + input = &GetBucketAclInput{} + } + + output = &GetBucketAclOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketAcl API operation for Amazon Simple Storage Service. +// +// Gets the access control policy for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketAcl for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAcl +func (c *S3) GetBucketAcl(input *GetBucketAclInput) (*GetBucketAclOutput, error) { + req, out := c.GetBucketAclRequest(input) + return out, req.Send() +} + +// GetBucketAclWithContext is the same as GetBucketAcl with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketAcl for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketAclWithContext(ctx aws.Context, input *GetBucketAclInput, opts ...request.Option) (*GetBucketAclOutput, error) { + req, out := c.GetBucketAclRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketAnalyticsConfiguration = "GetBucketAnalyticsConfiguration" + +// GetBucketAnalyticsConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketAnalyticsConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketAnalyticsConfiguration for more information on using the GetBucketAnalyticsConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketAnalyticsConfigurationRequest method. 
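+//    // Illustrative params for the call below; both the bucket name and the
+//    // analytics configuration Id are placeholders.
+//    params := &s3.GetBucketAnalyticsConfigurationInput{
+//        Bucket: aws.String("example-bucket"),
+//        Id:     aws.String("example-analytics-id"),
+//    }
+//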
+// req, resp := client.GetBucketAnalyticsConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfiguration +func (c *S3) GetBucketAnalyticsConfigurationRequest(input *GetBucketAnalyticsConfigurationInput) (req *request.Request, output *GetBucketAnalyticsConfigurationOutput) { + op := &request.Operation{ + Name: opGetBucketAnalyticsConfiguration, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?analytics", + } + + if input == nil { + input = &GetBucketAnalyticsConfigurationInput{} + } + + output = &GetBucketAnalyticsConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketAnalyticsConfiguration API operation for Amazon Simple Storage Service. +// +// Gets an analytics configuration for the bucket (specified by the analytics +// configuration ID). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketAnalyticsConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfiguration +func (c *S3) GetBucketAnalyticsConfiguration(input *GetBucketAnalyticsConfigurationInput) (*GetBucketAnalyticsConfigurationOutput, error) { + req, out := c.GetBucketAnalyticsConfigurationRequest(input) + return out, req.Send() +} + +// GetBucketAnalyticsConfigurationWithContext is the same as GetBucketAnalyticsConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketAnalyticsConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketAnalyticsConfigurationWithContext(ctx aws.Context, input *GetBucketAnalyticsConfigurationInput, opts ...request.Option) (*GetBucketAnalyticsConfigurationOutput, error) { + req, out := c.GetBucketAnalyticsConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketCors = "GetBucketCors" + +// GetBucketCorsRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketCors operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketCors for more information on using the GetBucketCors +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketCorsRequest method. 
+// req, resp := client.GetBucketCorsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCors +func (c *S3) GetBucketCorsRequest(input *GetBucketCorsInput) (req *request.Request, output *GetBucketCorsOutput) { + op := &request.Operation{ + Name: opGetBucketCors, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?cors", + } + + if input == nil { + input = &GetBucketCorsInput{} + } + + output = &GetBucketCorsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketCors API operation for Amazon Simple Storage Service. +// +// Returns the cors configuration for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketCors for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCors +func (c *S3) GetBucketCors(input *GetBucketCorsInput) (*GetBucketCorsOutput, error) { + req, out := c.GetBucketCorsRequest(input) + return out, req.Send() +} + +// GetBucketCorsWithContext is the same as GetBucketCors with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketCors for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketCorsWithContext(ctx aws.Context, input *GetBucketCorsInput, opts ...request.Option) (*GetBucketCorsOutput, error) { + req, out := c.GetBucketCorsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketEncryption = "GetBucketEncryption" + +// GetBucketEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketEncryption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketEncryption for more information on using the GetBucketEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketEncryptionRequest method. 
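+//    // A possible params value for the call below ("example-bucket" is a
+//    // placeholder); resp.ServerSideEncryptionConfiguration holds the
+//    // configured encryption rules.
+//    params := &s3.GetBucketEncryptionInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//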
+// req, resp := client.GetBucketEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketEncryption +func (c *S3) GetBucketEncryptionRequest(input *GetBucketEncryptionInput) (req *request.Request, output *GetBucketEncryptionOutput) { + op := &request.Operation{ + Name: opGetBucketEncryption, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?encryption", + } + + if input == nil { + input = &GetBucketEncryptionInput{} + } + + output = &GetBucketEncryptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketEncryption API operation for Amazon Simple Storage Service. +// +// Returns the server-side encryption configuration of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketEncryption for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketEncryption +func (c *S3) GetBucketEncryption(input *GetBucketEncryptionInput) (*GetBucketEncryptionOutput, error) { + req, out := c.GetBucketEncryptionRequest(input) + return out, req.Send() +} + +// GetBucketEncryptionWithContext is the same as GetBucketEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketEncryptionWithContext(ctx aws.Context, input *GetBucketEncryptionInput, opts ...request.Option) (*GetBucketEncryptionOutput, error) { + req, out := c.GetBucketEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketInventoryConfiguration = "GetBucketInventoryConfiguration" + +// GetBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketInventoryConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketInventoryConfiguration for more information on using the GetBucketInventoryConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketInventoryConfigurationRequest method. 
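+//    // Illustrative params for the call below; the bucket name and inventory
+//    // configuration Id are placeholders.
+//    params := &s3.GetBucketInventoryConfigurationInput{
+//        Bucket: aws.String("example-bucket"),
+//        Id:     aws.String("example-inventory-id"),
+//    }
+//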
+// req, resp := client.GetBucketInventoryConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfiguration +func (c *S3) GetBucketInventoryConfigurationRequest(input *GetBucketInventoryConfigurationInput) (req *request.Request, output *GetBucketInventoryConfigurationOutput) { + op := &request.Operation{ + Name: opGetBucketInventoryConfiguration, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?inventory", + } + + if input == nil { + input = &GetBucketInventoryConfigurationInput{} + } + + output = &GetBucketInventoryConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketInventoryConfiguration API operation for Amazon Simple Storage Service. +// +// Returns an inventory configuration (identified by the inventory ID) from +// the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketInventoryConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfiguration +func (c *S3) GetBucketInventoryConfiguration(input *GetBucketInventoryConfigurationInput) (*GetBucketInventoryConfigurationOutput, error) { + req, out := c.GetBucketInventoryConfigurationRequest(input) + return out, req.Send() +} + +// GetBucketInventoryConfigurationWithContext is the same as GetBucketInventoryConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketInventoryConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketInventoryConfigurationWithContext(ctx aws.Context, input *GetBucketInventoryConfigurationInput, opts ...request.Option) (*GetBucketInventoryConfigurationOutput, error) { + req, out := c.GetBucketInventoryConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketLifecycle = "GetBucketLifecycle" + +// GetBucketLifecycleRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketLifecycle operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketLifecycle for more information on using the GetBucketLifecycle +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketLifecycleRequest method. 
+// req, resp := client.GetBucketLifecycleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle +func (c *S3) GetBucketLifecycleRequest(input *GetBucketLifecycleInput) (req *request.Request, output *GetBucketLifecycleOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, GetBucketLifecycle, has been deprecated") + } + op := &request.Operation{ + Name: opGetBucketLifecycle, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?lifecycle", + } + + if input == nil { + input = &GetBucketLifecycleInput{} + } + + output = &GetBucketLifecycleOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketLifecycle API operation for Amazon Simple Storage Service. +// +// Deprecated, see the GetBucketLifecycleConfiguration operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketLifecycle for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle +func (c *S3) GetBucketLifecycle(input *GetBucketLifecycleInput) (*GetBucketLifecycleOutput, error) { + req, out := c.GetBucketLifecycleRequest(input) + return out, req.Send() +} + +// GetBucketLifecycleWithContext is the same as GetBucketLifecycle with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketLifecycle for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketLifecycleWithContext(ctx aws.Context, input *GetBucketLifecycleInput, opts ...request.Option) (*GetBucketLifecycleOutput, error) { + req, out := c.GetBucketLifecycleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketLifecycleConfiguration = "GetBucketLifecycleConfiguration" + +// GetBucketLifecycleConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketLifecycleConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketLifecycleConfiguration for more information on using the GetBucketLifecycleConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketLifecycleConfigurationRequest method. 
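+//    // A minimal params sketch for the call below ("example-bucket" is a
+//    // placeholder); the configured lifecycle rules are returned in resp.Rules.
+//    params := &s3.GetBucketLifecycleConfigurationInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//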
+// req, resp := client.GetBucketLifecycleConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfiguration +func (c *S3) GetBucketLifecycleConfigurationRequest(input *GetBucketLifecycleConfigurationInput) (req *request.Request, output *GetBucketLifecycleConfigurationOutput) { + op := &request.Operation{ + Name: opGetBucketLifecycleConfiguration, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?lifecycle", + } + + if input == nil { + input = &GetBucketLifecycleConfigurationInput{} + } + + output = &GetBucketLifecycleConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketLifecycleConfiguration API operation for Amazon Simple Storage Service. +// +// Returns the lifecycle configuration information set on the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketLifecycleConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfiguration +func (c *S3) GetBucketLifecycleConfiguration(input *GetBucketLifecycleConfigurationInput) (*GetBucketLifecycleConfigurationOutput, error) { + req, out := c.GetBucketLifecycleConfigurationRequest(input) + return out, req.Send() +} + +// GetBucketLifecycleConfigurationWithContext is the same as GetBucketLifecycleConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketLifecycleConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketLifecycleConfigurationWithContext(ctx aws.Context, input *GetBucketLifecycleConfigurationInput, opts ...request.Option) (*GetBucketLifecycleConfigurationOutput, error) { + req, out := c.GetBucketLifecycleConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketLocation = "GetBucketLocation" + +// GetBucketLocationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketLocation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketLocation for more information on using the GetBucketLocation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketLocationRequest method. 
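+//    // Illustrative params for the call below ("example-bucket" is a
+//    // placeholder); the region can then be read with
+//    // aws.StringValue(resp.LocationConstraint), which is empty for us-east-1.
+//    params := &s3.GetBucketLocationInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//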
+// req, resp := client.GetBucketLocationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocation +func (c *S3) GetBucketLocationRequest(input *GetBucketLocationInput) (req *request.Request, output *GetBucketLocationOutput) { + op := &request.Operation{ + Name: opGetBucketLocation, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?location", + } + + if input == nil { + input = &GetBucketLocationInput{} + } + + output = &GetBucketLocationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketLocation API operation for Amazon Simple Storage Service. +// +// Returns the region the bucket resides in. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketLocation for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocation +func (c *S3) GetBucketLocation(input *GetBucketLocationInput) (*GetBucketLocationOutput, error) { + req, out := c.GetBucketLocationRequest(input) + return out, req.Send() +} + +// GetBucketLocationWithContext is the same as GetBucketLocation with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketLocation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketLocationWithContext(ctx aws.Context, input *GetBucketLocationInput, opts ...request.Option) (*GetBucketLocationOutput, error) { + req, out := c.GetBucketLocationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketLogging = "GetBucketLogging" + +// GetBucketLoggingRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketLogging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketLogging for more information on using the GetBucketLogging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketLoggingRequest method. 
+// req, resp := client.GetBucketLoggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLogging +func (c *S3) GetBucketLoggingRequest(input *GetBucketLoggingInput) (req *request.Request, output *GetBucketLoggingOutput) { + op := &request.Operation{ + Name: opGetBucketLogging, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?logging", + } + + if input == nil { + input = &GetBucketLoggingInput{} + } + + output = &GetBucketLoggingOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketLogging API operation for Amazon Simple Storage Service. +// +// Returns the logging status of a bucket and the permissions users have to +// view and modify that status. To use GET, you must be the bucket owner. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketLogging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLogging +func (c *S3) GetBucketLogging(input *GetBucketLoggingInput) (*GetBucketLoggingOutput, error) { + req, out := c.GetBucketLoggingRequest(input) + return out, req.Send() +} + +// GetBucketLoggingWithContext is the same as GetBucketLogging with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketLogging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketLoggingWithContext(ctx aws.Context, input *GetBucketLoggingInput, opts ...request.Option) (*GetBucketLoggingOutput, error) { + req, out := c.GetBucketLoggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketMetricsConfiguration = "GetBucketMetricsConfiguration" + +// GetBucketMetricsConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketMetricsConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketMetricsConfiguration for more information on using the GetBucketMetricsConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketMetricsConfigurationRequest method. 
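+//    // Illustrative params for the call below; the bucket name and metrics
+//    // configuration Id are placeholders.
+//    params := &s3.GetBucketMetricsConfigurationInput{
+//        Bucket: aws.String("example-bucket"),
+//        Id:     aws.String("example-metrics-id"),
+//    }
+//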
+// req, resp := client.GetBucketMetricsConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfiguration +func (c *S3) GetBucketMetricsConfigurationRequest(input *GetBucketMetricsConfigurationInput) (req *request.Request, output *GetBucketMetricsConfigurationOutput) { + op := &request.Operation{ + Name: opGetBucketMetricsConfiguration, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?metrics", + } + + if input == nil { + input = &GetBucketMetricsConfigurationInput{} + } + + output = &GetBucketMetricsConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketMetricsConfiguration API operation for Amazon Simple Storage Service. +// +// Gets a metrics configuration (specified by the metrics configuration ID) +// from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketMetricsConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfiguration +func (c *S3) GetBucketMetricsConfiguration(input *GetBucketMetricsConfigurationInput) (*GetBucketMetricsConfigurationOutput, error) { + req, out := c.GetBucketMetricsConfigurationRequest(input) + return out, req.Send() +} + +// GetBucketMetricsConfigurationWithContext is the same as GetBucketMetricsConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketMetricsConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketMetricsConfigurationWithContext(ctx aws.Context, input *GetBucketMetricsConfigurationInput, opts ...request.Option) (*GetBucketMetricsConfigurationOutput, error) { + req, out := c.GetBucketMetricsConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketNotification = "GetBucketNotification" + +// GetBucketNotificationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketNotification operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketNotification for more information on using the GetBucketNotification +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketNotificationRequest method. 
+// req, resp := client.GetBucketNotificationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification +func (c *S3) GetBucketNotificationRequest(input *GetBucketNotificationConfigurationRequest) (req *request.Request, output *NotificationConfigurationDeprecated) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, GetBucketNotification, has been deprecated") + } + op := &request.Operation{ + Name: opGetBucketNotification, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?notification", + } + + if input == nil { + input = &GetBucketNotificationConfigurationRequest{} + } + + output = &NotificationConfigurationDeprecated{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketNotification API operation for Amazon Simple Storage Service. +// +// Deprecated, see the GetBucketNotificationConfiguration operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketNotification for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification +func (c *S3) GetBucketNotification(input *GetBucketNotificationConfigurationRequest) (*NotificationConfigurationDeprecated, error) { + req, out := c.GetBucketNotificationRequest(input) + return out, req.Send() +} + +// GetBucketNotificationWithContext is the same as GetBucketNotification with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketNotification for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketNotificationWithContext(ctx aws.Context, input *GetBucketNotificationConfigurationRequest, opts ...request.Option) (*NotificationConfigurationDeprecated, error) { + req, out := c.GetBucketNotificationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketNotificationConfiguration = "GetBucketNotificationConfiguration" + +// GetBucketNotificationConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketNotificationConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketNotificationConfiguration for more information on using the GetBucketNotificationConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketNotificationConfigurationRequest method. 
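+//    // A minimal params sketch for the call below; note that the input type
+//    // is GetBucketNotificationConfigurationRequest, and "example-bucket" is
+//    // a placeholder.
+//    params := &s3.GetBucketNotificationConfigurationRequest{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//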
+// req, resp := client.GetBucketNotificationConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfiguration +func (c *S3) GetBucketNotificationConfigurationRequest(input *GetBucketNotificationConfigurationRequest) (req *request.Request, output *NotificationConfiguration) { + op := &request.Operation{ + Name: opGetBucketNotificationConfiguration, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?notification", + } + + if input == nil { + input = &GetBucketNotificationConfigurationRequest{} + } + + output = &NotificationConfiguration{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketNotificationConfiguration API operation for Amazon Simple Storage Service. +// +// Returns the notification configuration of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketNotificationConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfiguration +func (c *S3) GetBucketNotificationConfiguration(input *GetBucketNotificationConfigurationRequest) (*NotificationConfiguration, error) { + req, out := c.GetBucketNotificationConfigurationRequest(input) + return out, req.Send() +} + +// GetBucketNotificationConfigurationWithContext is the same as GetBucketNotificationConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketNotificationConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketNotificationConfigurationWithContext(ctx aws.Context, input *GetBucketNotificationConfigurationRequest, opts ...request.Option) (*NotificationConfiguration, error) { + req, out := c.GetBucketNotificationConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketPolicy = "GetBucketPolicy" + +// GetBucketPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketPolicy for more information on using the GetBucketPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketPolicyRequest method. 
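+//    // Illustrative params for the call below ("example-bucket" is a
+//    // placeholder); the bucket policy JSON is returned in resp.Policy.
+//    params := &s3.GetBucketPolicyInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//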
+// req, resp := client.GetBucketPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicy +func (c *S3) GetBucketPolicyRequest(input *GetBucketPolicyInput) (req *request.Request, output *GetBucketPolicyOutput) { + op := &request.Operation{ + Name: opGetBucketPolicy, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?policy", + } + + if input == nil { + input = &GetBucketPolicyInput{} + } + + output = &GetBucketPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketPolicy API operation for Amazon Simple Storage Service. +// +// Returns the policy of a specified bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketPolicy for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicy +func (c *S3) GetBucketPolicy(input *GetBucketPolicyInput) (*GetBucketPolicyOutput, error) { + req, out := c.GetBucketPolicyRequest(input) + return out, req.Send() +} + +// GetBucketPolicyWithContext is the same as GetBucketPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketPolicyWithContext(ctx aws.Context, input *GetBucketPolicyInput, opts ...request.Option) (*GetBucketPolicyOutput, error) { + req, out := c.GetBucketPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketReplication = "GetBucketReplication" + +// GetBucketReplicationRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketReplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketReplication for more information on using the GetBucketReplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketReplicationRequest method. 
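+//    // A possible params value for the call below ("example-bucket" is a
+//    // placeholder). On failure, err can be inspected as an awserr.Error to
+//    // read its Code() and Message().
+//    params := &s3.GetBucketReplicationInput{
+//        Bucket: aws.String("example-bucket"),
+//    }
+//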
+// req, resp := client.GetBucketReplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplication +func (c *S3) GetBucketReplicationRequest(input *GetBucketReplicationInput) (req *request.Request, output *GetBucketReplicationOutput) { + op := &request.Operation{ + Name: opGetBucketReplication, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?replication", + } + + if input == nil { + input = &GetBucketReplicationInput{} + } + + output = &GetBucketReplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketReplication API operation for Amazon Simple Storage Service. +// +// Returns the replication configuration of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketReplication for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplication +func (c *S3) GetBucketReplication(input *GetBucketReplicationInput) (*GetBucketReplicationOutput, error) { + req, out := c.GetBucketReplicationRequest(input) + return out, req.Send() +} + +// GetBucketReplicationWithContext is the same as GetBucketReplication with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketReplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketReplicationWithContext(ctx aws.Context, input *GetBucketReplicationInput, opts ...request.Option) (*GetBucketReplicationOutput, error) { + req, out := c.GetBucketReplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketRequestPayment = "GetBucketRequestPayment" + +// GetBucketRequestPaymentRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketRequestPayment operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketRequestPayment for more information on using the GetBucketRequestPayment +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketRequestPaymentRequest method. 
+// req, resp := client.GetBucketRequestPaymentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPayment +func (c *S3) GetBucketRequestPaymentRequest(input *GetBucketRequestPaymentInput) (req *request.Request, output *GetBucketRequestPaymentOutput) { + op := &request.Operation{ + Name: opGetBucketRequestPayment, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?requestPayment", + } + + if input == nil { + input = &GetBucketRequestPaymentInput{} + } + + output = &GetBucketRequestPaymentOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketRequestPayment API operation for Amazon Simple Storage Service. +// +// Returns the request payment configuration of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketRequestPayment for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPayment +func (c *S3) GetBucketRequestPayment(input *GetBucketRequestPaymentInput) (*GetBucketRequestPaymentOutput, error) { + req, out := c.GetBucketRequestPaymentRequest(input) + return out, req.Send() +} + +// GetBucketRequestPaymentWithContext is the same as GetBucketRequestPayment with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketRequestPayment for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketRequestPaymentWithContext(ctx aws.Context, input *GetBucketRequestPaymentInput, opts ...request.Option) (*GetBucketRequestPaymentOutput, error) { + req, out := c.GetBucketRequestPaymentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketTagging = "GetBucketTagging" + +// GetBucketTaggingRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketTagging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketTagging for more information on using the GetBucketTagging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketTaggingRequest method. 
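+//    // For illustration, "params" below is assumed to have been built
+//    // beforehand; the bucket name is hypothetical:
+//    //   params := &s3.GetBucketTaggingInput{Bucket: aws.String("my-bucket")}
+//    // On success, resp.TagSet holds the bucket's tags.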
+// req, resp := client.GetBucketTaggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTagging +func (c *S3) GetBucketTaggingRequest(input *GetBucketTaggingInput) (req *request.Request, output *GetBucketTaggingOutput) { + op := &request.Operation{ + Name: opGetBucketTagging, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?tagging", + } + + if input == nil { + input = &GetBucketTaggingInput{} + } + + output = &GetBucketTaggingOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketTagging API operation for Amazon Simple Storage Service. +// +// Returns the tag set associated with the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketTagging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTagging +func (c *S3) GetBucketTagging(input *GetBucketTaggingInput) (*GetBucketTaggingOutput, error) { + req, out := c.GetBucketTaggingRequest(input) + return out, req.Send() +} + +// GetBucketTaggingWithContext is the same as GetBucketTagging with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketTagging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketTaggingWithContext(ctx aws.Context, input *GetBucketTaggingInput, opts ...request.Option) (*GetBucketTaggingOutput, error) { + req, out := c.GetBucketTaggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketVersioning = "GetBucketVersioning" + +// GetBucketVersioningRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketVersioning operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketVersioning for more information on using the GetBucketVersioning +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketVersioningRequest method. 
+// req, resp := client.GetBucketVersioningRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioning +func (c *S3) GetBucketVersioningRequest(input *GetBucketVersioningInput) (req *request.Request, output *GetBucketVersioningOutput) { + op := &request.Operation{ + Name: opGetBucketVersioning, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?versioning", + } + + if input == nil { + input = &GetBucketVersioningInput{} + } + + output = &GetBucketVersioningOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketVersioning API operation for Amazon Simple Storage Service. +// +// Returns the versioning state of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketVersioning for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioning +func (c *S3) GetBucketVersioning(input *GetBucketVersioningInput) (*GetBucketVersioningOutput, error) { + req, out := c.GetBucketVersioningRequest(input) + return out, req.Send() +} + +// GetBucketVersioningWithContext is the same as GetBucketVersioning with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketVersioning for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketVersioningWithContext(ctx aws.Context, input *GetBucketVersioningInput, opts ...request.Option) (*GetBucketVersioningOutput, error) { + req, out := c.GetBucketVersioningRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBucketWebsite = "GetBucketWebsite" + +// GetBucketWebsiteRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketWebsite operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketWebsite for more information on using the GetBucketWebsite +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketWebsiteRequest method. 
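+//    // For illustration, "params" below is assumed to be a previously built
+//    // *s3.GetBucketWebsiteInput. Custom configuration can be applied to the
+//    // returned request before sending, for example (header name and value
+//    // are hypothetical):
+//    //   req.HTTPRequest.Header.Set("X-Trace-Id", "abc123")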
+// req, resp := client.GetBucketWebsiteRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsite +func (c *S3) GetBucketWebsiteRequest(input *GetBucketWebsiteInput) (req *request.Request, output *GetBucketWebsiteOutput) { + op := &request.Operation{ + Name: opGetBucketWebsite, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?website", + } + + if input == nil { + input = &GetBucketWebsiteInput{} + } + + output = &GetBucketWebsiteOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketWebsite API operation for Amazon Simple Storage Service. +// +// Returns the website configuration for a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketWebsite for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsite +func (c *S3) GetBucketWebsite(input *GetBucketWebsiteInput) (*GetBucketWebsiteOutput, error) { + req, out := c.GetBucketWebsiteRequest(input) + return out, req.Send() +} + +// GetBucketWebsiteWithContext is the same as GetBucketWebsite with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketWebsite for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketWebsiteWithContext(ctx aws.Context, input *GetBucketWebsiteInput, opts ...request.Option) (*GetBucketWebsiteOutput, error) { + req, out := c.GetBucketWebsiteRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetObject = "GetObject" + +// GetObjectRequest generates a "aws/request.Request" representing the +// client's request for the GetObject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetObject for more information on using the GetObject +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetObjectRequest method. +// req, resp := client.GetObjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject +func (c *S3) GetObjectRequest(input *GetObjectInput) (req *request.Request, output *GetObjectOutput) { + op := &request.Operation{ + Name: opGetObject, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &GetObjectInput{} + } + + output = &GetObjectOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetObject API operation for Amazon Simple Storage Service. 
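+//
+// For illustration only (client value, bucket and key names are hypothetical),
+// a minimal call sketch; the returned Body is a stream the caller should Close:
+//
+//    out, err := svc.GetObject(&s3.GetObjectInput{
+//        Bucket: aws.String("my-bucket"),
+//        Key:    aws.String("my-key"),
+//    })
+//    if err == nil {
+//        defer out.Body.Close()
+//        io.Copy(os.Stdout, out.Body) // assumes the io and os packages are imported
+//    }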
+// +// Retrieves objects from Amazon S3. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetObject for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchKey "NoSuchKey" +// The specified key does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject +func (c *S3) GetObject(input *GetObjectInput) (*GetObjectOutput, error) { + req, out := c.GetObjectRequest(input) + return out, req.Send() +} + +// GetObjectWithContext is the same as GetObject with the addition of +// the ability to pass a context and additional request options. +// +// See GetObject for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetObjectWithContext(ctx aws.Context, input *GetObjectInput, opts ...request.Option) (*GetObjectOutput, error) { + req, out := c.GetObjectRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetObjectAcl = "GetObjectAcl" + +// GetObjectAclRequest generates a "aws/request.Request" representing the +// client's request for the GetObjectAcl operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetObjectAcl for more information on using the GetObjectAcl +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetObjectAclRequest method. +// req, resp := client.GetObjectAclRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAcl +func (c *S3) GetObjectAclRequest(input *GetObjectAclInput) (req *request.Request, output *GetObjectAclOutput) { + op := &request.Operation{ + Name: opGetObjectAcl, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}/{Key+}?acl", + } + + if input == nil { + input = &GetObjectAclInput{} + } + + output = &GetObjectAclOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetObjectAcl API operation for Amazon Simple Storage Service. +// +// Returns the access control list (ACL) of an object. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetObjectAcl for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchKey "NoSuchKey" +// The specified key does not exist. 
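+//
+// For illustration (client value, bucket and key names are hypothetical), the
+// error code above can be checked with a runtime type assertion:
+//
+//    out, err := svc.GetObjectAcl(&s3.GetObjectAclInput{
+//        Bucket: aws.String("my-bucket"),
+//        Key:    aws.String("my-key"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == s3.ErrCodeNoSuchKey {
+//        fmt.Println("no such key:", aerr.Message())
+//    } else if err == nil {
+//        fmt.Println(out.Grants)
+//    }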
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAcl +func (c *S3) GetObjectAcl(input *GetObjectAclInput) (*GetObjectAclOutput, error) { + req, out := c.GetObjectAclRequest(input) + return out, req.Send() +} + +// GetObjectAclWithContext is the same as GetObjectAcl with the addition of +// the ability to pass a context and additional request options. +// +// See GetObjectAcl for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetObjectAclWithContext(ctx aws.Context, input *GetObjectAclInput, opts ...request.Option) (*GetObjectAclOutput, error) { + req, out := c.GetObjectAclRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetObjectTagging = "GetObjectTagging" + +// GetObjectTaggingRequest generates a "aws/request.Request" representing the +// client's request for the GetObjectTagging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetObjectTagging for more information on using the GetObjectTagging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetObjectTaggingRequest method. +// req, resp := client.GetObjectTaggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTagging +func (c *S3) GetObjectTaggingRequest(input *GetObjectTaggingInput) (req *request.Request, output *GetObjectTaggingOutput) { + op := &request.Operation{ + Name: opGetObjectTagging, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}/{Key+}?tagging", + } + + if input == nil { + input = &GetObjectTaggingInput{} + } + + output = &GetObjectTaggingOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetObjectTagging API operation for Amazon Simple Storage Service. +// +// Returns the tag-set of an object. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetObjectTagging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTagging +func (c *S3) GetObjectTagging(input *GetObjectTaggingInput) (*GetObjectTaggingOutput, error) { + req, out := c.GetObjectTaggingRequest(input) + return out, req.Send() +} + +// GetObjectTaggingWithContext is the same as GetObjectTagging with the addition of +// the ability to pass a context and additional request options. +// +// See GetObjectTagging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetObjectTaggingWithContext(ctx aws.Context, input *GetObjectTaggingInput, opts ...request.Option) (*GetObjectTaggingOutput, error) { + req, out := c.GetObjectTaggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetObjectTorrent = "GetObjectTorrent" + +// GetObjectTorrentRequest generates a "aws/request.Request" representing the +// client's request for the GetObjectTorrent operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetObjectTorrent for more information on using the GetObjectTorrent +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetObjectTorrentRequest method. +// req, resp := client.GetObjectTorrentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrent +func (c *S3) GetObjectTorrentRequest(input *GetObjectTorrentInput) (req *request.Request, output *GetObjectTorrentOutput) { + op := &request.Operation{ + Name: opGetObjectTorrent, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}/{Key+}?torrent", + } + + if input == nil { + input = &GetObjectTorrentInput{} + } + + output = &GetObjectTorrentOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetObjectTorrent API operation for Amazon Simple Storage Service. +// +// Return torrent files from a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetObjectTorrent for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrent +func (c *S3) GetObjectTorrent(input *GetObjectTorrentInput) (*GetObjectTorrentOutput, error) { + req, out := c.GetObjectTorrentRequest(input) + return out, req.Send() +} + +// GetObjectTorrentWithContext is the same as GetObjectTorrent with the addition of +// the ability to pass a context and additional request options. +// +// See GetObjectTorrent for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetObjectTorrentWithContext(ctx aws.Context, input *GetObjectTorrentInput, opts ...request.Option) (*GetObjectTorrentOutput, error) { + req, out := c.GetObjectTorrentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opHeadBucket = "HeadBucket" + +// HeadBucketRequest generates a "aws/request.Request" representing the +// client's request for the HeadBucket operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See HeadBucket for more information on using the HeadBucket +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the HeadBucketRequest method. +// req, resp := client.HeadBucketRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucket +func (c *S3) HeadBucketRequest(input *HeadBucketInput) (req *request.Request, output *HeadBucketOutput) { + op := &request.Operation{ + Name: opHeadBucket, + HTTPMethod: "HEAD", + HTTPPath: "/{Bucket}", + } + + if input == nil { + input = &HeadBucketInput{} + } + + output = &HeadBucketOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// HeadBucket API operation for Amazon Simple Storage Service. +// +// This operation is useful to determine if a bucket exists and you have permission +// to access it. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation HeadBucket for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchBucket "NoSuchBucket" +// The specified bucket does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucket +func (c *S3) HeadBucket(input *HeadBucketInput) (*HeadBucketOutput, error) { + req, out := c.HeadBucketRequest(input) + return out, req.Send() +} + +// HeadBucketWithContext is the same as HeadBucket with the addition of +// the ability to pass a context and additional request options. +// +// See HeadBucket for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) HeadBucketWithContext(ctx aws.Context, input *HeadBucketInput, opts ...request.Option) (*HeadBucketOutput, error) { + req, out := c.HeadBucketRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opHeadObject = "HeadObject" + +// HeadObjectRequest generates a "aws/request.Request" representing the +// client's request for the HeadObject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See HeadObject for more information on using the HeadObject +// API call, and error handling. 
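+//
+// For illustration (client value, bucket and key names are hypothetical), a
+// metadata lookup sketch:
+//
+//    out, err := svc.HeadObject(&s3.HeadObjectInput{
+//        Bucket: aws.String("my-bucket"),
+//        Key:    aws.String("my-key"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.Int64Value(out.ContentLength), aws.TimeValue(out.LastModified))
+//    }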
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the HeadObjectRequest method. +// req, resp := client.HeadObjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObject +func (c *S3) HeadObjectRequest(input *HeadObjectInput) (req *request.Request, output *HeadObjectOutput) { + op := &request.Operation{ + Name: opHeadObject, + HTTPMethod: "HEAD", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &HeadObjectInput{} + } + + output = &HeadObjectOutput{} + req = c.newRequest(op, input, output) + return +} + +// HeadObject API operation for Amazon Simple Storage Service. +// +// The HEAD operation retrieves metadata from an object without returning the +// object itself. This operation is useful if you're only interested in an object's +// metadata. To use HEAD, you must have READ access to the object. +// +// See http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#RESTErrorResponses +// for more information on returned errors. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation HeadObject for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObject +func (c *S3) HeadObject(input *HeadObjectInput) (*HeadObjectOutput, error) { + req, out := c.HeadObjectRequest(input) + return out, req.Send() +} + +// HeadObjectWithContext is the same as HeadObject with the addition of +// the ability to pass a context and additional request options. +// +// See HeadObject for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) HeadObjectWithContext(ctx aws.Context, input *HeadObjectInput, opts ...request.Option) (*HeadObjectOutput, error) { + req, out := c.HeadObjectRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListBucketAnalyticsConfigurations = "ListBucketAnalyticsConfigurations" + +// ListBucketAnalyticsConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the ListBucketAnalyticsConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListBucketAnalyticsConfigurations for more information on using the ListBucketAnalyticsConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListBucketAnalyticsConfigurationsRequest method. 
+// req, resp := client.ListBucketAnalyticsConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurations +func (c *S3) ListBucketAnalyticsConfigurationsRequest(input *ListBucketAnalyticsConfigurationsInput) (req *request.Request, output *ListBucketAnalyticsConfigurationsOutput) { + op := &request.Operation{ + Name: opListBucketAnalyticsConfigurations, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?analytics", + } + + if input == nil { + input = &ListBucketAnalyticsConfigurationsInput{} + } + + output = &ListBucketAnalyticsConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListBucketAnalyticsConfigurations API operation for Amazon Simple Storage Service. +// +// Lists the analytics configurations for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListBucketAnalyticsConfigurations for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurations +func (c *S3) ListBucketAnalyticsConfigurations(input *ListBucketAnalyticsConfigurationsInput) (*ListBucketAnalyticsConfigurationsOutput, error) { + req, out := c.ListBucketAnalyticsConfigurationsRequest(input) + return out, req.Send() +} + +// ListBucketAnalyticsConfigurationsWithContext is the same as ListBucketAnalyticsConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See ListBucketAnalyticsConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListBucketAnalyticsConfigurationsWithContext(ctx aws.Context, input *ListBucketAnalyticsConfigurationsInput, opts ...request.Option) (*ListBucketAnalyticsConfigurationsOutput, error) { + req, out := c.ListBucketAnalyticsConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListBucketInventoryConfigurations = "ListBucketInventoryConfigurations" + +// ListBucketInventoryConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the ListBucketInventoryConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListBucketInventoryConfigurations for more information on using the ListBucketInventoryConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListBucketInventoryConfigurationsRequest method. 
+// req, resp := client.ListBucketInventoryConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurations +func (c *S3) ListBucketInventoryConfigurationsRequest(input *ListBucketInventoryConfigurationsInput) (req *request.Request, output *ListBucketInventoryConfigurationsOutput) { + op := &request.Operation{ + Name: opListBucketInventoryConfigurations, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?inventory", + } + + if input == nil { + input = &ListBucketInventoryConfigurationsInput{} + } + + output = &ListBucketInventoryConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListBucketInventoryConfigurations API operation for Amazon Simple Storage Service. +// +// Returns a list of inventory configurations for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListBucketInventoryConfigurations for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurations +func (c *S3) ListBucketInventoryConfigurations(input *ListBucketInventoryConfigurationsInput) (*ListBucketInventoryConfigurationsOutput, error) { + req, out := c.ListBucketInventoryConfigurationsRequest(input) + return out, req.Send() +} + +// ListBucketInventoryConfigurationsWithContext is the same as ListBucketInventoryConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See ListBucketInventoryConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListBucketInventoryConfigurationsWithContext(ctx aws.Context, input *ListBucketInventoryConfigurationsInput, opts ...request.Option) (*ListBucketInventoryConfigurationsOutput, error) { + req, out := c.ListBucketInventoryConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListBucketMetricsConfigurations = "ListBucketMetricsConfigurations" + +// ListBucketMetricsConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the ListBucketMetricsConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListBucketMetricsConfigurations for more information on using the ListBucketMetricsConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListBucketMetricsConfigurationsRequest method. 
+// req, resp := client.ListBucketMetricsConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurations +func (c *S3) ListBucketMetricsConfigurationsRequest(input *ListBucketMetricsConfigurationsInput) (req *request.Request, output *ListBucketMetricsConfigurationsOutput) { + op := &request.Operation{ + Name: opListBucketMetricsConfigurations, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?metrics", + } + + if input == nil { + input = &ListBucketMetricsConfigurationsInput{} + } + + output = &ListBucketMetricsConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListBucketMetricsConfigurations API operation for Amazon Simple Storage Service. +// +// Lists the metrics configurations for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListBucketMetricsConfigurations for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurations +func (c *S3) ListBucketMetricsConfigurations(input *ListBucketMetricsConfigurationsInput) (*ListBucketMetricsConfigurationsOutput, error) { + req, out := c.ListBucketMetricsConfigurationsRequest(input) + return out, req.Send() +} + +// ListBucketMetricsConfigurationsWithContext is the same as ListBucketMetricsConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See ListBucketMetricsConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListBucketMetricsConfigurationsWithContext(ctx aws.Context, input *ListBucketMetricsConfigurationsInput, opts ...request.Option) (*ListBucketMetricsConfigurationsOutput, error) { + req, out := c.ListBucketMetricsConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListBuckets = "ListBuckets" + +// ListBucketsRequest generates a "aws/request.Request" representing the +// client's request for the ListBuckets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListBuckets for more information on using the ListBuckets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListBucketsRequest method. 
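+//    // For illustration, ListBuckets has no required input fields, so "params"
+//    // can simply be an empty value:
+//    //   params := &s3.ListBucketsInput{}
+//    // Each entry in resp.Buckets then carries a Name and CreationDate.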
+// req, resp := client.ListBucketsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets +func (c *S3) ListBucketsRequest(input *ListBucketsInput) (req *request.Request, output *ListBucketsOutput) { + op := &request.Operation{ + Name: opListBuckets, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ListBucketsInput{} + } + + output = &ListBucketsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListBuckets API operation for Amazon Simple Storage Service. +// +// Returns a list of all buckets owned by the authenticated sender of the request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListBuckets for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets +func (c *S3) ListBuckets(input *ListBucketsInput) (*ListBucketsOutput, error) { + req, out := c.ListBucketsRequest(input) + return out, req.Send() +} + +// ListBucketsWithContext is the same as ListBuckets with the addition of +// the ability to pass a context and additional request options. +// +// See ListBuckets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListBucketsWithContext(ctx aws.Context, input *ListBucketsInput, opts ...request.Option) (*ListBucketsOutput, error) { + req, out := c.ListBucketsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListMultipartUploads = "ListMultipartUploads" + +// ListMultipartUploadsRequest generates a "aws/request.Request" representing the +// client's request for the ListMultipartUploads operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListMultipartUploads for more information on using the ListMultipartUploads +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListMultipartUploadsRequest method. 
+// req, resp := client.ListMultipartUploadsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploads +func (c *S3) ListMultipartUploadsRequest(input *ListMultipartUploadsInput) (req *request.Request, output *ListMultipartUploadsOutput) { + op := &request.Operation{ + Name: opListMultipartUploads, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?uploads", + Paginator: &request.Paginator{ + InputTokens: []string{"KeyMarker", "UploadIdMarker"}, + OutputTokens: []string{"NextKeyMarker", "NextUploadIdMarker"}, + LimitToken: "MaxUploads", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListMultipartUploadsInput{} + } + + output = &ListMultipartUploadsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListMultipartUploads API operation for Amazon Simple Storage Service. +// +// This operation lists in-progress multipart uploads. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListMultipartUploads for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploads +func (c *S3) ListMultipartUploads(input *ListMultipartUploadsInput) (*ListMultipartUploadsOutput, error) { + req, out := c.ListMultipartUploadsRequest(input) + return out, req.Send() +} + +// ListMultipartUploadsWithContext is the same as ListMultipartUploads with the addition of +// the ability to pass a context and additional request options. +// +// See ListMultipartUploads for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListMultipartUploadsWithContext(ctx aws.Context, input *ListMultipartUploadsInput, opts ...request.Option) (*ListMultipartUploadsOutput, error) { + req, out := c.ListMultipartUploadsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListMultipartUploadsPages iterates over the pages of a ListMultipartUploads operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListMultipartUploads method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListMultipartUploads operation. +// pageNum := 0 +// err := client.ListMultipartUploadsPages(params, +// func(page *ListMultipartUploadsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *S3) ListMultipartUploadsPages(input *ListMultipartUploadsInput, fn func(*ListMultipartUploadsOutput, bool) bool) error { + return c.ListMultipartUploadsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListMultipartUploadsPagesWithContext same as ListMultipartUploadsPages except +// it takes a Context and allows setting request options on the pages. 
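+//
+// For illustration (client value and bucket name are hypothetical), a bounded
+// iteration sketch using a standard library context:
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    err := svc.ListMultipartUploadsPagesWithContext(ctx,
+//        &s3.ListMultipartUploadsInput{Bucket: aws.String("my-bucket")},
+//        func(page *s3.ListMultipartUploadsOutput, lastPage bool) bool {
+//            fmt.Println(len(page.Uploads), "uploads on this page")
+//            return true // keep paging
+//        })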
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListMultipartUploadsPagesWithContext(ctx aws.Context, input *ListMultipartUploadsInput, fn func(*ListMultipartUploadsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListMultipartUploadsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListMultipartUploadsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListMultipartUploadsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListObjectVersions = "ListObjectVersions" + +// ListObjectVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListObjectVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListObjectVersions for more information on using the ListObjectVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListObjectVersionsRequest method. +// req, resp := client.ListObjectVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersions +func (c *S3) ListObjectVersionsRequest(input *ListObjectVersionsInput) (req *request.Request, output *ListObjectVersionsOutput) { + op := &request.Operation{ + Name: opListObjectVersions, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?versions", + Paginator: &request.Paginator{ + InputTokens: []string{"KeyMarker", "VersionIdMarker"}, + OutputTokens: []string{"NextKeyMarker", "NextVersionIdMarker"}, + LimitToken: "MaxKeys", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListObjectVersionsInput{} + } + + output = &ListObjectVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListObjectVersions API operation for Amazon Simple Storage Service. +// +// Returns metadata about all of the versions of objects in a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListObjectVersions for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersions +func (c *S3) ListObjectVersions(input *ListObjectVersionsInput) (*ListObjectVersionsOutput, error) { + req, out := c.ListObjectVersionsRequest(input) + return out, req.Send() +} + +// ListObjectVersionsWithContext is the same as ListObjectVersions with the addition of +// the ability to pass a context and additional request options. 
+// +// See ListObjectVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListObjectVersionsWithContext(ctx aws.Context, input *ListObjectVersionsInput, opts ...request.Option) (*ListObjectVersionsOutput, error) { + req, out := c.ListObjectVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListObjectVersionsPages iterates over the pages of a ListObjectVersions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListObjectVersions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListObjectVersions operation. +// pageNum := 0 +// err := client.ListObjectVersionsPages(params, +// func(page *ListObjectVersionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *S3) ListObjectVersionsPages(input *ListObjectVersionsInput, fn func(*ListObjectVersionsOutput, bool) bool) error { + return c.ListObjectVersionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListObjectVersionsPagesWithContext same as ListObjectVersionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListObjectVersionsPagesWithContext(ctx aws.Context, input *ListObjectVersionsInput, fn func(*ListObjectVersionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListObjectVersionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListObjectVersionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListObjectVersionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListObjects = "ListObjects" + +// ListObjectsRequest generates a "aws/request.Request" representing the +// client's request for the ListObjects operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListObjects for more information on using the ListObjects +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListObjectsRequest method. 
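+//    // For illustration, "params" below is assumed to have been built
+//    // beforehand; bucket name and prefix are hypothetical:
+//    //   params := &s3.ListObjectsInput{
+//    //       Bucket:  aws.String("my-bucket"),
+//    //       Prefix:  aws.String("logs/"),
+//    //       MaxKeys: aws.Int64(100),
+//    //   }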
+// req, resp := client.ListObjectsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects +func (c *S3) ListObjectsRequest(input *ListObjectsInput) (req *request.Request, output *ListObjectsOutput) { + op := &request.Operation{ + Name: opListObjects, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"NextMarker || Contents[-1].Key"}, + LimitToken: "MaxKeys", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListObjectsInput{} + } + + output = &ListObjectsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListObjects API operation for Amazon Simple Storage Service. +// +// Returns some or all (up to 1000) of the objects in a bucket. You can use +// the request parameters as selection criteria to return a subset of the objects +// in a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListObjects for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchBucket "NoSuchBucket" +// The specified bucket does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects +func (c *S3) ListObjects(input *ListObjectsInput) (*ListObjectsOutput, error) { + req, out := c.ListObjectsRequest(input) + return out, req.Send() +} + +// ListObjectsWithContext is the same as ListObjects with the addition of +// the ability to pass a context and additional request options. +// +// See ListObjects for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListObjectsWithContext(ctx aws.Context, input *ListObjectsInput, opts ...request.Option) (*ListObjectsOutput, error) { + req, out := c.ListObjectsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListObjectsPages iterates over the pages of a ListObjects operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListObjects method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListObjects operation. +// pageNum := 0 +// err := client.ListObjectsPages(params, +// func(page *ListObjectsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *S3) ListObjectsPages(input *ListObjectsInput, fn func(*ListObjectsOutput, bool) bool) error { + return c.ListObjectsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListObjectsPagesWithContext same as ListObjectsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListObjectsPagesWithContext(ctx aws.Context, input *ListObjectsInput, fn func(*ListObjectsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListObjectsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListObjectsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListObjectsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListObjectsV2 = "ListObjectsV2" + +// ListObjectsV2Request generates a "aws/request.Request" representing the +// client's request for the ListObjectsV2 operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListObjectsV2 for more information on using the ListObjectsV2 +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListObjectsV2Request method. +// req, resp := client.ListObjectsV2Request(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2 +func (c *S3) ListObjectsV2Request(input *ListObjectsV2Input) (req *request.Request, output *ListObjectsV2Output) { + op := &request.Operation{ + Name: opListObjectsV2, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?list-type=2", + Paginator: &request.Paginator{ + InputTokens: []string{"ContinuationToken"}, + OutputTokens: []string{"NextContinuationToken"}, + LimitToken: "MaxKeys", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListObjectsV2Input{} + } + + output = &ListObjectsV2Output{} + req = c.newRequest(op, input, output) + return +} + +// ListObjectsV2 API operation for Amazon Simple Storage Service. +// +// Returns some or all (up to 1000) of the objects in a bucket. You can use +// the request parameters as selection criteria to return a subset of the objects +// in a bucket. Note: ListObjectsV2 is the revised List Objects API and we recommend +// you use this revised API for new application development. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListObjectsV2 for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchBucket "NoSuchBucket" +// The specified bucket does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2 +func (c *S3) ListObjectsV2(input *ListObjectsV2Input) (*ListObjectsV2Output, error) { + req, out := c.ListObjectsV2Request(input) + return out, req.Send() +} + +// ListObjectsV2WithContext is the same as ListObjectsV2 with the addition of +// the ability to pass a context and additional request options. 
+// +// See ListObjectsV2 for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListObjectsV2WithContext(ctx aws.Context, input *ListObjectsV2Input, opts ...request.Option) (*ListObjectsV2Output, error) { + req, out := c.ListObjectsV2Request(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListObjectsV2Pages iterates over the pages of a ListObjectsV2 operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListObjectsV2 method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListObjectsV2 operation. +// pageNum := 0 +// err := client.ListObjectsV2Pages(params, +// func(page *ListObjectsV2Output, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *S3) ListObjectsV2Pages(input *ListObjectsV2Input, fn func(*ListObjectsV2Output, bool) bool) error { + return c.ListObjectsV2PagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListObjectsV2PagesWithContext same as ListObjectsV2Pages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListObjectsV2PagesWithContext(ctx aws.Context, input *ListObjectsV2Input, fn func(*ListObjectsV2Output, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListObjectsV2Input + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListObjectsV2Request(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListObjectsV2Output), !p.HasNextPage()) + } + return p.Err() +} + +const opListParts = "ListParts" + +// ListPartsRequest generates a "aws/request.Request" representing the +// client's request for the ListParts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListParts for more information on using the ListParts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListPartsRequest method. 
+// req, resp := client.ListPartsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListParts +func (c *S3) ListPartsRequest(input *ListPartsInput) (req *request.Request, output *ListPartsOutput) { + op := &request.Operation{ + Name: opListParts, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}/{Key+}", + Paginator: &request.Paginator{ + InputTokens: []string{"PartNumberMarker"}, + OutputTokens: []string{"NextPartNumberMarker"}, + LimitToken: "MaxParts", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListPartsInput{} + } + + output = &ListPartsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListParts API operation for Amazon Simple Storage Service. +// +// Lists the parts that have been uploaded for a specific multipart upload. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation ListParts for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListParts +func (c *S3) ListParts(input *ListPartsInput) (*ListPartsOutput, error) { + req, out := c.ListPartsRequest(input) + return out, req.Send() +} + +// ListPartsWithContext is the same as ListParts with the addition of +// the ability to pass a context and additional request options. +// +// See ListParts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) ListPartsWithContext(ctx aws.Context, input *ListPartsInput, opts ...request.Option) (*ListPartsOutput, error) { + req, out := c.ListPartsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListPartsPages iterates over the pages of a ListParts operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListParts method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListParts operation. +// pageNum := 0 +// err := client.ListPartsPages(params, +// func(page *ListPartsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *S3) ListPartsPages(input *ListPartsInput, fn func(*ListPartsOutput, bool) bool) error { + return c.ListPartsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListPartsPagesWithContext same as ListPartsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) ListPartsPagesWithContext(ctx aws.Context, input *ListPartsInput, fn func(*ListPartsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListPartsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListPartsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListPartsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opPutBucketAccelerateConfiguration = "PutBucketAccelerateConfiguration" + +// PutBucketAccelerateConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketAccelerateConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketAccelerateConfiguration for more information on using the PutBucketAccelerateConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketAccelerateConfigurationRequest method. +// req, resp := client.PutBucketAccelerateConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfiguration +func (c *S3) PutBucketAccelerateConfigurationRequest(input *PutBucketAccelerateConfigurationInput) (req *request.Request, output *PutBucketAccelerateConfigurationOutput) { + op := &request.Operation{ + Name: opPutBucketAccelerateConfiguration, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?accelerate", + } + + if input == nil { + input = &PutBucketAccelerateConfigurationInput{} + } + + output = &PutBucketAccelerateConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketAccelerateConfiguration API operation for Amazon Simple Storage Service. +// +// Sets the accelerate configuration of an existing bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketAccelerateConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfiguration +func (c *S3) PutBucketAccelerateConfiguration(input *PutBucketAccelerateConfigurationInput) (*PutBucketAccelerateConfigurationOutput, error) { + req, out := c.PutBucketAccelerateConfigurationRequest(input) + return out, req.Send() +} + +// PutBucketAccelerateConfigurationWithContext is the same as PutBucketAccelerateConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketAccelerateConfiguration for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketAccelerateConfigurationWithContext(ctx aws.Context, input *PutBucketAccelerateConfigurationInput, opts ...request.Option) (*PutBucketAccelerateConfigurationOutput, error) { + req, out := c.PutBucketAccelerateConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketAcl = "PutBucketAcl" + +// PutBucketAclRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketAcl operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketAcl for more information on using the PutBucketAcl +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketAclRequest method. +// req, resp := client.PutBucketAclRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAcl +func (c *S3) PutBucketAclRequest(input *PutBucketAclInput) (req *request.Request, output *PutBucketAclOutput) { + op := &request.Operation{ + Name: opPutBucketAcl, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?acl", + } + + if input == nil { + input = &PutBucketAclInput{} + } + + output = &PutBucketAclOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketAcl API operation for Amazon Simple Storage Service. +// +// Sets the permissions on a bucket using access control lists (ACL). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketAcl for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAcl +func (c *S3) PutBucketAcl(input *PutBucketAclInput) (*PutBucketAclOutput, error) { + req, out := c.PutBucketAclRequest(input) + return out, req.Send() +} + +// PutBucketAclWithContext is the same as PutBucketAcl with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketAcl for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) PutBucketAclWithContext(ctx aws.Context, input *PutBucketAclInput, opts ...request.Option) (*PutBucketAclOutput, error) { + req, out := c.PutBucketAclRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketAnalyticsConfiguration = "PutBucketAnalyticsConfiguration" + +// PutBucketAnalyticsConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketAnalyticsConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketAnalyticsConfiguration for more information on using the PutBucketAnalyticsConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketAnalyticsConfigurationRequest method. +// req, resp := client.PutBucketAnalyticsConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfiguration +func (c *S3) PutBucketAnalyticsConfigurationRequest(input *PutBucketAnalyticsConfigurationInput) (req *request.Request, output *PutBucketAnalyticsConfigurationOutput) { + op := &request.Operation{ + Name: opPutBucketAnalyticsConfiguration, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?analytics", + } + + if input == nil { + input = &PutBucketAnalyticsConfigurationInput{} + } + + output = &PutBucketAnalyticsConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketAnalyticsConfiguration API operation for Amazon Simple Storage Service. +// +// Sets an analytics configuration for the bucket (specified by the analytics +// configuration ID). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketAnalyticsConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfiguration +func (c *S3) PutBucketAnalyticsConfiguration(input *PutBucketAnalyticsConfigurationInput) (*PutBucketAnalyticsConfigurationOutput, error) { + req, out := c.PutBucketAnalyticsConfigurationRequest(input) + return out, req.Send() +} + +// PutBucketAnalyticsConfigurationWithContext is the same as PutBucketAnalyticsConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketAnalyticsConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) PutBucketAnalyticsConfigurationWithContext(ctx aws.Context, input *PutBucketAnalyticsConfigurationInput, opts ...request.Option) (*PutBucketAnalyticsConfigurationOutput, error) { + req, out := c.PutBucketAnalyticsConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketCors = "PutBucketCors" + +// PutBucketCorsRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketCors operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketCors for more information on using the PutBucketCors +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketCorsRequest method. +// req, resp := client.PutBucketCorsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCors +func (c *S3) PutBucketCorsRequest(input *PutBucketCorsInput) (req *request.Request, output *PutBucketCorsOutput) { + op := &request.Operation{ + Name: opPutBucketCors, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?cors", + } + + if input == nil { + input = &PutBucketCorsInput{} + } + + output = &PutBucketCorsOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketCors API operation for Amazon Simple Storage Service. +// +// Sets the cors configuration for a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketCors for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCors +func (c *S3) PutBucketCors(input *PutBucketCorsInput) (*PutBucketCorsOutput, error) { + req, out := c.PutBucketCorsRequest(input) + return out, req.Send() +} + +// PutBucketCorsWithContext is the same as PutBucketCors with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketCors for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketCorsWithContext(ctx aws.Context, input *PutBucketCorsInput, opts ...request.Option) (*PutBucketCorsOutput, error) { + req, out := c.PutBucketCorsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketEncryption = "PutBucketEncryption" + +// PutBucketEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketEncryption operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketEncryption for more information on using the PutBucketEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketEncryptionRequest method. +// req, resp := client.PutBucketEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketEncryption +func (c *S3) PutBucketEncryptionRequest(input *PutBucketEncryptionInput) (req *request.Request, output *PutBucketEncryptionOutput) { + op := &request.Operation{ + Name: opPutBucketEncryption, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?encryption", + } + + if input == nil { + input = &PutBucketEncryptionInput{} + } + + output = &PutBucketEncryptionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketEncryption API operation for Amazon Simple Storage Service. +// +// Creates a new server-side encryption configuration (or replaces an existing +// one, if present). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketEncryption for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketEncryption +func (c *S3) PutBucketEncryption(input *PutBucketEncryptionInput) (*PutBucketEncryptionOutput, error) { + req, out := c.PutBucketEncryptionRequest(input) + return out, req.Send() +} + +// PutBucketEncryptionWithContext is the same as PutBucketEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketEncryptionWithContext(ctx aws.Context, input *PutBucketEncryptionInput, opts ...request.Option) (*PutBucketEncryptionOutput, error) { + req, out := c.PutBucketEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketInventoryConfiguration = "PutBucketInventoryConfiguration" + +// PutBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketInventoryConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketInventoryConfiguration for more information on using the PutBucketInventoryConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketInventoryConfigurationRequest method. +// req, resp := client.PutBucketInventoryConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfiguration +func (c *S3) PutBucketInventoryConfigurationRequest(input *PutBucketInventoryConfigurationInput) (req *request.Request, output *PutBucketInventoryConfigurationOutput) { + op := &request.Operation{ + Name: opPutBucketInventoryConfiguration, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?inventory", + } + + if input == nil { + input = &PutBucketInventoryConfigurationInput{} + } + + output = &PutBucketInventoryConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketInventoryConfiguration API operation for Amazon Simple Storage Service. +// +// Adds an inventory configuration (identified by the inventory ID) from the +// bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketInventoryConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfiguration +func (c *S3) PutBucketInventoryConfiguration(input *PutBucketInventoryConfigurationInput) (*PutBucketInventoryConfigurationOutput, error) { + req, out := c.PutBucketInventoryConfigurationRequest(input) + return out, req.Send() +} + +// PutBucketInventoryConfigurationWithContext is the same as PutBucketInventoryConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketInventoryConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketInventoryConfigurationWithContext(ctx aws.Context, input *PutBucketInventoryConfigurationInput, opts ...request.Option) (*PutBucketInventoryConfigurationOutput, error) { + req, out := c.PutBucketInventoryConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketLifecycle = "PutBucketLifecycle" + +// PutBucketLifecycleRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketLifecycle operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketLifecycle for more information on using the PutBucketLifecycle +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketLifecycleRequest method. +// req, resp := client.PutBucketLifecycleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle +func (c *S3) PutBucketLifecycleRequest(input *PutBucketLifecycleInput) (req *request.Request, output *PutBucketLifecycleOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, PutBucketLifecycle, has been deprecated") + } + op := &request.Operation{ + Name: opPutBucketLifecycle, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?lifecycle", + } + + if input == nil { + input = &PutBucketLifecycleInput{} + } + + output = &PutBucketLifecycleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketLifecycle API operation for Amazon Simple Storage Service. +// +// Deprecated, see the PutBucketLifecycleConfiguration operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketLifecycle for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle +func (c *S3) PutBucketLifecycle(input *PutBucketLifecycleInput) (*PutBucketLifecycleOutput, error) { + req, out := c.PutBucketLifecycleRequest(input) + return out, req.Send() +} + +// PutBucketLifecycleWithContext is the same as PutBucketLifecycle with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketLifecycle for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketLifecycleWithContext(ctx aws.Context, input *PutBucketLifecycleInput, opts ...request.Option) (*PutBucketLifecycleOutput, error) { + req, out := c.PutBucketLifecycleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketLifecycleConfiguration = "PutBucketLifecycleConfiguration" + +// PutBucketLifecycleConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketLifecycleConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See PutBucketLifecycleConfiguration for more information on using the PutBucketLifecycleConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketLifecycleConfigurationRequest method. +// req, resp := client.PutBucketLifecycleConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfiguration +func (c *S3) PutBucketLifecycleConfigurationRequest(input *PutBucketLifecycleConfigurationInput) (req *request.Request, output *PutBucketLifecycleConfigurationOutput) { + op := &request.Operation{ + Name: opPutBucketLifecycleConfiguration, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?lifecycle", + } + + if input == nil { + input = &PutBucketLifecycleConfigurationInput{} + } + + output = &PutBucketLifecycleConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketLifecycleConfiguration API operation for Amazon Simple Storage Service. +// +// Sets lifecycle configuration for your bucket. If a lifecycle configuration +// exists, it replaces it. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketLifecycleConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfiguration +func (c *S3) PutBucketLifecycleConfiguration(input *PutBucketLifecycleConfigurationInput) (*PutBucketLifecycleConfigurationOutput, error) { + req, out := c.PutBucketLifecycleConfigurationRequest(input) + return out, req.Send() +} + +// PutBucketLifecycleConfigurationWithContext is the same as PutBucketLifecycleConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketLifecycleConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketLifecycleConfigurationWithContext(ctx aws.Context, input *PutBucketLifecycleConfigurationInput, opts ...request.Option) (*PutBucketLifecycleConfigurationOutput, error) { + req, out := c.PutBucketLifecycleConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketLogging = "PutBucketLogging" + +// PutBucketLoggingRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketLogging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See PutBucketLogging for more information on using the PutBucketLogging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketLoggingRequest method. +// req, resp := client.PutBucketLoggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLogging +func (c *S3) PutBucketLoggingRequest(input *PutBucketLoggingInput) (req *request.Request, output *PutBucketLoggingOutput) { + op := &request.Operation{ + Name: opPutBucketLogging, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?logging", + } + + if input == nil { + input = &PutBucketLoggingInput{} + } + + output = &PutBucketLoggingOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketLogging API operation for Amazon Simple Storage Service. +// +// Set the logging parameters for a bucket and to specify permissions for who +// can view and modify the logging parameters. To set the logging status of +// a bucket, you must be the bucket owner. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketLogging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLogging +func (c *S3) PutBucketLogging(input *PutBucketLoggingInput) (*PutBucketLoggingOutput, error) { + req, out := c.PutBucketLoggingRequest(input) + return out, req.Send() +} + +// PutBucketLoggingWithContext is the same as PutBucketLogging with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketLogging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketLoggingWithContext(ctx aws.Context, input *PutBucketLoggingInput, opts ...request.Option) (*PutBucketLoggingOutput, error) { + req, out := c.PutBucketLoggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketMetricsConfiguration = "PutBucketMetricsConfiguration" + +// PutBucketMetricsConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketMetricsConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketMetricsConfiguration for more information on using the PutBucketMetricsConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketMetricsConfigurationRequest method. +// req, resp := client.PutBucketMetricsConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfiguration +func (c *S3) PutBucketMetricsConfigurationRequest(input *PutBucketMetricsConfigurationInput) (req *request.Request, output *PutBucketMetricsConfigurationOutput) { + op := &request.Operation{ + Name: opPutBucketMetricsConfiguration, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?metrics", + } + + if input == nil { + input = &PutBucketMetricsConfigurationInput{} + } + + output = &PutBucketMetricsConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketMetricsConfiguration API operation for Amazon Simple Storage Service. +// +// Sets a metrics configuration (specified by the metrics configuration ID) +// for the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketMetricsConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfiguration +func (c *S3) PutBucketMetricsConfiguration(input *PutBucketMetricsConfigurationInput) (*PutBucketMetricsConfigurationOutput, error) { + req, out := c.PutBucketMetricsConfigurationRequest(input) + return out, req.Send() +} + +// PutBucketMetricsConfigurationWithContext is the same as PutBucketMetricsConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketMetricsConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketMetricsConfigurationWithContext(ctx aws.Context, input *PutBucketMetricsConfigurationInput, opts ...request.Option) (*PutBucketMetricsConfigurationOutput, error) { + req, out := c.PutBucketMetricsConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketNotification = "PutBucketNotification" + +// PutBucketNotificationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketNotification operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketNotification for more information on using the PutBucketNotification +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the PutBucketNotificationRequest method. +// req, resp := client.PutBucketNotificationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification +func (c *S3) PutBucketNotificationRequest(input *PutBucketNotificationInput) (req *request.Request, output *PutBucketNotificationOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, PutBucketNotification, has been deprecated") + } + op := &request.Operation{ + Name: opPutBucketNotification, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?notification", + } + + if input == nil { + input = &PutBucketNotificationInput{} + } + + output = &PutBucketNotificationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketNotification API operation for Amazon Simple Storage Service. +// +// Deprecated, see the PutBucketNotificationConfiguraiton operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketNotification for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification +func (c *S3) PutBucketNotification(input *PutBucketNotificationInput) (*PutBucketNotificationOutput, error) { + req, out := c.PutBucketNotificationRequest(input) + return out, req.Send() +} + +// PutBucketNotificationWithContext is the same as PutBucketNotification with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketNotification for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketNotificationWithContext(ctx aws.Context, input *PutBucketNotificationInput, opts ...request.Option) (*PutBucketNotificationOutput, error) { + req, out := c.PutBucketNotificationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketNotificationConfiguration = "PutBucketNotificationConfiguration" + +// PutBucketNotificationConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketNotificationConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketNotificationConfiguration for more information on using the PutBucketNotificationConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the PutBucketNotificationConfigurationRequest method. +// req, resp := client.PutBucketNotificationConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfiguration +func (c *S3) PutBucketNotificationConfigurationRequest(input *PutBucketNotificationConfigurationInput) (req *request.Request, output *PutBucketNotificationConfigurationOutput) { + op := &request.Operation{ + Name: opPutBucketNotificationConfiguration, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?notification", + } + + if input == nil { + input = &PutBucketNotificationConfigurationInput{} + } + + output = &PutBucketNotificationConfigurationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketNotificationConfiguration API operation for Amazon Simple Storage Service. +// +// Enables notifications of specified events for a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketNotificationConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfiguration +func (c *S3) PutBucketNotificationConfiguration(input *PutBucketNotificationConfigurationInput) (*PutBucketNotificationConfigurationOutput, error) { + req, out := c.PutBucketNotificationConfigurationRequest(input) + return out, req.Send() +} + +// PutBucketNotificationConfigurationWithContext is the same as PutBucketNotificationConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketNotificationConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketNotificationConfigurationWithContext(ctx aws.Context, input *PutBucketNotificationConfigurationInput, opts ...request.Option) (*PutBucketNotificationConfigurationOutput, error) { + req, out := c.PutBucketNotificationConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketPolicy = "PutBucketPolicy" + +// PutBucketPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketPolicy for more information on using the PutBucketPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the PutBucketPolicyRequest method. +// req, resp := client.PutBucketPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicy +func (c *S3) PutBucketPolicyRequest(input *PutBucketPolicyInput) (req *request.Request, output *PutBucketPolicyOutput) { + op := &request.Operation{ + Name: opPutBucketPolicy, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?policy", + } + + if input == nil { + input = &PutBucketPolicyInput{} + } + + output = &PutBucketPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketPolicy API operation for Amazon Simple Storage Service. +// +// Replaces a policy on a bucket. If the bucket already has a policy, the one +// in this request completely replaces it. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketPolicy for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicy +func (c *S3) PutBucketPolicy(input *PutBucketPolicyInput) (*PutBucketPolicyOutput, error) { + req, out := c.PutBucketPolicyRequest(input) + return out, req.Send() +} + +// PutBucketPolicyWithContext is the same as PutBucketPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketPolicyWithContext(ctx aws.Context, input *PutBucketPolicyInput, opts ...request.Option) (*PutBucketPolicyOutput, error) { + req, out := c.PutBucketPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketReplication = "PutBucketReplication" + +// PutBucketReplicationRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketReplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketReplication for more information on using the PutBucketReplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketReplicationRequest method. 
+// req, resp := client.PutBucketReplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplication +func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req *request.Request, output *PutBucketReplicationOutput) { + op := &request.Operation{ + Name: opPutBucketReplication, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?replication", + } + + if input == nil { + input = &PutBucketReplicationInput{} + } + + output = &PutBucketReplicationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketReplication API operation for Amazon Simple Storage Service. +// +// Creates a new replication configuration (or replaces an existing one, if +// present). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketReplication for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplication +func (c *S3) PutBucketReplication(input *PutBucketReplicationInput) (*PutBucketReplicationOutput, error) { + req, out := c.PutBucketReplicationRequest(input) + return out, req.Send() +} + +// PutBucketReplicationWithContext is the same as PutBucketReplication with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketReplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketReplicationWithContext(ctx aws.Context, input *PutBucketReplicationInput, opts ...request.Option) (*PutBucketReplicationOutput, error) { + req, out := c.PutBucketReplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketRequestPayment = "PutBucketRequestPayment" + +// PutBucketRequestPaymentRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketRequestPayment operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketRequestPayment for more information on using the PutBucketRequestPayment +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketRequestPaymentRequest method. 
+// req, resp := client.PutBucketRequestPaymentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPayment +func (c *S3) PutBucketRequestPaymentRequest(input *PutBucketRequestPaymentInput) (req *request.Request, output *PutBucketRequestPaymentOutput) { + op := &request.Operation{ + Name: opPutBucketRequestPayment, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?requestPayment", + } + + if input == nil { + input = &PutBucketRequestPaymentInput{} + } + + output = &PutBucketRequestPaymentOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketRequestPayment API operation for Amazon Simple Storage Service. +// +// Sets the request payment configuration for a bucket. By default, the bucket +// owner pays for downloads from the bucket. This configuration parameter enables +// the bucket owner (only) to specify that the person requesting the download +// will be charged for the download. Documentation on requester pays buckets +// can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketRequestPayment for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPayment +func (c *S3) PutBucketRequestPayment(input *PutBucketRequestPaymentInput) (*PutBucketRequestPaymentOutput, error) { + req, out := c.PutBucketRequestPaymentRequest(input) + return out, req.Send() +} + +// PutBucketRequestPaymentWithContext is the same as PutBucketRequestPayment with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketRequestPayment for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketRequestPaymentWithContext(ctx aws.Context, input *PutBucketRequestPaymentInput, opts ...request.Option) (*PutBucketRequestPaymentOutput, error) { + req, out := c.PutBucketRequestPaymentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketTagging = "PutBucketTagging" + +// PutBucketTaggingRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketTagging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketTagging for more information on using the PutBucketTagging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the PutBucketTaggingRequest method. +// req, resp := client.PutBucketTaggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTagging +func (c *S3) PutBucketTaggingRequest(input *PutBucketTaggingInput) (req *request.Request, output *PutBucketTaggingOutput) { + op := &request.Operation{ + Name: opPutBucketTagging, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?tagging", + } + + if input == nil { + input = &PutBucketTaggingInput{} + } + + output = &PutBucketTaggingOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketTagging API operation for Amazon Simple Storage Service. +// +// Sets the tags for a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketTagging for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTagging +func (c *S3) PutBucketTagging(input *PutBucketTaggingInput) (*PutBucketTaggingOutput, error) { + req, out := c.PutBucketTaggingRequest(input) + return out, req.Send() +} + +// PutBucketTaggingWithContext is the same as PutBucketTagging with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketTagging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketTaggingWithContext(ctx aws.Context, input *PutBucketTaggingInput, opts ...request.Option) (*PutBucketTaggingOutput, error) { + req, out := c.PutBucketTaggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketVersioning = "PutBucketVersioning" + +// PutBucketVersioningRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketVersioning operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketVersioning for more information on using the PutBucketVersioning +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketVersioningRequest method. 
+// req, resp := client.PutBucketVersioningRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioning +func (c *S3) PutBucketVersioningRequest(input *PutBucketVersioningInput) (req *request.Request, output *PutBucketVersioningOutput) { + op := &request.Operation{ + Name: opPutBucketVersioning, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?versioning", + } + + if input == nil { + input = &PutBucketVersioningInput{} + } + + output = &PutBucketVersioningOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketVersioning API operation for Amazon Simple Storage Service. +// +// Sets the versioning state of an existing bucket. To set the versioning state, +// you must be the bucket owner. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketVersioning for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioning +func (c *S3) PutBucketVersioning(input *PutBucketVersioningInput) (*PutBucketVersioningOutput, error) { + req, out := c.PutBucketVersioningRequest(input) + return out, req.Send() +} + +// PutBucketVersioningWithContext is the same as PutBucketVersioning with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketVersioning for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketVersioningWithContext(ctx aws.Context, input *PutBucketVersioningInput, opts ...request.Option) (*PutBucketVersioningOutput, error) { + req, out := c.PutBucketVersioningRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutBucketWebsite = "PutBucketWebsite" + +// PutBucketWebsiteRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketWebsite operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketWebsite for more information on using the PutBucketWebsite +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketWebsiteRequest method. 
+// req, resp := client.PutBucketWebsiteRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsite +func (c *S3) PutBucketWebsiteRequest(input *PutBucketWebsiteInput) (req *request.Request, output *PutBucketWebsiteOutput) { + op := &request.Operation{ + Name: opPutBucketWebsite, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?website", + } + + if input == nil { + input = &PutBucketWebsiteInput{} + } + + output = &PutBucketWebsiteOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketWebsite API operation for Amazon Simple Storage Service. +// +// Set the website configuration for a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketWebsite for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsite +func (c *S3) PutBucketWebsite(input *PutBucketWebsiteInput) (*PutBucketWebsiteOutput, error) { + req, out := c.PutBucketWebsiteRequest(input) + return out, req.Send() +} + +// PutBucketWebsiteWithContext is the same as PutBucketWebsite with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketWebsite for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketWebsiteWithContext(ctx aws.Context, input *PutBucketWebsiteInput, opts ...request.Option) (*PutBucketWebsiteOutput, error) { + req, out := c.PutBucketWebsiteRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutObject = "PutObject" + +// PutObjectRequest generates a "aws/request.Request" representing the +// client's request for the PutObject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutObject for more information on using the PutObject +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutObjectRequest method. 
+// req, resp := client.PutObjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject +func (c *S3) PutObjectRequest(input *PutObjectInput) (req *request.Request, output *PutObjectOutput) { + op := &request.Operation{ + Name: opPutObject, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &PutObjectInput{} + } + + output = &PutObjectOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutObject API operation for Amazon Simple Storage Service. +// +// Adds an object to a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutObject for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject +func (c *S3) PutObject(input *PutObjectInput) (*PutObjectOutput, error) { + req, out := c.PutObjectRequest(input) + return out, req.Send() +} + +// PutObjectWithContext is the same as PutObject with the addition of +// the ability to pass a context and additional request options. +// +// See PutObject for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutObjectWithContext(ctx aws.Context, input *PutObjectInput, opts ...request.Option) (*PutObjectOutput, error) { + req, out := c.PutObjectRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutObjectAcl = "PutObjectAcl" + +// PutObjectAclRequest generates a "aws/request.Request" representing the +// client's request for the PutObjectAcl operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutObjectAcl for more information on using the PutObjectAcl +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutObjectAclRequest method. +// req, resp := client.PutObjectAclRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAcl +func (c *S3) PutObjectAclRequest(input *PutObjectAclInput) (req *request.Request, output *PutObjectAclOutput) { + op := &request.Operation{ + Name: opPutObjectAcl, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}?acl", + } + + if input == nil { + input = &PutObjectAclInput{} + } + + output = &PutObjectAclOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutObjectAcl API operation for Amazon Simple Storage Service. 
+// +// uses the acl subresource to set the access control list (ACL) permissions +// for an object that already exists in a bucket +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutObjectAcl for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchKey "NoSuchKey" +// The specified key does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAcl +func (c *S3) PutObjectAcl(input *PutObjectAclInput) (*PutObjectAclOutput, error) { + req, out := c.PutObjectAclRequest(input) + return out, req.Send() +} + +// PutObjectAclWithContext is the same as PutObjectAcl with the addition of +// the ability to pass a context and additional request options. +// +// See PutObjectAcl for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutObjectAclWithContext(ctx aws.Context, input *PutObjectAclInput, opts ...request.Option) (*PutObjectAclOutput, error) { + req, out := c.PutObjectAclRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutObjectTagging = "PutObjectTagging" + +// PutObjectTaggingRequest generates a "aws/request.Request" representing the +// client's request for the PutObjectTagging operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutObjectTagging for more information on using the PutObjectTagging +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutObjectTaggingRequest method. +// req, resp := client.PutObjectTaggingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTagging +func (c *S3) PutObjectTaggingRequest(input *PutObjectTaggingInput) (req *request.Request, output *PutObjectTaggingOutput) { + op := &request.Operation{ + Name: opPutObjectTagging, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}?tagging", + } + + if input == nil { + input = &PutObjectTaggingInput{} + } + + output = &PutObjectTaggingOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutObjectTagging API operation for Amazon Simple Storage Service. +// +// Sets the supplied tag-set to an object that already exists in a bucket +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutObjectTagging for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTagging +func (c *S3) PutObjectTagging(input *PutObjectTaggingInput) (*PutObjectTaggingOutput, error) { + req, out := c.PutObjectTaggingRequest(input) + return out, req.Send() +} + +// PutObjectTaggingWithContext is the same as PutObjectTagging with the addition of +// the ability to pass a context and additional request options. +// +// See PutObjectTagging for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutObjectTaggingWithContext(ctx aws.Context, input *PutObjectTaggingInput, opts ...request.Option) (*PutObjectTaggingOutput, error) { + req, out := c.PutObjectTaggingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRestoreObject = "RestoreObject" + +// RestoreObjectRequest generates a "aws/request.Request" representing the +// client's request for the RestoreObject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreObject for more information on using the RestoreObject +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreObjectRequest method. +// req, resp := client.RestoreObjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObject +func (c *S3) RestoreObjectRequest(input *RestoreObjectInput) (req *request.Request, output *RestoreObjectOutput) { + op := &request.Operation{ + Name: opRestoreObject, + HTTPMethod: "POST", + HTTPPath: "/{Bucket}/{Key+}?restore", + } + + if input == nil { + input = &RestoreObjectInput{} + } + + output = &RestoreObjectOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreObject API operation for Amazon Simple Storage Service. +// +// Restores an archived copy of an object back into Amazon S3 +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation RestoreObject for usage and error information. +// +// Returned Error Codes: +// * ErrCodeObjectAlreadyInActiveTierError "ObjectAlreadyInActiveTierError" +// This operation is not allowed against this storage tier +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObject +func (c *S3) RestoreObject(input *RestoreObjectInput) (*RestoreObjectOutput, error) { + req, out := c.RestoreObjectRequest(input) + return out, req.Send() +} + +// RestoreObjectWithContext is the same as RestoreObject with the addition of +// the ability to pass a context and additional request options. 
+// +// See RestoreObject for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) RestoreObjectWithContext(ctx aws.Context, input *RestoreObjectInput, opts ...request.Option) (*RestoreObjectOutput, error) { + req, out := c.RestoreObjectRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadPart = "UploadPart" + +// UploadPartRequest generates a "aws/request.Request" representing the +// client's request for the UploadPart operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadPart for more information on using the UploadPart +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadPartRequest method. +// req, resp := client.UploadPartRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart +func (c *S3) UploadPartRequest(input *UploadPartInput) (req *request.Request, output *UploadPartOutput) { + op := &request.Operation{ + Name: opUploadPart, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &UploadPartInput{} + } + + output = &UploadPartOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadPart API operation for Amazon Simple Storage Service. +// +// Uploads a part in a multipart upload. +// +// Note: After you initiate multipart upload and upload one or more parts, you +// must either complete or abort multipart upload in order to stop getting charged +// for storage of the uploaded parts. Only after you either complete or abort +// multipart upload, Amazon S3 frees up the parts storage and stops charging +// you for the parts storage. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation UploadPart for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart +func (c *S3) UploadPart(input *UploadPartInput) (*UploadPartOutput, error) { + req, out := c.UploadPartRequest(input) + return out, req.Send() +} + +// UploadPartWithContext is the same as UploadPart with the addition of +// the ability to pass a context and additional request options. +// +// See UploadPart for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) UploadPartWithContext(ctx aws.Context, input *UploadPartInput, opts ...request.Option) (*UploadPartOutput, error) { + req, out := c.UploadPartRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadPartCopy = "UploadPartCopy" + +// UploadPartCopyRequest generates a "aws/request.Request" representing the +// client's request for the UploadPartCopy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadPartCopy for more information on using the UploadPartCopy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadPartCopyRequest method. +// req, resp := client.UploadPartCopyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopy +func (c *S3) UploadPartCopyRequest(input *UploadPartCopyInput) (req *request.Request, output *UploadPartCopyOutput) { + op := &request.Operation{ + Name: opUploadPartCopy, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &UploadPartCopyInput{} + } + + output = &UploadPartCopyOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadPartCopy API operation for Amazon Simple Storage Service. +// +// Uploads a part by copying data from an existing object as data source. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation UploadPartCopy for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopy +func (c *S3) UploadPartCopy(input *UploadPartCopyInput) (*UploadPartCopyOutput, error) { + req, out := c.UploadPartCopyRequest(input) + return out, req.Send() +} + +// UploadPartCopyWithContext is the same as UploadPartCopy with the addition of +// the ability to pass a context and additional request options. +// +// See UploadPartCopy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) UploadPartCopyWithContext(ctx aws.Context, input *UploadPartCopyInput, opts ...request.Option) (*UploadPartCopyOutput, error) { + req, out := c.UploadPartCopyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Specifies the days since the initiation of an Incomplete Multipart Upload +// that Lifecycle will wait before permanently removing all parts of the upload. +type AbortIncompleteMultipartUpload struct { + _ struct{} `type:"structure"` + + // Indicates the number of days that must pass since initiation for Lifecycle + // to abort an Incomplete Multipart Upload. 
+ DaysAfterInitiation *int64 `type:"integer"` +} + +// String returns the string representation +func (s AbortIncompleteMultipartUpload) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AbortIncompleteMultipartUpload) GoString() string { + return s.String() +} + +// SetDaysAfterInitiation sets the DaysAfterInitiation field's value. +func (s *AbortIncompleteMultipartUpload) SetDaysAfterInitiation(v int64) *AbortIncompleteMultipartUpload { + s.DaysAfterInitiation = &v + return s +} + +type AbortMultipartUploadInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // UploadId is a required field + UploadId *string `location:"querystring" locationName:"uploadId" type:"string" required:"true"` +} + +// String returns the string representation +func (s AbortMultipartUploadInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AbortMultipartUploadInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AbortMultipartUploadInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AbortMultipartUploadInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.UploadId == nil { + invalidParams.Add(request.NewErrParamRequired("UploadId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *AbortMultipartUploadInput) SetBucket(v string) *AbortMultipartUploadInput { + s.Bucket = &v + return s +} + +func (s *AbortMultipartUploadInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *AbortMultipartUploadInput) SetKey(v string) *AbortMultipartUploadInput { + s.Key = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *AbortMultipartUploadInput) SetRequestPayer(v string) *AbortMultipartUploadInput { + s.RequestPayer = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *AbortMultipartUploadInput) SetUploadId(v string) *AbortMultipartUploadInput { + s.UploadId = &v + return s +} + +type AbortMultipartUploadOutput struct { + _ struct{} `type:"structure"` + + // If present, indicates that the requester was successfully charged for the + // request. 
+ RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` +} + +// String returns the string representation +func (s AbortMultipartUploadOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AbortMultipartUploadOutput) GoString() string { + return s.String() +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *AbortMultipartUploadOutput) SetRequestCharged(v string) *AbortMultipartUploadOutput { + s.RequestCharged = &v + return s +} + +type AccelerateConfiguration struct { + _ struct{} `type:"structure"` + + // The accelerate configuration of the bucket. + Status *string `type:"string" enum:"BucketAccelerateStatus"` +} + +// String returns the string representation +func (s AccelerateConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccelerateConfiguration) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *AccelerateConfiguration) SetStatus(v string) *AccelerateConfiguration { + s.Status = &v + return s +} + +type AccessControlPolicy struct { + _ struct{} `type:"structure"` + + // A list of grants. + Grants []*Grant `locationName:"AccessControlList" locationNameList:"Grant" type:"list"` + + Owner *Owner `type:"structure"` +} + +// String returns the string representation +func (s AccessControlPolicy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessControlPolicy) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AccessControlPolicy) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AccessControlPolicy"} + if s.Grants != nil { + for i, v := range s.Grants { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Grants", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGrants sets the Grants field's value. +func (s *AccessControlPolicy) SetGrants(v []*Grant) *AccessControlPolicy { + s.Grants = v + return s +} + +// SetOwner sets the Owner field's value. +func (s *AccessControlPolicy) SetOwner(v *Owner) *AccessControlPolicy { + s.Owner = v + return s +} + +// Container for information regarding the access control for replicas. +type AccessControlTranslation struct { + _ struct{} `type:"structure"` + + // The override value for the owner of the replica object. + // + // Owner is a required field + Owner *string `type:"string" required:"true" enum:"OwnerOverride"` +} + +// String returns the string representation +func (s AccessControlTranslation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessControlTranslation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AccessControlTranslation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AccessControlTranslation"} + if s.Owner == nil { + invalidParams.Add(request.NewErrParamRequired("Owner")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOwner sets the Owner field's value. 
+func (s *AccessControlTranslation) SetOwner(v string) *AccessControlTranslation { + s.Owner = &v + return s +} + +type AnalyticsAndOperator struct { + _ struct{} `type:"structure"` + + // The prefix to use when evaluating an AND predicate. + Prefix *string `type:"string"` + + // The list of tags to use when evaluating an AND predicate. + Tags []*Tag `locationName:"Tag" locationNameList:"Tag" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s AnalyticsAndOperator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AnalyticsAndOperator) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AnalyticsAndOperator) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AnalyticsAndOperator"} + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPrefix sets the Prefix field's value. +func (s *AnalyticsAndOperator) SetPrefix(v string) *AnalyticsAndOperator { + s.Prefix = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *AnalyticsAndOperator) SetTags(v []*Tag) *AnalyticsAndOperator { + s.Tags = v + return s +} + +type AnalyticsConfiguration struct { + _ struct{} `type:"structure"` + + // The filter used to describe a set of objects for analyses. A filter must + // have exactly one prefix, one tag, or one conjunction (AnalyticsAndOperator). + // If no filter is provided, all objects will be considered in any analysis. + Filter *AnalyticsFilter `type:"structure"` + + // The identifier used to represent an analytics configuration. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // If present, it indicates that data related to access patterns will be collected + // and made available to analyze the tradeoffs between different storage classes. + // + // StorageClassAnalysis is a required field + StorageClassAnalysis *StorageClassAnalysis `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AnalyticsConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AnalyticsConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AnalyticsConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AnalyticsConfiguration"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.StorageClassAnalysis == nil { + invalidParams.Add(request.NewErrParamRequired("StorageClassAnalysis")) + } + if s.Filter != nil { + if err := s.Filter.Validate(); err != nil { + invalidParams.AddNested("Filter", err.(request.ErrInvalidParams)) + } + } + if s.StorageClassAnalysis != nil { + if err := s.StorageClassAnalysis.Validate(); err != nil { + invalidParams.AddNested("StorageClassAnalysis", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilter sets the Filter field's value. 
+func (s *AnalyticsConfiguration) SetFilter(v *AnalyticsFilter) *AnalyticsConfiguration { + s.Filter = v + return s +} + +// SetId sets the Id field's value. +func (s *AnalyticsConfiguration) SetId(v string) *AnalyticsConfiguration { + s.Id = &v + return s +} + +// SetStorageClassAnalysis sets the StorageClassAnalysis field's value. +func (s *AnalyticsConfiguration) SetStorageClassAnalysis(v *StorageClassAnalysis) *AnalyticsConfiguration { + s.StorageClassAnalysis = v + return s +} + +type AnalyticsExportDestination struct { + _ struct{} `type:"structure"` + + // A destination signifying output to an S3 bucket. + // + // S3BucketDestination is a required field + S3BucketDestination *AnalyticsS3BucketDestination `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AnalyticsExportDestination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AnalyticsExportDestination) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AnalyticsExportDestination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AnalyticsExportDestination"} + if s.S3BucketDestination == nil { + invalidParams.Add(request.NewErrParamRequired("S3BucketDestination")) + } + if s.S3BucketDestination != nil { + if err := s.S3BucketDestination.Validate(); err != nil { + invalidParams.AddNested("S3BucketDestination", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3BucketDestination sets the S3BucketDestination field's value. +func (s *AnalyticsExportDestination) SetS3BucketDestination(v *AnalyticsS3BucketDestination) *AnalyticsExportDestination { + s.S3BucketDestination = v + return s +} + +type AnalyticsFilter struct { + _ struct{} `type:"structure"` + + // A conjunction (logical AND) of predicates, which is used in evaluating an + // analytics filter. The operator must have at least two predicates. + And *AnalyticsAndOperator `type:"structure"` + + // The prefix to use when evaluating an analytics filter. + Prefix *string `type:"string"` + + // The tag to use when evaluating an analytics filter. + Tag *Tag `type:"structure"` +} + +// String returns the string representation +func (s AnalyticsFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AnalyticsFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AnalyticsFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AnalyticsFilter"} + if s.And != nil { + if err := s.And.Validate(); err != nil { + invalidParams.AddNested("And", err.(request.ErrInvalidParams)) + } + } + if s.Tag != nil { + if err := s.Tag.Validate(); err != nil { + invalidParams.AddNested("Tag", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAnd sets the And field's value. +func (s *AnalyticsFilter) SetAnd(v *AnalyticsAndOperator) *AnalyticsFilter { + s.And = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *AnalyticsFilter) SetPrefix(v string) *AnalyticsFilter { + s.Prefix = &v + return s +} + +// SetTag sets the Tag field's value. 
+func (s *AnalyticsFilter) SetTag(v *Tag) *AnalyticsFilter { + s.Tag = v + return s +} + +type AnalyticsS3BucketDestination struct { + _ struct{} `type:"structure"` + + // The Amazon resource name (ARN) of the bucket to which data is exported. + // + // Bucket is a required field + Bucket *string `type:"string" required:"true"` + + // The account ID that owns the destination bucket. If no account ID is provided, + // the owner will not be validated prior to exporting data. + BucketAccountId *string `type:"string"` + + // The file format used when exporting data to Amazon S3. + // + // Format is a required field + Format *string `type:"string" required:"true" enum:"AnalyticsS3ExportFileFormat"` + + // The prefix to use when exporting data. The exported data begins with this + // prefix. + Prefix *string `type:"string"` +} + +// String returns the string representation +func (s AnalyticsS3BucketDestination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AnalyticsS3BucketDestination) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AnalyticsS3BucketDestination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AnalyticsS3BucketDestination"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Format == nil { + invalidParams.Add(request.NewErrParamRequired("Format")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *AnalyticsS3BucketDestination) SetBucket(v string) *AnalyticsS3BucketDestination { + s.Bucket = &v + return s +} + +func (s *AnalyticsS3BucketDestination) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetBucketAccountId sets the BucketAccountId field's value. +func (s *AnalyticsS3BucketDestination) SetBucketAccountId(v string) *AnalyticsS3BucketDestination { + s.BucketAccountId = &v + return s +} + +// SetFormat sets the Format field's value. +func (s *AnalyticsS3BucketDestination) SetFormat(v string) *AnalyticsS3BucketDestination { + s.Format = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *AnalyticsS3BucketDestination) SetPrefix(v string) *AnalyticsS3BucketDestination { + s.Prefix = &v + return s +} + +type Bucket struct { + _ struct{} `type:"structure"` + + // Date the bucket was created. + CreationDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The name of the bucket. + Name *string `type:"string"` +} + +// String returns the string representation +func (s Bucket) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Bucket) GoString() string { + return s.String() +} + +// SetCreationDate sets the CreationDate field's value. +func (s *Bucket) SetCreationDate(v time.Time) *Bucket { + s.CreationDate = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *Bucket) SetName(v string) *Bucket { + s.Name = &v + return s +} + +type BucketLifecycleConfiguration struct { + _ struct{} `type:"structure"` + + // Rules is a required field + Rules []*LifecycleRule `locationName:"Rule" type:"list" flattened:"true" required:"true"` +} + +// String returns the string representation +func (s BucketLifecycleConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BucketLifecycleConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BucketLifecycleConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BucketLifecycleConfiguration"} + if s.Rules == nil { + invalidParams.Add(request.NewErrParamRequired("Rules")) + } + if s.Rules != nil { + for i, v := range s.Rules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Rules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRules sets the Rules field's value. +func (s *BucketLifecycleConfiguration) SetRules(v []*LifecycleRule) *BucketLifecycleConfiguration { + s.Rules = v + return s +} + +type BucketLoggingStatus struct { + _ struct{} `type:"structure"` + + // Container for logging information. Presence of this element indicates that + // logging is enabled. Parameters TargetBucket and TargetPrefix are required + // in this case. + LoggingEnabled *LoggingEnabled `type:"structure"` +} + +// String returns the string representation +func (s BucketLoggingStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BucketLoggingStatus) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BucketLoggingStatus) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BucketLoggingStatus"} + if s.LoggingEnabled != nil { + if err := s.LoggingEnabled.Validate(); err != nil { + invalidParams.AddNested("LoggingEnabled", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLoggingEnabled sets the LoggingEnabled field's value. +func (s *BucketLoggingStatus) SetLoggingEnabled(v *LoggingEnabled) *BucketLoggingStatus { + s.LoggingEnabled = v + return s +} + +type CORSConfiguration struct { + _ struct{} `type:"structure"` + + // CORSRules is a required field + CORSRules []*CORSRule `locationName:"CORSRule" type:"list" flattened:"true" required:"true"` +} + +// String returns the string representation +func (s CORSConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CORSConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CORSConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CORSConfiguration"} + if s.CORSRules == nil { + invalidParams.Add(request.NewErrParamRequired("CORSRules")) + } + if s.CORSRules != nil { + for i, v := range s.CORSRules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CORSRules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCORSRules sets the CORSRules field's value. +func (s *CORSConfiguration) SetCORSRules(v []*CORSRule) *CORSConfiguration { + s.CORSRules = v + return s +} + +type CORSRule struct { + _ struct{} `type:"structure"` + + // Specifies which headers are allowed in a pre-flight OPTIONS request. + AllowedHeaders []*string `locationName:"AllowedHeader" type:"list" flattened:"true"` + + // Identifies HTTP methods that the domain/origin specified in the rule is allowed + // to execute. + // + // AllowedMethods is a required field + AllowedMethods []*string `locationName:"AllowedMethod" type:"list" flattened:"true" required:"true"` + + // One or more origins you want customers to be able to access the bucket from. + // + // AllowedOrigins is a required field + AllowedOrigins []*string `locationName:"AllowedOrigin" type:"list" flattened:"true" required:"true"` + + // One or more headers in the response that you want customers to be able to + // access from their applications (for example, from a JavaScript XMLHttpRequest + // object). + ExposeHeaders []*string `locationName:"ExposeHeader" type:"list" flattened:"true"` + + // The time in seconds that your browser is to cache the preflight response + // for the specified resource. + MaxAgeSeconds *int64 `type:"integer"` +} + +// String returns the string representation +func (s CORSRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CORSRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CORSRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CORSRule"} + if s.AllowedMethods == nil { + invalidParams.Add(request.NewErrParamRequired("AllowedMethods")) + } + if s.AllowedOrigins == nil { + invalidParams.Add(request.NewErrParamRequired("AllowedOrigins")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowedHeaders sets the AllowedHeaders field's value. +func (s *CORSRule) SetAllowedHeaders(v []*string) *CORSRule { + s.AllowedHeaders = v + return s +} + +// SetAllowedMethods sets the AllowedMethods field's value. +func (s *CORSRule) SetAllowedMethods(v []*string) *CORSRule { + s.AllowedMethods = v + return s +} + +// SetAllowedOrigins sets the AllowedOrigins field's value. +func (s *CORSRule) SetAllowedOrigins(v []*string) *CORSRule { + s.AllowedOrigins = v + return s +} + +// SetExposeHeaders sets the ExposeHeaders field's value. +func (s *CORSRule) SetExposeHeaders(v []*string) *CORSRule { + s.ExposeHeaders = v + return s +} + +// SetMaxAgeSeconds sets the MaxAgeSeconds field's value. +func (s *CORSRule) SetMaxAgeSeconds(v int64) *CORSRule { + s.MaxAgeSeconds = &v + return s +} + +// Describes how a CSV-formatted input object is formatted. +type CSVInput struct { + _ struct{} `type:"structure"` + + // Single character used to indicate a row should be ignored when present at + // the start of a row. 
+ Comments *string `type:"string"` + + // Value used to separate individual fields in a record. + FieldDelimiter *string `type:"string"` + + // Describes the first line of input. Valid values: None, Ignore, Use. + FileHeaderInfo *string `type:"string" enum:"FileHeaderInfo"` + + // Value used for escaping where the field delimiter is part of the value. + QuoteCharacter *string `type:"string"` + + // Single character used for escaping the quote character inside an already + // escaped value. + QuoteEscapeCharacter *string `type:"string"` + + // Value used to separate individual records. + RecordDelimiter *string `type:"string"` +} + +// String returns the string representation +func (s CSVInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CSVInput) GoString() string { + return s.String() +} + +// SetComments sets the Comments field's value. +func (s *CSVInput) SetComments(v string) *CSVInput { + s.Comments = &v + return s +} + +// SetFieldDelimiter sets the FieldDelimiter field's value. +func (s *CSVInput) SetFieldDelimiter(v string) *CSVInput { + s.FieldDelimiter = &v + return s +} + +// SetFileHeaderInfo sets the FileHeaderInfo field's value. +func (s *CSVInput) SetFileHeaderInfo(v string) *CSVInput { + s.FileHeaderInfo = &v + return s +} + +// SetQuoteCharacter sets the QuoteCharacter field's value. +func (s *CSVInput) SetQuoteCharacter(v string) *CSVInput { + s.QuoteCharacter = &v + return s +} + +// SetQuoteEscapeCharacter sets the QuoteEscapeCharacter field's value. +func (s *CSVInput) SetQuoteEscapeCharacter(v string) *CSVInput { + s.QuoteEscapeCharacter = &v + return s +} + +// SetRecordDelimiter sets the RecordDelimiter field's value. +func (s *CSVInput) SetRecordDelimiter(v string) *CSVInput { + s.RecordDelimiter = &v + return s +} + +// Describes how CSV-formatted results are formatted. +type CSVOutput struct { + _ struct{} `type:"structure"` + + // Value used to separate individual fields in a record. + FieldDelimiter *string `type:"string"` + + // Value used for escaping where the field delimiter is part of the value. + QuoteCharacter *string `type:"string"` + + // Single character used for escaping the quote character inside an already + // escaped value. + QuoteEscapeCharacter *string `type:"string"` + + // Indicates whether or not all output fields should be quoted. + QuoteFields *string `type:"string" enum:"QuoteFields"` + + // Value used to separate individual records. + RecordDelimiter *string `type:"string"` +} + +// String returns the string representation +func (s CSVOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CSVOutput) GoString() string { + return s.String() +} + +// SetFieldDelimiter sets the FieldDelimiter field's value. +func (s *CSVOutput) SetFieldDelimiter(v string) *CSVOutput { + s.FieldDelimiter = &v + return s +} + +// SetQuoteCharacter sets the QuoteCharacter field's value. +func (s *CSVOutput) SetQuoteCharacter(v string) *CSVOutput { + s.QuoteCharacter = &v + return s +} + +// SetQuoteEscapeCharacter sets the QuoteEscapeCharacter field's value. +func (s *CSVOutput) SetQuoteEscapeCharacter(v string) *CSVOutput { + s.QuoteEscapeCharacter = &v + return s +} + +// SetQuoteFields sets the QuoteFields field's value. +func (s *CSVOutput) SetQuoteFields(v string) *CSVOutput { + s.QuoteFields = &v + return s +} + +// SetRecordDelimiter sets the RecordDelimiter field's value. 
+func (s *CSVOutput) SetRecordDelimiter(v string) *CSVOutput { + s.RecordDelimiter = &v + return s +} + +type CloudFunctionConfiguration struct { + _ struct{} `type:"structure"` + + CloudFunction *string `type:"string"` + + // Bucket event for which to send notifications. + Event *string `deprecated:"true" type:"string" enum:"Event"` + + Events []*string `locationName:"Event" type:"list" flattened:"true"` + + // Optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + InvocationRole *string `type:"string"` +} + +// String returns the string representation +func (s CloudFunctionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudFunctionConfiguration) GoString() string { + return s.String() +} + +// SetCloudFunction sets the CloudFunction field's value. +func (s *CloudFunctionConfiguration) SetCloudFunction(v string) *CloudFunctionConfiguration { + s.CloudFunction = &v + return s +} + +// SetEvent sets the Event field's value. +func (s *CloudFunctionConfiguration) SetEvent(v string) *CloudFunctionConfiguration { + s.Event = &v + return s +} + +// SetEvents sets the Events field's value. +func (s *CloudFunctionConfiguration) SetEvents(v []*string) *CloudFunctionConfiguration { + s.Events = v + return s +} + +// SetId sets the Id field's value. +func (s *CloudFunctionConfiguration) SetId(v string) *CloudFunctionConfiguration { + s.Id = &v + return s +} + +// SetInvocationRole sets the InvocationRole field's value. +func (s *CloudFunctionConfiguration) SetInvocationRole(v string) *CloudFunctionConfiguration { + s.InvocationRole = &v + return s +} + +type CommonPrefix struct { + _ struct{} `type:"structure"` + + Prefix *string `type:"string"` +} + +// String returns the string representation +func (s CommonPrefix) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CommonPrefix) GoString() string { + return s.String() +} + +// SetPrefix sets the Prefix field's value. +func (s *CommonPrefix) SetPrefix(v string) *CommonPrefix { + s.Prefix = &v + return s +} + +type CompleteMultipartUploadInput struct { + _ struct{} `type:"structure" payload:"MultipartUpload"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + MultipartUpload *CompletedMultipartUpload `locationName:"CompleteMultipartUpload" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. 
+ // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // UploadId is a required field + UploadId *string `location:"querystring" locationName:"uploadId" type:"string" required:"true"` +} + +// String returns the string representation +func (s CompleteMultipartUploadInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompleteMultipartUploadInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CompleteMultipartUploadInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CompleteMultipartUploadInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.UploadId == nil { + invalidParams.Add(request.NewErrParamRequired("UploadId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *CompleteMultipartUploadInput) SetBucket(v string) *CompleteMultipartUploadInput { + s.Bucket = &v + return s +} + +func (s *CompleteMultipartUploadInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *CompleteMultipartUploadInput) SetKey(v string) *CompleteMultipartUploadInput { + s.Key = &v + return s +} + +// SetMultipartUpload sets the MultipartUpload field's value. +func (s *CompleteMultipartUploadInput) SetMultipartUpload(v *CompletedMultipartUpload) *CompleteMultipartUploadInput { + s.MultipartUpload = v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *CompleteMultipartUploadInput) SetRequestPayer(v string) *CompleteMultipartUploadInput { + s.RequestPayer = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *CompleteMultipartUploadInput) SetUploadId(v string) *CompleteMultipartUploadInput { + s.UploadId = &v + return s +} + +type CompleteMultipartUploadOutput struct { + _ struct{} `type:"structure"` + + Bucket *string `type:"string"` + + // Entity tag of the object. + ETag *string `type:"string"` + + // If the object expiration is configured, this will contain the expiration + // date (expiry-date) and rule ID (rule-id). The value of rule-id is URL encoded. + Expiration *string `location:"header" locationName:"x-amz-expiration" type:"string"` + + Key *string `min:"1" type:"string"` + + Location *string `type:"string"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). 
+ ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + // Version of the object. + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s CompleteMultipartUploadOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompleteMultipartUploadOutput) GoString() string { + return s.String() +} + +// SetBucket sets the Bucket field's value. +func (s *CompleteMultipartUploadOutput) SetBucket(v string) *CompleteMultipartUploadOutput { + s.Bucket = &v + return s +} + +func (s *CompleteMultipartUploadOutput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetETag sets the ETag field's value. +func (s *CompleteMultipartUploadOutput) SetETag(v string) *CompleteMultipartUploadOutput { + s.ETag = &v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *CompleteMultipartUploadOutput) SetExpiration(v string) *CompleteMultipartUploadOutput { + s.Expiration = &v + return s +} + +// SetKey sets the Key field's value. +func (s *CompleteMultipartUploadOutput) SetKey(v string) *CompleteMultipartUploadOutput { + s.Key = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CompleteMultipartUploadOutput) SetLocation(v string) *CompleteMultipartUploadOutput { + s.Location = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *CompleteMultipartUploadOutput) SetRequestCharged(v string) *CompleteMultipartUploadOutput { + s.RequestCharged = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *CompleteMultipartUploadOutput) SetSSEKMSKeyId(v string) *CompleteMultipartUploadOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *CompleteMultipartUploadOutput) SetServerSideEncryption(v string) *CompleteMultipartUploadOutput { + s.ServerSideEncryption = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *CompleteMultipartUploadOutput) SetVersionId(v string) *CompleteMultipartUploadOutput { + s.VersionId = &v + return s +} + +type CompletedMultipartUpload struct { + _ struct{} `type:"structure"` + + Parts []*CompletedPart `locationName:"Part" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s CompletedMultipartUpload) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompletedMultipartUpload) GoString() string { + return s.String() +} + +// SetParts sets the Parts field's value. +func (s *CompletedMultipartUpload) SetParts(v []*CompletedPart) *CompletedMultipartUpload { + s.Parts = v + return s +} + +type CompletedPart struct { + _ struct{} `type:"structure"` + + // Entity tag returned when the part was uploaded. + ETag *string `type:"string"` + + // Part number that identifies the part. This is a positive integer between + // 1 and 10,000. + PartNumber *int64 `type:"integer"` +} + +// String returns the string representation +func (s CompletedPart) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompletedPart) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. 
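+//
+// Illustrative sketch (editorial note, not part of the generated SDK docs):
+// a CompletedPart is normally built per uploaded part and the collection is
+// handed to the completion request, e.g.
+//
+//	part := (&CompletedPart{}).SetETag(partETag).SetPartNumber(1)
+//	req := (&CompleteMultipartUploadInput{}).
+//		SetBucket("example-bucket").
+//		SetKey("example-key").
+//		SetUploadId(uploadID).
+//		SetMultipartUpload((&CompletedMultipartUpload{}).
+//			SetParts([]*CompletedPart{part}))
+//
+// The bucket and key names are placeholders; partETag and uploadID stand for
+// values returned by the corresponding part-upload and upload-creation calls.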
+func (s *CompletedPart) SetETag(v string) *CompletedPart { + s.ETag = &v + return s +} + +// SetPartNumber sets the PartNumber field's value. +func (s *CompletedPart) SetPartNumber(v int64) *CompletedPart { + s.PartNumber = &v + return s +} + +type Condition struct { + _ struct{} `type:"structure"` + + // The HTTP error code when the redirect is applied. In the event of an error, + // if the error code equals this value, then the specified redirect is applied. + // Required when parent element Condition is specified and sibling KeyPrefixEquals + // is not specified. If both are specified, then both must be true for the redirect + // to be applied. + HttpErrorCodeReturnedEquals *string `type:"string"` + + // The object key name prefix when the redirect is applied. For example, to + // redirect requests for ExamplePage.html, the key prefix will be ExamplePage.html. + // To redirect request for all pages with the prefix docs/, the key prefix will + // be /docs, which identifies all objects in the docs/ folder. Required when + // the parent element Condition is specified and sibling HttpErrorCodeReturnedEquals + // is not specified. If both conditions are specified, both must be true for + // the redirect to be applied. + KeyPrefixEquals *string `type:"string"` +} + +// String returns the string representation +func (s Condition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Condition) GoString() string { + return s.String() +} + +// SetHttpErrorCodeReturnedEquals sets the HttpErrorCodeReturnedEquals field's value. +func (s *Condition) SetHttpErrorCodeReturnedEquals(v string) *Condition { + s.HttpErrorCodeReturnedEquals = &v + return s +} + +// SetKeyPrefixEquals sets the KeyPrefixEquals field's value. +func (s *Condition) SetKeyPrefixEquals(v string) *Condition { + s.KeyPrefixEquals = &v + return s +} + +type CopyObjectInput struct { + _ struct{} `type:"structure"` + + // The canned ACL to apply to the object. + ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Specifies caching behavior along the request/reply chain. + CacheControl *string `location:"header" locationName:"Cache-Control" type:"string"` + + // Specifies presentational information for the object. + ContentDisposition *string `location:"header" locationName:"Content-Disposition" type:"string"` + + // Specifies what content encodings have been applied to the object and thus + // what decoding mechanisms must be applied to obtain the media-type referenced + // by the Content-Type header field. + ContentEncoding *string `location:"header" locationName:"Content-Encoding" type:"string"` + + // The language the content is in. + ContentLanguage *string `location:"header" locationName:"Content-Language" type:"string"` + + // A standard MIME type describing the format of the object data. + ContentType *string `location:"header" locationName:"Content-Type" type:"string"` + + // The name of the source bucket and key name of the source object, separated + // by a slash (/). Must be URL-encoded. + // + // CopySource is a required field + CopySource *string `location:"header" locationName:"x-amz-copy-source" type:"string" required:"true"` + + // Copies the object if its entity tag (ETag) matches the specified tag. 
+ CopySourceIfMatch *string `location:"header" locationName:"x-amz-copy-source-if-match" type:"string"` + + // Copies the object if it has been modified since the specified time. + CopySourceIfModifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-modified-since" type:"timestamp" timestampFormat:"rfc822"` + + // Copies the object if its entity tag (ETag) is different than the specified + // ETag. + CopySourceIfNoneMatch *string `location:"header" locationName:"x-amz-copy-source-if-none-match" type:"string"` + + // Copies the object if it hasn't been modified since the specified time. + CopySourceIfUnmodifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-unmodified-since" type:"timestamp" timestampFormat:"rfc822"` + + // Specifies the algorithm to use when decrypting the source object (e.g., AES256). + CopySourceSSECustomerAlgorithm *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use to decrypt + // the source object. The encryption key provided in this header must be one + // that was used when the source object was created. + CopySourceSSECustomerKey *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + CopySourceSSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-key-MD5" type:"string"` + + // The date and time at which the object is no longer cacheable. + Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp" timestampFormat:"rfc822"` + + // Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. + GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` + + // Allows grantee to read the object data and its metadata. + GrantRead *string `location:"header" locationName:"x-amz-grant-read" type:"string"` + + // Allows grantee to read the object ACL. + GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` + + // Allows grantee to write the ACL for the applicable object. + GrantWriteACP *string `location:"header" locationName:"x-amz-grant-write-acp" type:"string"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // A map of metadata to store with the object in S3. + Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"` + + // Specifies whether the metadata is copied from the source object or replaced + // with metadata provided in the request. + MetadataDirective *string `location:"header" locationName:"x-amz-metadata-directive" type:"string" enum:"MetadataDirective"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. 
+ // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Specifies the algorithm to use to when encrypting the object (e.g., AES256). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting + // data. This value is used to store the object and then it is discarded; Amazon + // does not store the encryption key. The key must be appropriate for use with + // the algorithm specified in the x-amz-server-side​-encryption​-customer-algorithm + // header. + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // Specifies the AWS KMS key ID to use for object encryption. All GET and PUT + // requests for an object protected by AWS KMS will fail if not made via SSL + // or using SigV4. Documentation on configuring any of the officially supported + // AWS SDKs and CLI can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + // The type of storage to use for the object. Defaults to 'STANDARD'. + StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` + + // The tag-set for the object destination object this value must be used in + // conjunction with the TaggingDirective. The tag-set must be encoded as URL + // Query parameters + Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"` + + // Specifies whether the object tag-set are copied from the source object or + // replaced with tag-set provided in the request. + TaggingDirective *string `location:"header" locationName:"x-amz-tagging-directive" type:"string" enum:"TaggingDirective"` + + // If the bucket is configured as a website, redirects requests for this object + // to another object in the same bucket or to an external URL. Amazon S3 stores + // the value of this header in the object metadata. + WebsiteRedirectLocation *string `location:"header" locationName:"x-amz-website-redirect-location" type:"string"` +} + +// String returns the string representation +func (s CopyObjectInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyObjectInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
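+//
+// Illustrative sketch (editorial note, not part of the generated SDK docs):
+// the validation below enforces the three required fields of CopyObjectInput
+// (Bucket, CopySource, Key), so a minimal input looks like
+//
+//	in := (&CopyObjectInput{}).
+//		SetBucket("destination-bucket").
+//		SetKey("destination-key").
+//		SetCopySource("source-bucket/source-key")
+//
+// The bucket and key names are placeholders; as documented above, CopySource
+// must be URL-encoded.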
+func (s *CopyObjectInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyObjectInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.CopySource == nil { + invalidParams.Add(request.NewErrParamRequired("CopySource")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetACL sets the ACL field's value. +func (s *CopyObjectInput) SetACL(v string) *CopyObjectInput { + s.ACL = &v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *CopyObjectInput) SetBucket(v string) *CopyObjectInput { + s.Bucket = &v + return s +} + +func (s *CopyObjectInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCacheControl sets the CacheControl field's value. +func (s *CopyObjectInput) SetCacheControl(v string) *CopyObjectInput { + s.CacheControl = &v + return s +} + +// SetContentDisposition sets the ContentDisposition field's value. +func (s *CopyObjectInput) SetContentDisposition(v string) *CopyObjectInput { + s.ContentDisposition = &v + return s +} + +// SetContentEncoding sets the ContentEncoding field's value. +func (s *CopyObjectInput) SetContentEncoding(v string) *CopyObjectInput { + s.ContentEncoding = &v + return s +} + +// SetContentLanguage sets the ContentLanguage field's value. +func (s *CopyObjectInput) SetContentLanguage(v string) *CopyObjectInput { + s.ContentLanguage = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *CopyObjectInput) SetContentType(v string) *CopyObjectInput { + s.ContentType = &v + return s +} + +// SetCopySource sets the CopySource field's value. +func (s *CopyObjectInput) SetCopySource(v string) *CopyObjectInput { + s.CopySource = &v + return s +} + +// SetCopySourceIfMatch sets the CopySourceIfMatch field's value. +func (s *CopyObjectInput) SetCopySourceIfMatch(v string) *CopyObjectInput { + s.CopySourceIfMatch = &v + return s +} + +// SetCopySourceIfModifiedSince sets the CopySourceIfModifiedSince field's value. +func (s *CopyObjectInput) SetCopySourceIfModifiedSince(v time.Time) *CopyObjectInput { + s.CopySourceIfModifiedSince = &v + return s +} + +// SetCopySourceIfNoneMatch sets the CopySourceIfNoneMatch field's value. +func (s *CopyObjectInput) SetCopySourceIfNoneMatch(v string) *CopyObjectInput { + s.CopySourceIfNoneMatch = &v + return s +} + +// SetCopySourceIfUnmodifiedSince sets the CopySourceIfUnmodifiedSince field's value. +func (s *CopyObjectInput) SetCopySourceIfUnmodifiedSince(v time.Time) *CopyObjectInput { + s.CopySourceIfUnmodifiedSince = &v + return s +} + +// SetCopySourceSSECustomerAlgorithm sets the CopySourceSSECustomerAlgorithm field's value. +func (s *CopyObjectInput) SetCopySourceSSECustomerAlgorithm(v string) *CopyObjectInput { + s.CopySourceSSECustomerAlgorithm = &v + return s +} + +// SetCopySourceSSECustomerKey sets the CopySourceSSECustomerKey field's value. +func (s *CopyObjectInput) SetCopySourceSSECustomerKey(v string) *CopyObjectInput { + s.CopySourceSSECustomerKey = &v + return s +} + +func (s *CopyObjectInput) getCopySourceSSECustomerKey() (v string) { + if s.CopySourceSSECustomerKey == nil { + return v + } + return *s.CopySourceSSECustomerKey +} + +// SetCopySourceSSECustomerKeyMD5 sets the CopySourceSSECustomerKeyMD5 field's value. 
+func (s *CopyObjectInput) SetCopySourceSSECustomerKeyMD5(v string) *CopyObjectInput { + s.CopySourceSSECustomerKeyMD5 = &v + return s +} + +// SetExpires sets the Expires field's value. +func (s *CopyObjectInput) SetExpires(v time.Time) *CopyObjectInput { + s.Expires = &v + return s +} + +// SetGrantFullControl sets the GrantFullControl field's value. +func (s *CopyObjectInput) SetGrantFullControl(v string) *CopyObjectInput { + s.GrantFullControl = &v + return s +} + +// SetGrantRead sets the GrantRead field's value. +func (s *CopyObjectInput) SetGrantRead(v string) *CopyObjectInput { + s.GrantRead = &v + return s +} + +// SetGrantReadACP sets the GrantReadACP field's value. +func (s *CopyObjectInput) SetGrantReadACP(v string) *CopyObjectInput { + s.GrantReadACP = &v + return s +} + +// SetGrantWriteACP sets the GrantWriteACP field's value. +func (s *CopyObjectInput) SetGrantWriteACP(v string) *CopyObjectInput { + s.GrantWriteACP = &v + return s +} + +// SetKey sets the Key field's value. +func (s *CopyObjectInput) SetKey(v string) *CopyObjectInput { + s.Key = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *CopyObjectInput) SetMetadata(v map[string]*string) *CopyObjectInput { + s.Metadata = v + return s +} + +// SetMetadataDirective sets the MetadataDirective field's value. +func (s *CopyObjectInput) SetMetadataDirective(v string) *CopyObjectInput { + s.MetadataDirective = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *CopyObjectInput) SetRequestPayer(v string) *CopyObjectInput { + s.RequestPayer = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *CopyObjectInput) SetSSECustomerAlgorithm(v string) *CopyObjectInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *CopyObjectInput) SetSSECustomerKey(v string) *CopyObjectInput { + s.SSECustomerKey = &v + return s +} + +func (s *CopyObjectInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *CopyObjectInput) SetSSECustomerKeyMD5(v string) *CopyObjectInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *CopyObjectInput) SetSSEKMSKeyId(v string) *CopyObjectInput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *CopyObjectInput) SetServerSideEncryption(v string) *CopyObjectInput { + s.ServerSideEncryption = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *CopyObjectInput) SetStorageClass(v string) *CopyObjectInput { + s.StorageClass = &v + return s +} + +// SetTagging sets the Tagging field's value. +func (s *CopyObjectInput) SetTagging(v string) *CopyObjectInput { + s.Tagging = &v + return s +} + +// SetTaggingDirective sets the TaggingDirective field's value. +func (s *CopyObjectInput) SetTaggingDirective(v string) *CopyObjectInput { + s.TaggingDirective = &v + return s +} + +// SetWebsiteRedirectLocation sets the WebsiteRedirectLocation field's value. 
+func (s *CopyObjectInput) SetWebsiteRedirectLocation(v string) *CopyObjectInput { + s.WebsiteRedirectLocation = &v + return s +} + +type CopyObjectOutput struct { + _ struct{} `type:"structure" payload:"CopyObjectResult"` + + CopyObjectResult *CopyObjectResult `type:"structure"` + + CopySourceVersionId *string `location:"header" locationName:"x-amz-copy-source-version-id" type:"string"` + + // If the object expiration is configured, the response includes this header. + Expiration *string `location:"header" locationName:"x-amz-expiration" type:"string"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + // Version ID of the newly created copy. + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s CopyObjectOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyObjectOutput) GoString() string { + return s.String() +} + +// SetCopyObjectResult sets the CopyObjectResult field's value. +func (s *CopyObjectOutput) SetCopyObjectResult(v *CopyObjectResult) *CopyObjectOutput { + s.CopyObjectResult = v + return s +} + +// SetCopySourceVersionId sets the CopySourceVersionId field's value. +func (s *CopyObjectOutput) SetCopySourceVersionId(v string) *CopyObjectOutput { + s.CopySourceVersionId = &v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *CopyObjectOutput) SetExpiration(v string) *CopyObjectOutput { + s.Expiration = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *CopyObjectOutput) SetRequestCharged(v string) *CopyObjectOutput { + s.RequestCharged = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *CopyObjectOutput) SetSSECustomerAlgorithm(v string) *CopyObjectOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *CopyObjectOutput) SetSSECustomerKeyMD5(v string) *CopyObjectOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. 
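+//
+// Illustrative sketch (editorial note, not part of the generated SDK docs):
+// on a successful copy the new object's ETag and last-modified time are
+// carried on the nested CopyObjectResult, while version and encryption
+// details arrive as headers on CopyObjectOutput itself, e.g.
+//
+//	if out.CopyObjectResult != nil && out.CopyObjectResult.ETag != nil {
+//		newETag := *out.CopyObjectResult.ETag
+//		_ = newETag
+//	}
+//
+// Here out stands for a *CopyObjectOutput returned by the copy operation.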
+func (s *CopyObjectOutput) SetSSEKMSKeyId(v string) *CopyObjectOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *CopyObjectOutput) SetServerSideEncryption(v string) *CopyObjectOutput { + s.ServerSideEncryption = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *CopyObjectOutput) SetVersionId(v string) *CopyObjectOutput { + s.VersionId = &v + return s +} + +type CopyObjectResult struct { + _ struct{} `type:"structure"` + + ETag *string `type:"string"` + + LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s CopyObjectResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyObjectResult) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CopyObjectResult) SetETag(v string) *CopyObjectResult { + s.ETag = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *CopyObjectResult) SetLastModified(v time.Time) *CopyObjectResult { + s.LastModified = &v + return s +} + +type CopyPartResult struct { + _ struct{} `type:"structure"` + + // Entity tag of the object. + ETag *string `type:"string"` + + // Date and time at which the object was uploaded. + LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s CopyPartResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyPartResult) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CopyPartResult) SetETag(v string) *CopyPartResult { + s.ETag = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *CopyPartResult) SetLastModified(v time.Time) *CopyPartResult { + s.LastModified = &v + return s +} + +type CreateBucketConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies the region where the bucket will be created. If you don't specify + // a region, the bucket will be created in US Standard. + LocationConstraint *string `type:"string" enum:"BucketLocationConstraint"` +} + +// String returns the string representation +func (s CreateBucketConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBucketConfiguration) GoString() string { + return s.String() +} + +// SetLocationConstraint sets the LocationConstraint field's value. +func (s *CreateBucketConfiguration) SetLocationConstraint(v string) *CreateBucketConfiguration { + s.LocationConstraint = &v + return s +} + +type CreateBucketInput struct { + _ struct{} `type:"structure" payload:"CreateBucketConfiguration"` + + // The canned ACL to apply to the bucket. + ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"BucketCannedACL"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + CreateBucketConfiguration *CreateBucketConfiguration `locationName:"CreateBucketConfiguration" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // Allows grantee the read, write, read ACP, and write ACP permissions on the + // bucket. + GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` + + // Allows grantee to list the objects in the bucket. 
+ GrantRead *string `location:"header" locationName:"x-amz-grant-read" type:"string"` + + // Allows grantee to read the bucket ACL. + GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` + + // Allows grantee to create, overwrite, and delete any object in the bucket. + GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` + + // Allows grantee to write the ACL for the applicable bucket. + GrantWriteACP *string `location:"header" locationName:"x-amz-grant-write-acp" type:"string"` +} + +// String returns the string representation +func (s CreateBucketInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBucketInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateBucketInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateBucketInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetACL sets the ACL field's value. +func (s *CreateBucketInput) SetACL(v string) *CreateBucketInput { + s.ACL = &v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *CreateBucketInput) SetBucket(v string) *CreateBucketInput { + s.Bucket = &v + return s +} + +func (s *CreateBucketInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCreateBucketConfiguration sets the CreateBucketConfiguration field's value. +func (s *CreateBucketInput) SetCreateBucketConfiguration(v *CreateBucketConfiguration) *CreateBucketInput { + s.CreateBucketConfiguration = v + return s +} + +// SetGrantFullControl sets the GrantFullControl field's value. +func (s *CreateBucketInput) SetGrantFullControl(v string) *CreateBucketInput { + s.GrantFullControl = &v + return s +} + +// SetGrantRead sets the GrantRead field's value. +func (s *CreateBucketInput) SetGrantRead(v string) *CreateBucketInput { + s.GrantRead = &v + return s +} + +// SetGrantReadACP sets the GrantReadACP field's value. +func (s *CreateBucketInput) SetGrantReadACP(v string) *CreateBucketInput { + s.GrantReadACP = &v + return s +} + +// SetGrantWrite sets the GrantWrite field's value. +func (s *CreateBucketInput) SetGrantWrite(v string) *CreateBucketInput { + s.GrantWrite = &v + return s +} + +// SetGrantWriteACP sets the GrantWriteACP field's value. +func (s *CreateBucketInput) SetGrantWriteACP(v string) *CreateBucketInput { + s.GrantWriteACP = &v + return s +} + +type CreateBucketOutput struct { + _ struct{} `type:"structure"` + + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateBucketOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBucketOutput) GoString() string { + return s.String() +} + +// SetLocation sets the Location field's value. +func (s *CreateBucketOutput) SetLocation(v string) *CreateBucketOutput { + s.Location = &v + return s +} + +type CreateMultipartUploadInput struct { + _ struct{} `type:"structure"` + + // The canned ACL to apply to the object. 
+ ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Specifies caching behavior along the request/reply chain. + CacheControl *string `location:"header" locationName:"Cache-Control" type:"string"` + + // Specifies presentational information for the object. + ContentDisposition *string `location:"header" locationName:"Content-Disposition" type:"string"` + + // Specifies what content encodings have been applied to the object and thus + // what decoding mechanisms must be applied to obtain the media-type referenced + // by the Content-Type header field. + ContentEncoding *string `location:"header" locationName:"Content-Encoding" type:"string"` + + // The language the content is in. + ContentLanguage *string `location:"header" locationName:"Content-Language" type:"string"` + + // A standard MIME type describing the format of the object data. + ContentType *string `location:"header" locationName:"Content-Type" type:"string"` + + // The date and time at which the object is no longer cacheable. + Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp" timestampFormat:"rfc822"` + + // Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. + GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` + + // Allows grantee to read the object data and its metadata. + GrantRead *string `location:"header" locationName:"x-amz-grant-read" type:"string"` + + // Allows grantee to read the object ACL. + GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` + + // Allows grantee to write the ACL for the applicable object. + GrantWriteACP *string `location:"header" locationName:"x-amz-grant-write-acp" type:"string"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // A map of metadata to store with the object in S3. + Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Specifies the algorithm to use to when encrypting the object (e.g., AES256). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting + // data. This value is used to store the object and then it is discarded; Amazon + // does not store the encryption key. The key must be appropriate for use with + // the algorithm specified in the x-amz-server-side​-encryption​-customer-algorithm + // header. + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. 
+ SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // Specifies the AWS KMS key ID to use for object encryption. All GET and PUT + // requests for an object protected by AWS KMS will fail if not made via SSL + // or using SigV4. Documentation on configuring any of the officially supported + // AWS SDKs and CLI can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + // The type of storage to use for the object. Defaults to 'STANDARD'. + StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` + + // The tag-set for the object. The tag-set must be encoded as URL Query parameters + Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"` + + // If the bucket is configured as a website, redirects requests for this object + // to another object in the same bucket or to an external URL. Amazon S3 stores + // the value of this header in the object metadata. + WebsiteRedirectLocation *string `location:"header" locationName:"x-amz-website-redirect-location" type:"string"` +} + +// String returns the string representation +func (s CreateMultipartUploadInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateMultipartUploadInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateMultipartUploadInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateMultipartUploadInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetACL sets the ACL field's value. +func (s *CreateMultipartUploadInput) SetACL(v string) *CreateMultipartUploadInput { + s.ACL = &v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *CreateMultipartUploadInput) SetBucket(v string) *CreateMultipartUploadInput { + s.Bucket = &v + return s +} + +func (s *CreateMultipartUploadInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCacheControl sets the CacheControl field's value. +func (s *CreateMultipartUploadInput) SetCacheControl(v string) *CreateMultipartUploadInput { + s.CacheControl = &v + return s +} + +// SetContentDisposition sets the ContentDisposition field's value. +func (s *CreateMultipartUploadInput) SetContentDisposition(v string) *CreateMultipartUploadInput { + s.ContentDisposition = &v + return s +} + +// SetContentEncoding sets the ContentEncoding field's value. +func (s *CreateMultipartUploadInput) SetContentEncoding(v string) *CreateMultipartUploadInput { + s.ContentEncoding = &v + return s +} + +// SetContentLanguage sets the ContentLanguage field's value. 
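+//
+// Illustrative sketch (editorial note, not part of the generated SDK docs):
+// per the validation above, only Bucket and Key are required to initiate a
+// multipart upload; encryption and storage options are opt-in, e.g.
+//
+//	in := (&CreateMultipartUploadInput{}).
+//		SetBucket("example-bucket").
+//		SetKey("example-key").
+//		SetServerSideEncryption("AES256")
+//
+// The bucket and key names are placeholders; "AES256" is one of the
+// ServerSideEncryption values mentioned in the field documentation above.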
+func (s *CreateMultipartUploadInput) SetContentLanguage(v string) *CreateMultipartUploadInput { + s.ContentLanguage = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *CreateMultipartUploadInput) SetContentType(v string) *CreateMultipartUploadInput { + s.ContentType = &v + return s +} + +// SetExpires sets the Expires field's value. +func (s *CreateMultipartUploadInput) SetExpires(v time.Time) *CreateMultipartUploadInput { + s.Expires = &v + return s +} + +// SetGrantFullControl sets the GrantFullControl field's value. +func (s *CreateMultipartUploadInput) SetGrantFullControl(v string) *CreateMultipartUploadInput { + s.GrantFullControl = &v + return s +} + +// SetGrantRead sets the GrantRead field's value. +func (s *CreateMultipartUploadInput) SetGrantRead(v string) *CreateMultipartUploadInput { + s.GrantRead = &v + return s +} + +// SetGrantReadACP sets the GrantReadACP field's value. +func (s *CreateMultipartUploadInput) SetGrantReadACP(v string) *CreateMultipartUploadInput { + s.GrantReadACP = &v + return s +} + +// SetGrantWriteACP sets the GrantWriteACP field's value. +func (s *CreateMultipartUploadInput) SetGrantWriteACP(v string) *CreateMultipartUploadInput { + s.GrantWriteACP = &v + return s +} + +// SetKey sets the Key field's value. +func (s *CreateMultipartUploadInput) SetKey(v string) *CreateMultipartUploadInput { + s.Key = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *CreateMultipartUploadInput) SetMetadata(v map[string]*string) *CreateMultipartUploadInput { + s.Metadata = v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *CreateMultipartUploadInput) SetRequestPayer(v string) *CreateMultipartUploadInput { + s.RequestPayer = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *CreateMultipartUploadInput) SetSSECustomerAlgorithm(v string) *CreateMultipartUploadInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *CreateMultipartUploadInput) SetSSECustomerKey(v string) *CreateMultipartUploadInput { + s.SSECustomerKey = &v + return s +} + +func (s *CreateMultipartUploadInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *CreateMultipartUploadInput) SetSSECustomerKeyMD5(v string) *CreateMultipartUploadInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *CreateMultipartUploadInput) SetSSEKMSKeyId(v string) *CreateMultipartUploadInput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *CreateMultipartUploadInput) SetServerSideEncryption(v string) *CreateMultipartUploadInput { + s.ServerSideEncryption = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *CreateMultipartUploadInput) SetStorageClass(v string) *CreateMultipartUploadInput { + s.StorageClass = &v + return s +} + +// SetTagging sets the Tagging field's value. +func (s *CreateMultipartUploadInput) SetTagging(v string) *CreateMultipartUploadInput { + s.Tagging = &v + return s +} + +// SetWebsiteRedirectLocation sets the WebsiteRedirectLocation field's value. 
+func (s *CreateMultipartUploadInput) SetWebsiteRedirectLocation(v string) *CreateMultipartUploadInput { + s.WebsiteRedirectLocation = &v + return s +} + +type CreateMultipartUploadOutput struct { + _ struct{} `type:"structure"` + + // Date when multipart upload will become eligible for abort operation by lifecycle. + AbortDate *time.Time `location:"header" locationName:"x-amz-abort-date" type:"timestamp" timestampFormat:"rfc822"` + + // Id of the lifecycle rule that makes a multipart upload eligible for abort + // operation. + AbortRuleId *string `location:"header" locationName:"x-amz-abort-rule-id" type:"string"` + + // Name of the bucket to which the multipart upload was initiated. + Bucket *string `locationName:"Bucket" type:"string"` + + // Object key for which the multipart upload was initiated. + Key *string `min:"1" type:"string"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + // ID for the initiated multipart upload. + UploadId *string `type:"string"` +} + +// String returns the string representation +func (s CreateMultipartUploadOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateMultipartUploadOutput) GoString() string { + return s.String() +} + +// SetAbortDate sets the AbortDate field's value. +func (s *CreateMultipartUploadOutput) SetAbortDate(v time.Time) *CreateMultipartUploadOutput { + s.AbortDate = &v + return s +} + +// SetAbortRuleId sets the AbortRuleId field's value. +func (s *CreateMultipartUploadOutput) SetAbortRuleId(v string) *CreateMultipartUploadOutput { + s.AbortRuleId = &v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *CreateMultipartUploadOutput) SetBucket(v string) *CreateMultipartUploadOutput { + s.Bucket = &v + return s +} + +func (s *CreateMultipartUploadOutput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *CreateMultipartUploadOutput) SetKey(v string) *CreateMultipartUploadOutput { + s.Key = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. 
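+//
+// Illustrative sketch (editorial note, not part of the generated SDK docs):
+// the UploadId in this output identifies the upload in every subsequent
+// part-upload and completion request, so callers typically capture it first:
+//
+//	uploadID := ""
+//	if out.UploadId != nil {
+//		uploadID = *out.UploadId
+//	}
+//
+// Here out stands for a *CreateMultipartUploadOutput; uploadID is a
+// placeholder variable name.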
+func (s *CreateMultipartUploadOutput) SetRequestCharged(v string) *CreateMultipartUploadOutput { + s.RequestCharged = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *CreateMultipartUploadOutput) SetSSECustomerAlgorithm(v string) *CreateMultipartUploadOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *CreateMultipartUploadOutput) SetSSECustomerKeyMD5(v string) *CreateMultipartUploadOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *CreateMultipartUploadOutput) SetSSEKMSKeyId(v string) *CreateMultipartUploadOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *CreateMultipartUploadOutput) SetServerSideEncryption(v string) *CreateMultipartUploadOutput { + s.ServerSideEncryption = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *CreateMultipartUploadOutput) SetUploadId(v string) *CreateMultipartUploadOutput { + s.UploadId = &v + return s +} + +type Delete struct { + _ struct{} `type:"structure"` + + // Objects is a required field + Objects []*ObjectIdentifier `locationName:"Object" type:"list" flattened:"true" required:"true"` + + // Element to enable quiet mode for the request. When you add this element, + // you must set its value to true. + Quiet *bool `type:"boolean"` +} + +// String returns the string representation +func (s Delete) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Delete) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Delete) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Delete"} + if s.Objects == nil { + invalidParams.Add(request.NewErrParamRequired("Objects")) + } + if s.Objects != nil { + for i, v := range s.Objects { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Objects", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetObjects sets the Objects field's value. +func (s *Delete) SetObjects(v []*ObjectIdentifier) *Delete { + s.Objects = v + return s +} + +// SetQuiet sets the Quiet field's value. +func (s *Delete) SetQuiet(v bool) *Delete { + s.Quiet = &v + return s +} + +type DeleteBucketAnalyticsConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket from which an analytics configuration is deleted. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The identifier used to represent an analytics configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketAnalyticsConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketAnalyticsConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteBucketAnalyticsConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketAnalyticsConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketAnalyticsConfigurationInput) SetBucket(v string) *DeleteBucketAnalyticsConfigurationInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketAnalyticsConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *DeleteBucketAnalyticsConfigurationInput) SetId(v string) *DeleteBucketAnalyticsConfigurationInput { + s.Id = &v + return s +} + +type DeleteBucketAnalyticsConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketAnalyticsConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketAnalyticsConfigurationOutput) GoString() string { + return s.String() +} + +type DeleteBucketCorsInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketCorsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketCorsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketCorsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketCorsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketCorsInput) SetBucket(v string) *DeleteBucketCorsInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketCorsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketCorsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketCorsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketCorsOutput) GoString() string { + return s.String() +} + +type DeleteBucketEncryptionInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the server-side encryption configuration + // to delete. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteBucketEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketEncryptionInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketEncryptionInput) SetBucket(v string) *DeleteBucketEncryptionInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketEncryptionInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketEncryptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketEncryptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketEncryptionOutput) GoString() string { + return s.String() +} + +type DeleteBucketInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketInput) SetBucket(v string) *DeleteBucketInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketInventoryConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the inventory configuration to delete. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ID used to identify the inventory configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketInventoryConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketInventoryConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketInventoryConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketInventoryConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *DeleteBucketInventoryConfigurationInput) SetBucket(v string) *DeleteBucketInventoryConfigurationInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketInventoryConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *DeleteBucketInventoryConfigurationInput) SetId(v string) *DeleteBucketInventoryConfigurationInput { + s.Id = &v + return s +} + +type DeleteBucketInventoryConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketInventoryConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketInventoryConfigurationOutput) GoString() string { + return s.String() +} + +type DeleteBucketLifecycleInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketLifecycleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketLifecycleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketLifecycleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketLifecycleInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketLifecycleInput) SetBucket(v string) *DeleteBucketLifecycleInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketLifecycleInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketLifecycleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketLifecycleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketLifecycleOutput) GoString() string { + return s.String() +} + +type DeleteBucketMetricsConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the metrics configuration to delete. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ID used to identify the metrics configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketMetricsConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketMetricsConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteBucketMetricsConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketMetricsConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketMetricsConfigurationInput) SetBucket(v string) *DeleteBucketMetricsConfigurationInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketMetricsConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *DeleteBucketMetricsConfigurationInput) SetId(v string) *DeleteBucketMetricsConfigurationInput { + s.Id = &v + return s +} + +type DeleteBucketMetricsConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketMetricsConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketMetricsConfigurationOutput) GoString() string { + return s.String() +} + +type DeleteBucketOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketOutput) GoString() string { + return s.String() +} + +type DeleteBucketPolicyInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketPolicyInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *DeleteBucketPolicyInput) SetBucket(v string) *DeleteBucketPolicyInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketPolicyInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketPolicyOutput) GoString() string { + return s.String() +} + +type DeleteBucketReplicationInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketReplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketReplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketReplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketReplicationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketReplicationInput) SetBucket(v string) *DeleteBucketReplicationInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketReplicationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketReplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketReplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketReplicationOutput) GoString() string { + return s.String() +} + +type DeleteBucketTaggingInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketTaggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketTaggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketTaggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketTaggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *DeleteBucketTaggingInput) SetBucket(v string) *DeleteBucketTaggingInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketTaggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketTaggingOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketTaggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketTaggingOutput) GoString() string { + return s.String() +} + +type DeleteBucketWebsiteInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketWebsiteInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketWebsiteInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketWebsiteInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketWebsiteInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteBucketWebsiteInput) SetBucket(v string) *DeleteBucketWebsiteInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketWebsiteInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketWebsiteOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketWebsiteOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketWebsiteOutput) GoString() string { + return s.String() +} + +type DeleteMarkerEntry struct { + _ struct{} `type:"structure"` + + // Specifies whether the object is (true) or is not (false) the latest version + // of an object. + IsLatest *bool `type:"boolean"` + + // The object key. + Key *string `min:"1" type:"string"` + + // Date and time the object was last modified. + LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + Owner *Owner `type:"structure"` + + // Version ID of an object. + VersionId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteMarkerEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteMarkerEntry) GoString() string { + return s.String() +} + +// SetIsLatest sets the IsLatest field's value. +func (s *DeleteMarkerEntry) SetIsLatest(v bool) *DeleteMarkerEntry { + s.IsLatest = &v + return s +} + +// SetKey sets the Key field's value. +func (s *DeleteMarkerEntry) SetKey(v string) *DeleteMarkerEntry { + s.Key = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *DeleteMarkerEntry) SetLastModified(v time.Time) *DeleteMarkerEntry { + s.LastModified = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *DeleteMarkerEntry) SetOwner(v *Owner) *DeleteMarkerEntry { + s.Owner = v + return s +} + +// SetVersionId sets the VersionId field's value. 
+func (s *DeleteMarkerEntry) SetVersionId(v string) *DeleteMarkerEntry { + s.VersionId = &v + return s +} + +type DeleteObjectInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // The concatenation of the authentication device's serial number, a space, + // and the value that is displayed on your authentication device. + MFA *string `location:"header" locationName:"x-amz-mfa" type:"string"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // VersionId used to reference a specific version of the object. + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s DeleteObjectInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteObjectInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteObjectInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteObjectInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteObjectInput) SetBucket(v string) *DeleteObjectInput { + s.Bucket = &v + return s +} + +func (s *DeleteObjectInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *DeleteObjectInput) SetKey(v string) *DeleteObjectInput { + s.Key = &v + return s +} + +// SetMFA sets the MFA field's value. +func (s *DeleteObjectInput) SetMFA(v string) *DeleteObjectInput { + s.MFA = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *DeleteObjectInput) SetRequestPayer(v string) *DeleteObjectInput { + s.RequestPayer = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *DeleteObjectInput) SetVersionId(v string) *DeleteObjectInput { + s.VersionId = &v + return s +} + +type DeleteObjectOutput struct { + _ struct{} `type:"structure"` + + // Specifies whether the versioned object that was permanently deleted was (true) + // or was not (false) a delete marker. + DeleteMarker *bool `location:"header" locationName:"x-amz-delete-marker" type:"boolean"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // Returns the version ID of the delete marker created as a result of the DELETE + // operation. 
+ VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s DeleteObjectOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteObjectOutput) GoString() string { + return s.String() +} + +// SetDeleteMarker sets the DeleteMarker field's value. +func (s *DeleteObjectOutput) SetDeleteMarker(v bool) *DeleteObjectOutput { + s.DeleteMarker = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *DeleteObjectOutput) SetRequestCharged(v string) *DeleteObjectOutput { + s.RequestCharged = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *DeleteObjectOutput) SetVersionId(v string) *DeleteObjectOutput { + s.VersionId = &v + return s +} + +type DeleteObjectTaggingInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // The versionId of the object that the tag-set will be removed from. + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s DeleteObjectTaggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteObjectTaggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteObjectTaggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteObjectTaggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteObjectTaggingInput) SetBucket(v string) *DeleteObjectTaggingInput { + s.Bucket = &v + return s +} + +func (s *DeleteObjectTaggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *DeleteObjectTaggingInput) SetKey(v string) *DeleteObjectTaggingInput { + s.Key = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *DeleteObjectTaggingInput) SetVersionId(v string) *DeleteObjectTaggingInput { + s.VersionId = &v + return s +} + +type DeleteObjectTaggingOutput struct { + _ struct{} `type:"structure"` + + // The versionId of the object the tag-set was removed from. + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s DeleteObjectTaggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteObjectTaggingOutput) GoString() string { + return s.String() +} + +// SetVersionId sets the VersionId field's value. 
+func (s *DeleteObjectTaggingOutput) SetVersionId(v string) *DeleteObjectTaggingOutput { + s.VersionId = &v + return s +} + +type DeleteObjectsInput struct { + _ struct{} `type:"structure" payload:"Delete"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Delete is a required field + Delete *Delete `locationName:"Delete" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // The concatenation of the authentication device's serial number, a space, + // and the value that is displayed on your authentication device. + MFA *string `location:"header" locationName:"x-amz-mfa" type:"string"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` +} + +// String returns the string representation +func (s DeleteObjectsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteObjectsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteObjectsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteObjectsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Delete == nil { + invalidParams.Add(request.NewErrParamRequired("Delete")) + } + if s.Delete != nil { + if err := s.Delete.Validate(); err != nil { + invalidParams.AddNested("Delete", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeleteObjectsInput) SetBucket(v string) *DeleteObjectsInput { + s.Bucket = &v + return s +} + +func (s *DeleteObjectsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetDelete sets the Delete field's value. +func (s *DeleteObjectsInput) SetDelete(v *Delete) *DeleteObjectsInput { + s.Delete = v + return s +} + +// SetMFA sets the MFA field's value. +func (s *DeleteObjectsInput) SetMFA(v string) *DeleteObjectsInput { + s.MFA = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *DeleteObjectsInput) SetRequestPayer(v string) *DeleteObjectsInput { + s.RequestPayer = &v + return s +} + +type DeleteObjectsOutput struct { + _ struct{} `type:"structure"` + + Deleted []*DeletedObject `type:"list" flattened:"true"` + + Errors []*Error `locationName:"Error" type:"list" flattened:"true"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` +} + +// String returns the string representation +func (s DeleteObjectsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteObjectsOutput) GoString() string { + return s.String() +} + +// SetDeleted sets the Deleted field's value. 
+func (s *DeleteObjectsOutput) SetDeleted(v []*DeletedObject) *DeleteObjectsOutput { + s.Deleted = v + return s +} + +// SetErrors sets the Errors field's value. +func (s *DeleteObjectsOutput) SetErrors(v []*Error) *DeleteObjectsOutput { + s.Errors = v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *DeleteObjectsOutput) SetRequestCharged(v string) *DeleteObjectsOutput { + s.RequestCharged = &v + return s +} + +type DeletedObject struct { + _ struct{} `type:"structure"` + + DeleteMarker *bool `type:"boolean"` + + DeleteMarkerVersionId *string `type:"string"` + + Key *string `min:"1" type:"string"` + + VersionId *string `type:"string"` +} + +// String returns the string representation +func (s DeletedObject) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletedObject) GoString() string { + return s.String() +} + +// SetDeleteMarker sets the DeleteMarker field's value. +func (s *DeletedObject) SetDeleteMarker(v bool) *DeletedObject { + s.DeleteMarker = &v + return s +} + +// SetDeleteMarkerVersionId sets the DeleteMarkerVersionId field's value. +func (s *DeletedObject) SetDeleteMarkerVersionId(v string) *DeletedObject { + s.DeleteMarkerVersionId = &v + return s +} + +// SetKey sets the Key field's value. +func (s *DeletedObject) SetKey(v string) *DeletedObject { + s.Key = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *DeletedObject) SetVersionId(v string) *DeletedObject { + s.VersionId = &v + return s +} + +// Container for replication destination information. +type Destination struct { + _ struct{} `type:"structure"` + + // Container for information regarding the access control for replicas. + AccessControlTranslation *AccessControlTranslation `type:"structure"` + + // Account ID of the destination bucket. Currently this is only being verified + // if Access Control Translation is enabled + Account *string `type:"string"` + + // Amazon resource name (ARN) of the bucket where you want Amazon S3 to store + // replicas of the object identified by the rule. + // + // Bucket is a required field + Bucket *string `type:"string" required:"true"` + + // Container for information regarding encryption based configuration for replicas. + EncryptionConfiguration *EncryptionConfiguration `type:"structure"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"StorageClass"` +} + +// String returns the string representation +func (s Destination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Destination) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Destination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Destination"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.AccessControlTranslation != nil { + if err := s.AccessControlTranslation.Validate(); err != nil { + invalidParams.AddNested("AccessControlTranslation", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessControlTranslation sets the AccessControlTranslation field's value. +func (s *Destination) SetAccessControlTranslation(v *AccessControlTranslation) *Destination { + s.AccessControlTranslation = v + return s +} + +// SetAccount sets the Account field's value. 
+func (s *Destination) SetAccount(v string) *Destination { + s.Account = &v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *Destination) SetBucket(v string) *Destination { + s.Bucket = &v + return s +} + +func (s *Destination) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *Destination) SetEncryptionConfiguration(v *EncryptionConfiguration) *Destination { + s.EncryptionConfiguration = v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *Destination) SetStorageClass(v string) *Destination { + s.StorageClass = &v + return s +} + +// Describes the server-side encryption that will be applied to the restore +// results. +type Encryption struct { + _ struct{} `type:"structure"` + + // The server-side encryption algorithm used when storing job results in Amazon + // S3 (e.g., AES256, aws:kms). + // + // EncryptionType is a required field + EncryptionType *string `type:"string" required:"true" enum:"ServerSideEncryption"` + + // If the encryption type is aws:kms, this optional value can be used to specify + // the encryption context for the restore results. + KMSContext *string `type:"string"` + + // If the encryption type is aws:kms, this optional value specifies the AWS + // KMS key ID to use for encryption of job results. + KMSKeyId *string `type:"string"` +} + +// String returns the string representation +func (s Encryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Encryption) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Encryption) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Encryption"} + if s.EncryptionType == nil { + invalidParams.Add(request.NewErrParamRequired("EncryptionType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncryptionType sets the EncryptionType field's value. +func (s *Encryption) SetEncryptionType(v string) *Encryption { + s.EncryptionType = &v + return s +} + +// SetKMSContext sets the KMSContext field's value. +func (s *Encryption) SetKMSContext(v string) *Encryption { + s.KMSContext = &v + return s +} + +// SetKMSKeyId sets the KMSKeyId field's value. +func (s *Encryption) SetKMSKeyId(v string) *Encryption { + s.KMSKeyId = &v + return s +} + +// Container for information regarding encryption based configuration for replicas. +type EncryptionConfiguration struct { + _ struct{} `type:"structure"` + + // The id of the KMS key used to encrypt the replica object. + ReplicaKmsKeyID *string `type:"string"` +} + +// String returns the string representation +func (s EncryptionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EncryptionConfiguration) GoString() string { + return s.String() +} + +// SetReplicaKmsKeyID sets the ReplicaKmsKeyID field's value. 
+func (s *EncryptionConfiguration) SetReplicaKmsKeyID(v string) *EncryptionConfiguration { + s.ReplicaKmsKeyID = &v + return s +} + +type Error struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Key *string `min:"1" type:"string"` + + Message *string `type:"string"` + + VersionId *string `type:"string"` +} + +// String returns the string representation +func (s Error) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Error) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *Error) SetCode(v string) *Error { + s.Code = &v + return s +} + +// SetKey sets the Key field's value. +func (s *Error) SetKey(v string) *Error { + s.Key = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *Error) SetMessage(v string) *Error { + s.Message = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *Error) SetVersionId(v string) *Error { + s.VersionId = &v + return s +} + +type ErrorDocument struct { + _ struct{} `type:"structure"` + + // The object key name to use when a 4XX class error occurs. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ErrorDocument) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorDocument) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ErrorDocument) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ErrorDocument"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ErrorDocument) SetKey(v string) *ErrorDocument { + s.Key = &v + return s +} + +// Container for key value pair that defines the criteria for the filter rule. +type FilterRule struct { + _ struct{} `type:"structure"` + + // Object key name prefix or suffix identifying one or more objects to which + // the filtering rule applies. Maximum prefix length can be up to 1,024 characters. + // Overlapping prefixes and suffixes are not supported. For more information, + // go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + Name *string `type:"string" enum:"FilterRuleName"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s FilterRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FilterRule) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *FilterRule) SetName(v string) *FilterRule { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. +func (s *FilterRule) SetValue(v string) *FilterRule { + s.Value = &v + return s +} + +type GetBucketAccelerateConfigurationInput struct { + _ struct{} `type:"structure"` + + // Name of the bucket for which the accelerate configuration is retrieved. 
+ // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketAccelerateConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketAccelerateConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketAccelerateConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketAccelerateConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketAccelerateConfigurationInput) SetBucket(v string) *GetBucketAccelerateConfigurationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketAccelerateConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketAccelerateConfigurationOutput struct { + _ struct{} `type:"structure"` + + // The accelerate configuration of the bucket. + Status *string `type:"string" enum:"BucketAccelerateStatus"` +} + +// String returns the string representation +func (s GetBucketAccelerateConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketAccelerateConfigurationOutput) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *GetBucketAccelerateConfigurationOutput) SetStatus(v string) *GetBucketAccelerateConfigurationOutput { + s.Status = &v + return s +} + +type GetBucketAclInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketAclInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketAclInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketAclInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketAclInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketAclInput) SetBucket(v string) *GetBucketAclInput { + s.Bucket = &v + return s +} + +func (s *GetBucketAclInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketAclOutput struct { + _ struct{} `type:"structure"` + + // A list of grants. + Grants []*Grant `locationName:"AccessControlList" locationNameList:"Grant" type:"list"` + + Owner *Owner `type:"structure"` +} + +// String returns the string representation +func (s GetBucketAclOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketAclOutput) GoString() string { + return s.String() +} + +// SetGrants sets the Grants field's value. +func (s *GetBucketAclOutput) SetGrants(v []*Grant) *GetBucketAclOutput { + s.Grants = v + return s +} + +// SetOwner sets the Owner field's value. 
+func (s *GetBucketAclOutput) SetOwner(v *Owner) *GetBucketAclOutput { + s.Owner = v + return s +} + +type GetBucketAnalyticsConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket from which an analytics configuration is retrieved. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The identifier used to represent an analytics configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketAnalyticsConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketAnalyticsConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketAnalyticsConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketAnalyticsConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketAnalyticsConfigurationInput) SetBucket(v string) *GetBucketAnalyticsConfigurationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketAnalyticsConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *GetBucketAnalyticsConfigurationInput) SetId(v string) *GetBucketAnalyticsConfigurationInput { + s.Id = &v + return s +} + +type GetBucketAnalyticsConfigurationOutput struct { + _ struct{} `type:"structure" payload:"AnalyticsConfiguration"` + + // The configuration and any analyses for the analytics filter. + AnalyticsConfiguration *AnalyticsConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetBucketAnalyticsConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketAnalyticsConfigurationOutput) GoString() string { + return s.String() +} + +// SetAnalyticsConfiguration sets the AnalyticsConfiguration field's value. +func (s *GetBucketAnalyticsConfigurationOutput) SetAnalyticsConfiguration(v *AnalyticsConfiguration) *GetBucketAnalyticsConfigurationOutput { + s.AnalyticsConfiguration = v + return s +} + +type GetBucketCorsInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketCorsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketCorsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketCorsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketCorsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *GetBucketCorsInput) SetBucket(v string) *GetBucketCorsInput { + s.Bucket = &v + return s +} + +func (s *GetBucketCorsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketCorsOutput struct { + _ struct{} `type:"structure"` + + CORSRules []*CORSRule `locationName:"CORSRule" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s GetBucketCorsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketCorsOutput) GoString() string { + return s.String() +} + +// SetCORSRules sets the CORSRules field's value. +func (s *GetBucketCorsOutput) SetCORSRules(v []*CORSRule) *GetBucketCorsOutput { + s.CORSRules = v + return s +} + +type GetBucketEncryptionInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket from which the server-side encryption configuration + // is retrieved. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketEncryptionInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketEncryptionInput) SetBucket(v string) *GetBucketEncryptionInput { + s.Bucket = &v + return s +} + +func (s *GetBucketEncryptionInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketEncryptionOutput struct { + _ struct{} `type:"structure" payload:"ServerSideEncryptionConfiguration"` + + // Container for server-side encryption configuration rules. Currently S3 supports + // one rule only. + ServerSideEncryptionConfiguration *ServerSideEncryptionConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetBucketEncryptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketEncryptionOutput) GoString() string { + return s.String() +} + +// SetServerSideEncryptionConfiguration sets the ServerSideEncryptionConfiguration field's value. +func (s *GetBucketEncryptionOutput) SetServerSideEncryptionConfiguration(v *ServerSideEncryptionConfiguration) *GetBucketEncryptionOutput { + s.ServerSideEncryptionConfiguration = v + return s +} + +type GetBucketInventoryConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the inventory configuration to retrieve. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ID used to identify the inventory configuration. 
+ // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketInventoryConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketInventoryConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketInventoryConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketInventoryConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketInventoryConfigurationInput) SetBucket(v string) *GetBucketInventoryConfigurationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketInventoryConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *GetBucketInventoryConfigurationInput) SetId(v string) *GetBucketInventoryConfigurationInput { + s.Id = &v + return s +} + +type GetBucketInventoryConfigurationOutput struct { + _ struct{} `type:"structure" payload:"InventoryConfiguration"` + + // Specifies the inventory configuration. + InventoryConfiguration *InventoryConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetBucketInventoryConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketInventoryConfigurationOutput) GoString() string { + return s.String() +} + +// SetInventoryConfiguration sets the InventoryConfiguration field's value. +func (s *GetBucketInventoryConfigurationOutput) SetInventoryConfiguration(v *InventoryConfiguration) *GetBucketInventoryConfigurationOutput { + s.InventoryConfiguration = v + return s +} + +type GetBucketLifecycleConfigurationInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketLifecycleConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLifecycleConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketLifecycleConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketLifecycleConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *GetBucketLifecycleConfigurationInput) SetBucket(v string) *GetBucketLifecycleConfigurationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketLifecycleConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketLifecycleConfigurationOutput struct { + _ struct{} `type:"structure"` + + Rules []*LifecycleRule `locationName:"Rule" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s GetBucketLifecycleConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLifecycleConfigurationOutput) GoString() string { + return s.String() +} + +// SetRules sets the Rules field's value. +func (s *GetBucketLifecycleConfigurationOutput) SetRules(v []*LifecycleRule) *GetBucketLifecycleConfigurationOutput { + s.Rules = v + return s +} + +type GetBucketLifecycleInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketLifecycleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLifecycleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketLifecycleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketLifecycleInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketLifecycleInput) SetBucket(v string) *GetBucketLifecycleInput { + s.Bucket = &v + return s +} + +func (s *GetBucketLifecycleInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketLifecycleOutput struct { + _ struct{} `type:"structure"` + + Rules []*Rule `locationName:"Rule" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s GetBucketLifecycleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLifecycleOutput) GoString() string { + return s.String() +} + +// SetRules sets the Rules field's value. +func (s *GetBucketLifecycleOutput) SetRules(v []*Rule) *GetBucketLifecycleOutput { + s.Rules = v + return s +} + +type GetBucketLocationInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketLocationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLocationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketLocationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketLocationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *GetBucketLocationInput) SetBucket(v string) *GetBucketLocationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketLocationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketLocationOutput struct { + _ struct{} `type:"structure"` + + LocationConstraint *string `type:"string" enum:"BucketLocationConstraint"` +} + +// String returns the string representation +func (s GetBucketLocationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLocationOutput) GoString() string { + return s.String() +} + +// SetLocationConstraint sets the LocationConstraint field's value. +func (s *GetBucketLocationOutput) SetLocationConstraint(v string) *GetBucketLocationOutput { + s.LocationConstraint = &v + return s +} + +type GetBucketLoggingInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketLoggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLoggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketLoggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketLoggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketLoggingInput) SetBucket(v string) *GetBucketLoggingInput { + s.Bucket = &v + return s +} + +func (s *GetBucketLoggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketLoggingOutput struct { + _ struct{} `type:"structure"` + + // Container for logging information. Presence of this element indicates that + // logging is enabled. Parameters TargetBucket and TargetPrefix are required + // in this case. + LoggingEnabled *LoggingEnabled `type:"structure"` +} + +// String returns the string representation +func (s GetBucketLoggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketLoggingOutput) GoString() string { + return s.String() +} + +// SetLoggingEnabled sets the LoggingEnabled field's value. +func (s *GetBucketLoggingOutput) SetLoggingEnabled(v *LoggingEnabled) *GetBucketLoggingOutput { + s.LoggingEnabled = v + return s +} + +type GetBucketMetricsConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the metrics configuration to retrieve. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ID used to identify the metrics configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketMetricsConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketMetricsConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetBucketMetricsConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketMetricsConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketMetricsConfigurationInput) SetBucket(v string) *GetBucketMetricsConfigurationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketMetricsConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *GetBucketMetricsConfigurationInput) SetId(v string) *GetBucketMetricsConfigurationInput { + s.Id = &v + return s +} + +type GetBucketMetricsConfigurationOutput struct { + _ struct{} `type:"structure" payload:"MetricsConfiguration"` + + // Specifies the metrics configuration. + MetricsConfiguration *MetricsConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetBucketMetricsConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketMetricsConfigurationOutput) GoString() string { + return s.String() +} + +// SetMetricsConfiguration sets the MetricsConfiguration field's value. +func (s *GetBucketMetricsConfigurationOutput) SetMetricsConfiguration(v *MetricsConfiguration) *GetBucketMetricsConfigurationOutput { + s.MetricsConfiguration = v + return s +} + +type GetBucketNotificationConfigurationRequest struct { + _ struct{} `type:"structure"` + + // Name of the bucket to get the notification configuration for. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketNotificationConfigurationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketNotificationConfigurationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketNotificationConfigurationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketNotificationConfigurationRequest"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketNotificationConfigurationRequest) SetBucket(v string) *GetBucketNotificationConfigurationRequest { + s.Bucket = &v + return s +} + +func (s *GetBucketNotificationConfigurationRequest) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketPolicyInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetBucketPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketPolicyInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketPolicyInput) SetBucket(v string) *GetBucketPolicyInput { + s.Bucket = &v + return s +} + +func (s *GetBucketPolicyInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketPolicyOutput struct { + _ struct{} `type:"structure" payload:"Policy"` + + // The bucket policy as a JSON document. + Policy *string `type:"string"` +} + +// String returns the string representation +func (s GetBucketPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketPolicyOutput) GoString() string { + return s.String() +} + +// SetPolicy sets the Policy field's value. +func (s *GetBucketPolicyOutput) SetPolicy(v string) *GetBucketPolicyOutput { + s.Policy = &v + return s +} + +type GetBucketReplicationInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketReplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketReplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketReplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketReplicationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketReplicationInput) SetBucket(v string) *GetBucketReplicationInput { + s.Bucket = &v + return s +} + +func (s *GetBucketReplicationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketReplicationOutput struct { + _ struct{} `type:"structure" payload:"ReplicationConfiguration"` + + // Container for replication rules. You can add as many as 1,000 rules. Total + // replication configuration size can be up to 2 MB. + ReplicationConfiguration *ReplicationConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetBucketReplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketReplicationOutput) GoString() string { + return s.String() +} + +// SetReplicationConfiguration sets the ReplicationConfiguration field's value. 
+func (s *GetBucketReplicationOutput) SetReplicationConfiguration(v *ReplicationConfiguration) *GetBucketReplicationOutput { + s.ReplicationConfiguration = v + return s +} + +type GetBucketRequestPaymentInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketRequestPaymentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketRequestPaymentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketRequestPaymentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketRequestPaymentInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketRequestPaymentInput) SetBucket(v string) *GetBucketRequestPaymentInput { + s.Bucket = &v + return s +} + +func (s *GetBucketRequestPaymentInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketRequestPaymentOutput struct { + _ struct{} `type:"structure"` + + // Specifies who pays for the download and request fees. + Payer *string `type:"string" enum:"Payer"` +} + +// String returns the string representation +func (s GetBucketRequestPaymentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketRequestPaymentOutput) GoString() string { + return s.String() +} + +// SetPayer sets the Payer field's value. +func (s *GetBucketRequestPaymentOutput) SetPayer(v string) *GetBucketRequestPaymentOutput { + s.Payer = &v + return s +} + +type GetBucketTaggingInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketTaggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketTaggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketTaggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketTaggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketTaggingInput) SetBucket(v string) *GetBucketTaggingInput { + s.Bucket = &v + return s +} + +func (s *GetBucketTaggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketTaggingOutput struct { + _ struct{} `type:"structure"` + + // TagSet is a required field + TagSet []*Tag `locationNameList:"Tag" type:"list" required:"true"` +} + +// String returns the string representation +func (s GetBucketTaggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketTaggingOutput) GoString() string { + return s.String() +} + +// SetTagSet sets the TagSet field's value. 
+func (s *GetBucketTaggingOutput) SetTagSet(v []*Tag) *GetBucketTaggingOutput { + s.TagSet = v + return s +} + +type GetBucketVersioningInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketVersioningInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketVersioningInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketVersioningInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketVersioningInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketVersioningInput) SetBucket(v string) *GetBucketVersioningInput { + s.Bucket = &v + return s +} + +func (s *GetBucketVersioningInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketVersioningOutput struct { + _ struct{} `type:"structure"` + + // Specifies whether MFA delete is enabled in the bucket versioning configuration. + // This element is only returned if the bucket has been configured with MFA + // delete. If the bucket has never been so configured, this element is not returned. + MFADelete *string `locationName:"MfaDelete" type:"string" enum:"MFADeleteStatus"` + + // The versioning state of the bucket. + Status *string `type:"string" enum:"BucketVersioningStatus"` +} + +// String returns the string representation +func (s GetBucketVersioningOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketVersioningOutput) GoString() string { + return s.String() +} + +// SetMFADelete sets the MFADelete field's value. +func (s *GetBucketVersioningOutput) SetMFADelete(v string) *GetBucketVersioningOutput { + s.MFADelete = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetBucketVersioningOutput) SetStatus(v string) *GetBucketVersioningOutput { + s.Status = &v + return s +} + +type GetBucketWebsiteInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketWebsiteInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketWebsiteInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketWebsiteInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketWebsiteInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
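+
+// exampleVersioningEnabled is an illustrative sketch added for clarity; it is
+// not part of the generated SDK surface. The Status and MFADelete fields of
+// GetBucketVersioningOutput are pointer-valued and may be absent (MFADelete,
+// for example, is only returned when the bucket has been configured with MFA
+// delete), so callers nil-check before comparing against the
+// BucketVersioningStatus values; "Enabled" is the enabled-status string.
+func exampleVersioningEnabled(out *GetBucketVersioningOutput) bool {
+	return out.Status != nil && *out.Status == "Enabled"
+}
+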
+func (s *GetBucketWebsiteInput) SetBucket(v string) *GetBucketWebsiteInput { + s.Bucket = &v + return s +} + +func (s *GetBucketWebsiteInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketWebsiteOutput struct { + _ struct{} `type:"structure"` + + ErrorDocument *ErrorDocument `type:"structure"` + + IndexDocument *IndexDocument `type:"structure"` + + RedirectAllRequestsTo *RedirectAllRequestsTo `type:"structure"` + + RoutingRules []*RoutingRule `locationNameList:"RoutingRule" type:"list"` +} + +// String returns the string representation +func (s GetBucketWebsiteOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketWebsiteOutput) GoString() string { + return s.String() +} + +// SetErrorDocument sets the ErrorDocument field's value. +func (s *GetBucketWebsiteOutput) SetErrorDocument(v *ErrorDocument) *GetBucketWebsiteOutput { + s.ErrorDocument = v + return s +} + +// SetIndexDocument sets the IndexDocument field's value. +func (s *GetBucketWebsiteOutput) SetIndexDocument(v *IndexDocument) *GetBucketWebsiteOutput { + s.IndexDocument = v + return s +} + +// SetRedirectAllRequestsTo sets the RedirectAllRequestsTo field's value. +func (s *GetBucketWebsiteOutput) SetRedirectAllRequestsTo(v *RedirectAllRequestsTo) *GetBucketWebsiteOutput { + s.RedirectAllRequestsTo = v + return s +} + +// SetRoutingRules sets the RoutingRules field's value. +func (s *GetBucketWebsiteOutput) SetRoutingRules(v []*RoutingRule) *GetBucketWebsiteOutput { + s.RoutingRules = v + return s +} + +type GetObjectAclInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // VersionId used to reference a specific version of the object. + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s GetObjectAclInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectAclInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetObjectAclInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetObjectAclInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *GetObjectAclInput) SetBucket(v string) *GetObjectAclInput { + s.Bucket = &v + return s +} + +func (s *GetObjectAclInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *GetObjectAclInput) SetKey(v string) *GetObjectAclInput { + s.Key = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *GetObjectAclInput) SetRequestPayer(v string) *GetObjectAclInput { + s.RequestPayer = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetObjectAclInput) SetVersionId(v string) *GetObjectAclInput { + s.VersionId = &v + return s +} + +type GetObjectAclOutput struct { + _ struct{} `type:"structure"` + + // A list of grants. + Grants []*Grant `locationName:"AccessControlList" locationNameList:"Grant" type:"list"` + + Owner *Owner `type:"structure"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` +} + +// String returns the string representation +func (s GetObjectAclOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectAclOutput) GoString() string { + return s.String() +} + +// SetGrants sets the Grants field's value. +func (s *GetObjectAclOutput) SetGrants(v []*Grant) *GetObjectAclOutput { + s.Grants = v + return s +} + +// SetOwner sets the Owner field's value. +func (s *GetObjectAclOutput) SetOwner(v *Owner) *GetObjectAclOutput { + s.Owner = v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *GetObjectAclOutput) SetRequestCharged(v string) *GetObjectAclOutput { + s.RequestCharged = &v + return s +} + +type GetObjectInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Return the object only if its entity tag (ETag) is the same as the one specified, + // otherwise return a 412 (precondition failed). + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` + + // Return the object only if it has been modified since the specified time, + // otherwise return a 304 (not modified). + IfModifiedSince *time.Time `location:"header" locationName:"If-Modified-Since" type:"timestamp" timestampFormat:"rfc822"` + + // Return the object only if its entity tag (ETag) is different from the one + // specified, otherwise return a 304 (not modified). + IfNoneMatch *string `location:"header" locationName:"If-None-Match" type:"string"` + + // Return the object only if it has not been modified since the specified time, + // otherwise return a 412 (precondition failed). + IfUnmodifiedSince *time.Time `location:"header" locationName:"If-Unmodified-Since" type:"timestamp" timestampFormat:"rfc822"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Part number of the object being read. This is a positive integer between + // 1 and 10,000. Effectively performs a 'ranged' GET request for the part specified. + // Useful for downloading just a part of an object. + PartNumber *int64 `location:"querystring" locationName:"partNumber" type:"integer"` + + // Downloads the specified range bytes of an object. 
For more information about + // the HTTP Range header, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35. + Range *string `location:"header" locationName:"Range" type:"string"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Sets the Cache-Control header of the response. + ResponseCacheControl *string `location:"querystring" locationName:"response-cache-control" type:"string"` + + // Sets the Content-Disposition header of the response + ResponseContentDisposition *string `location:"querystring" locationName:"response-content-disposition" type:"string"` + + // Sets the Content-Encoding header of the response. + ResponseContentEncoding *string `location:"querystring" locationName:"response-content-encoding" type:"string"` + + // Sets the Content-Language header of the response. + ResponseContentLanguage *string `location:"querystring" locationName:"response-content-language" type:"string"` + + // Sets the Content-Type header of the response. + ResponseContentType *string `location:"querystring" locationName:"response-content-type" type:"string"` + + // Sets the Expires header of the response. + ResponseExpires *time.Time `location:"querystring" locationName:"response-expires" type:"timestamp" timestampFormat:"iso8601"` + + // Specifies the algorithm to use to when encrypting the object (e.g., AES256). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting + // data. This value is used to store the object and then it is discarded; Amazon + // does not store the encryption key. The key must be appropriate for use with + // the algorithm specified in the x-amz-server-side​-encryption​-customer-algorithm + // header. + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // VersionId used to reference a specific version of the object. + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s GetObjectInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
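+
+// exampleRangedGetObjectInput is an illustrative sketch added for clarity; it
+// is not part of the generated SDK surface. It chains the setters defined
+// below to request only the first KiB of an object via the Range header
+// described above, and validates the input before use. The bucket and key
+// are placeholder values.
+func exampleRangedGetObjectInput() (*GetObjectInput, error) {
+	in := new(GetObjectInput).
+		SetBucket("example-bucket").
+		SetKey("logs/2018/10/01.gz").
+		SetRange("bytes=0-1023")
+	if err := in.Validate(); err != nil {
+		return nil, err
+	}
+	return in, nil
+}
+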
+func (s *GetObjectInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetObjectInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetObjectInput) SetBucket(v string) *GetObjectInput { + s.Bucket = &v + return s +} + +func (s *GetObjectInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetIfMatch sets the IfMatch field's value. +func (s *GetObjectInput) SetIfMatch(v string) *GetObjectInput { + s.IfMatch = &v + return s +} + +// SetIfModifiedSince sets the IfModifiedSince field's value. +func (s *GetObjectInput) SetIfModifiedSince(v time.Time) *GetObjectInput { + s.IfModifiedSince = &v + return s +} + +// SetIfNoneMatch sets the IfNoneMatch field's value. +func (s *GetObjectInput) SetIfNoneMatch(v string) *GetObjectInput { + s.IfNoneMatch = &v + return s +} + +// SetIfUnmodifiedSince sets the IfUnmodifiedSince field's value. +func (s *GetObjectInput) SetIfUnmodifiedSince(v time.Time) *GetObjectInput { + s.IfUnmodifiedSince = &v + return s +} + +// SetKey sets the Key field's value. +func (s *GetObjectInput) SetKey(v string) *GetObjectInput { + s.Key = &v + return s +} + +// SetPartNumber sets the PartNumber field's value. +func (s *GetObjectInput) SetPartNumber(v int64) *GetObjectInput { + s.PartNumber = &v + return s +} + +// SetRange sets the Range field's value. +func (s *GetObjectInput) SetRange(v string) *GetObjectInput { + s.Range = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *GetObjectInput) SetRequestPayer(v string) *GetObjectInput { + s.RequestPayer = &v + return s +} + +// SetResponseCacheControl sets the ResponseCacheControl field's value. +func (s *GetObjectInput) SetResponseCacheControl(v string) *GetObjectInput { + s.ResponseCacheControl = &v + return s +} + +// SetResponseContentDisposition sets the ResponseContentDisposition field's value. +func (s *GetObjectInput) SetResponseContentDisposition(v string) *GetObjectInput { + s.ResponseContentDisposition = &v + return s +} + +// SetResponseContentEncoding sets the ResponseContentEncoding field's value. +func (s *GetObjectInput) SetResponseContentEncoding(v string) *GetObjectInput { + s.ResponseContentEncoding = &v + return s +} + +// SetResponseContentLanguage sets the ResponseContentLanguage field's value. +func (s *GetObjectInput) SetResponseContentLanguage(v string) *GetObjectInput { + s.ResponseContentLanguage = &v + return s +} + +// SetResponseContentType sets the ResponseContentType field's value. +func (s *GetObjectInput) SetResponseContentType(v string) *GetObjectInput { + s.ResponseContentType = &v + return s +} + +// SetResponseExpires sets the ResponseExpires field's value. +func (s *GetObjectInput) SetResponseExpires(v time.Time) *GetObjectInput { + s.ResponseExpires = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *GetObjectInput) SetSSECustomerAlgorithm(v string) *GetObjectInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. 
+func (s *GetObjectInput) SetSSECustomerKey(v string) *GetObjectInput { + s.SSECustomerKey = &v + return s +} + +func (s *GetObjectInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *GetObjectInput) SetSSECustomerKeyMD5(v string) *GetObjectInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetObjectInput) SetVersionId(v string) *GetObjectInput { + s.VersionId = &v + return s +} + +type GetObjectOutput struct { + _ struct{} `type:"structure" payload:"Body"` + + AcceptRanges *string `location:"header" locationName:"accept-ranges" type:"string"` + + // Object data. + Body io.ReadCloser `type:"blob"` + + // Specifies caching behavior along the request/reply chain. + CacheControl *string `location:"header" locationName:"Cache-Control" type:"string"` + + // Specifies presentational information for the object. + ContentDisposition *string `location:"header" locationName:"Content-Disposition" type:"string"` + + // Specifies what content encodings have been applied to the object and thus + // what decoding mechanisms must be applied to obtain the media-type referenced + // by the Content-Type header field. + ContentEncoding *string `location:"header" locationName:"Content-Encoding" type:"string"` + + // The language the content is in. + ContentLanguage *string `location:"header" locationName:"Content-Language" type:"string"` + + // Size of the body in bytes. + ContentLength *int64 `location:"header" locationName:"Content-Length" type:"long"` + + // The portion of the object returned in the response. + ContentRange *string `location:"header" locationName:"Content-Range" type:"string"` + + // A standard MIME type describing the format of the object data. + ContentType *string `location:"header" locationName:"Content-Type" type:"string"` + + // Specifies whether the object retrieved was (true) or was not (false) a Delete + // Marker. If false, this response header does not appear in the response. + DeleteMarker *bool `location:"header" locationName:"x-amz-delete-marker" type:"boolean"` + + // An ETag is an opaque identifier assigned by a web server to a specific version + // of a resource found at a URL + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // If the object expiration is configured (see PUT Bucket lifecycle), the response + // includes this header. It includes the expiry-date and rule-id key value pairs + // providing object expiration information. The value of the rule-id is URL + // encoded. + Expiration *string `location:"header" locationName:"x-amz-expiration" type:"string"` + + // The date and time at which the object is no longer cacheable. + Expires *string `location:"header" locationName:"Expires" type:"string"` + + // Last modified date of the object + LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp" timestampFormat:"rfc822"` + + // A map of metadata to store with the object in S3. + Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"` + + // This is set to the number of metadata entries not returned in x-amz-meta + // headers. This can happen if you create metadata using an API like SOAP that + // supports more flexible metadata than the REST API. For example, using SOAP, + // you can create metadata whose values are not legal HTTP headers. 
+ MissingMeta *int64 `location:"header" locationName:"x-amz-missing-meta" type:"integer"` + + // The count of parts this object has. + PartsCount *int64 `location:"header" locationName:"x-amz-mp-parts-count" type:"integer"` + + ReplicationStatus *string `location:"header" locationName:"x-amz-replication-status" type:"string" enum:"ReplicationStatus"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // Provides information about object restoration operation and expiration time + // of the restored object copy. + Restore *string `location:"header" locationName:"x-amz-restore" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` + + // The number of tags, if any, on the object. + TagCount *int64 `location:"header" locationName:"x-amz-tagging-count" type:"integer"` + + // Version of the object. + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` + + // If the bucket is configured as a website, redirects requests for this object + // to another object in the same bucket or to an external URL. Amazon S3 stores + // the value of this header in the object metadata. + WebsiteRedirectLocation *string `location:"header" locationName:"x-amz-website-redirect-location" type:"string"` +} + +// String returns the string representation +func (s GetObjectOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectOutput) GoString() string { + return s.String() +} + +// SetAcceptRanges sets the AcceptRanges field's value. +func (s *GetObjectOutput) SetAcceptRanges(v string) *GetObjectOutput { + s.AcceptRanges = &v + return s +} + +// SetBody sets the Body field's value. +func (s *GetObjectOutput) SetBody(v io.ReadCloser) *GetObjectOutput { + s.Body = v + return s +} + +// SetCacheControl sets the CacheControl field's value. +func (s *GetObjectOutput) SetCacheControl(v string) *GetObjectOutput { + s.CacheControl = &v + return s +} + +// SetContentDisposition sets the ContentDisposition field's value. 
+func (s *GetObjectOutput) SetContentDisposition(v string) *GetObjectOutput { + s.ContentDisposition = &v + return s +} + +// SetContentEncoding sets the ContentEncoding field's value. +func (s *GetObjectOutput) SetContentEncoding(v string) *GetObjectOutput { + s.ContentEncoding = &v + return s +} + +// SetContentLanguage sets the ContentLanguage field's value. +func (s *GetObjectOutput) SetContentLanguage(v string) *GetObjectOutput { + s.ContentLanguage = &v + return s +} + +// SetContentLength sets the ContentLength field's value. +func (s *GetObjectOutput) SetContentLength(v int64) *GetObjectOutput { + s.ContentLength = &v + return s +} + +// SetContentRange sets the ContentRange field's value. +func (s *GetObjectOutput) SetContentRange(v string) *GetObjectOutput { + s.ContentRange = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *GetObjectOutput) SetContentType(v string) *GetObjectOutput { + s.ContentType = &v + return s +} + +// SetDeleteMarker sets the DeleteMarker field's value. +func (s *GetObjectOutput) SetDeleteMarker(v bool) *GetObjectOutput { + s.DeleteMarker = &v + return s +} + +// SetETag sets the ETag field's value. +func (s *GetObjectOutput) SetETag(v string) *GetObjectOutput { + s.ETag = &v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *GetObjectOutput) SetExpiration(v string) *GetObjectOutput { + s.Expiration = &v + return s +} + +// SetExpires sets the Expires field's value. +func (s *GetObjectOutput) SetExpires(v string) *GetObjectOutput { + s.Expires = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *GetObjectOutput) SetLastModified(v time.Time) *GetObjectOutput { + s.LastModified = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *GetObjectOutput) SetMetadata(v map[string]*string) *GetObjectOutput { + s.Metadata = v + return s +} + +// SetMissingMeta sets the MissingMeta field's value. +func (s *GetObjectOutput) SetMissingMeta(v int64) *GetObjectOutput { + s.MissingMeta = &v + return s +} + +// SetPartsCount sets the PartsCount field's value. +func (s *GetObjectOutput) SetPartsCount(v int64) *GetObjectOutput { + s.PartsCount = &v + return s +} + +// SetReplicationStatus sets the ReplicationStatus field's value. +func (s *GetObjectOutput) SetReplicationStatus(v string) *GetObjectOutput { + s.ReplicationStatus = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *GetObjectOutput) SetRequestCharged(v string) *GetObjectOutput { + s.RequestCharged = &v + return s +} + +// SetRestore sets the Restore field's value. +func (s *GetObjectOutput) SetRestore(v string) *GetObjectOutput { + s.Restore = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *GetObjectOutput) SetSSECustomerAlgorithm(v string) *GetObjectOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *GetObjectOutput) SetSSECustomerKeyMD5(v string) *GetObjectOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *GetObjectOutput) SetSSEKMSKeyId(v string) *GetObjectOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *GetObjectOutput) SetServerSideEncryption(v string) *GetObjectOutput { + s.ServerSideEncryption = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. 
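+
+// exampleDrainObjectBody is an illustrative sketch added for clarity; it is
+// not part of the generated SDK surface. GetObjectOutput.Body is a streaming
+// io.ReadCloser, so callers copy it somewhere and close it when done; this
+// helper does both.
+func exampleDrainObjectBody(out *GetObjectOutput, dst io.Writer) (int64, error) {
+	defer out.Body.Close()
+	return io.Copy(dst, out.Body)
+}
+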
+func (s *GetObjectOutput) SetStorageClass(v string) *GetObjectOutput { + s.StorageClass = &v + return s +} + +// SetTagCount sets the TagCount field's value. +func (s *GetObjectOutput) SetTagCount(v int64) *GetObjectOutput { + s.TagCount = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetObjectOutput) SetVersionId(v string) *GetObjectOutput { + s.VersionId = &v + return s +} + +// SetWebsiteRedirectLocation sets the WebsiteRedirectLocation field's value. +func (s *GetObjectOutput) SetWebsiteRedirectLocation(v string) *GetObjectOutput { + s.WebsiteRedirectLocation = &v + return s +} + +type GetObjectTaggingInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s GetObjectTaggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectTaggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetObjectTaggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetObjectTaggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetObjectTaggingInput) SetBucket(v string) *GetObjectTaggingInput { + s.Bucket = &v + return s +} + +func (s *GetObjectTaggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *GetObjectTaggingInput) SetKey(v string) *GetObjectTaggingInput { + s.Key = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetObjectTaggingInput) SetVersionId(v string) *GetObjectTaggingInput { + s.VersionId = &v + return s +} + +type GetObjectTaggingOutput struct { + _ struct{} `type:"structure"` + + // TagSet is a required field + TagSet []*Tag `locationNameList:"Tag" type:"list" required:"true"` + + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s GetObjectTaggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectTaggingOutput) GoString() string { + return s.String() +} + +// SetTagSet sets the TagSet field's value. +func (s *GetObjectTaggingOutput) SetTagSet(v []*Tag) *GetObjectTaggingOutput { + s.TagSet = v + return s +} + +// SetVersionId sets the VersionId field's value. 
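+
+// exampleObjectTagMap is an illustrative sketch added for clarity; it is not
+// part of the generated SDK surface. It flattens the TagSet returned in a
+// GetObjectTaggingOutput into a plain map, nil-checking the pointer-valued
+// Tag fields along the way.
+func exampleObjectTagMap(out *GetObjectTaggingOutput) map[string]string {
+	tags := make(map[string]string, len(out.TagSet))
+	for _, t := range out.TagSet {
+		if t == nil || t.Key == nil || t.Value == nil {
+			continue
+		}
+		tags[*t.Key] = *t.Value
+	}
+	return tags
+}
+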
+func (s *GetObjectTaggingOutput) SetVersionId(v string) *GetObjectTaggingOutput { + s.VersionId = &v + return s +} + +type GetObjectTorrentInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` +} + +// String returns the string representation +func (s GetObjectTorrentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectTorrentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetObjectTorrentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetObjectTorrentInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetObjectTorrentInput) SetBucket(v string) *GetObjectTorrentInput { + s.Bucket = &v + return s +} + +func (s *GetObjectTorrentInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *GetObjectTorrentInput) SetKey(v string) *GetObjectTorrentInput { + s.Key = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *GetObjectTorrentInput) SetRequestPayer(v string) *GetObjectTorrentInput { + s.RequestPayer = &v + return s +} + +type GetObjectTorrentOutput struct { + _ struct{} `type:"structure" payload:"Body"` + + Body io.ReadCloser `type:"blob"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` +} + +// String returns the string representation +func (s GetObjectTorrentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetObjectTorrentOutput) GoString() string { + return s.String() +} + +// SetBody sets the Body field's value. +func (s *GetObjectTorrentOutput) SetBody(v io.ReadCloser) *GetObjectTorrentOutput { + s.Body = v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *GetObjectTorrentOutput) SetRequestCharged(v string) *GetObjectTorrentOutput { + s.RequestCharged = &v + return s +} + +type GlacierJobParameters struct { + _ struct{} `type:"structure"` + + // Glacier retrieval tier at which the restore will be processed. 
+ // + // Tier is a required field + Tier *string `type:"string" required:"true" enum:"Tier"` +} + +// String returns the string representation +func (s GlacierJobParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlacierJobParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GlacierJobParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlacierJobParameters"} + if s.Tier == nil { + invalidParams.Add(request.NewErrParamRequired("Tier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTier sets the Tier field's value. +func (s *GlacierJobParameters) SetTier(v string) *GlacierJobParameters { + s.Tier = &v + return s +} + +type Grant struct { + _ struct{} `type:"structure"` + + Grantee *Grantee `type:"structure" xmlPrefix:"xsi" xmlURI:"http://www.w3.org/2001/XMLSchema-instance"` + + // Specifies the permission given to the grantee. + Permission *string `type:"string" enum:"Permission"` +} + +// String returns the string representation +func (s Grant) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Grant) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Grant) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Grant"} + if s.Grantee != nil { + if err := s.Grantee.Validate(); err != nil { + invalidParams.AddNested("Grantee", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGrantee sets the Grantee field's value. +func (s *Grant) SetGrantee(v *Grantee) *Grant { + s.Grantee = v + return s +} + +// SetPermission sets the Permission field's value. +func (s *Grant) SetPermission(v string) *Grant { + s.Permission = &v + return s +} + +type Grantee struct { + _ struct{} `type:"structure" xmlPrefix:"xsi" xmlURI:"http://www.w3.org/2001/XMLSchema-instance"` + + // Screen name of the grantee. + DisplayName *string `type:"string"` + + // Email address of the grantee. + EmailAddress *string `type:"string"` + + // The canonical user ID of the grantee. + ID *string `type:"string"` + + // Type of grantee + // + // Type is a required field + Type *string `locationName:"xsi:type" type:"string" xmlAttribute:"true" required:"true" enum:"Type"` + + // URI of the grantee group. + URI *string `type:"string"` +} + +// String returns the string representation +func (s Grantee) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Grantee) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Grantee) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Grantee"} + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDisplayName sets the DisplayName field's value. +func (s *Grantee) SetDisplayName(v string) *Grantee { + s.DisplayName = &v + return s +} + +// SetEmailAddress sets the EmailAddress field's value. +func (s *Grantee) SetEmailAddress(v string) *Grantee { + s.EmailAddress = &v + return s +} + +// SetID sets the ID field's value. 
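+
+// exampleCanonicalUserGrant is an illustrative sketch added for clarity; it
+// is not part of the generated SDK surface. It assembles a Grant from a
+// Grantee as defined above: Type is the only required Grantee field, and for
+// a "CanonicalUser" grantee the ID field carries the canonical user ID. The
+// ID here is a placeholder value; "READ" is one of the Permission enum values.
+func exampleCanonicalUserGrant() (*Grant, error) {
+	grantee := new(Grantee).
+		SetType("CanonicalUser").
+		SetID("79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be")
+	if err := grantee.Validate(); err != nil {
+		return nil, err
+	}
+	return new(Grant).SetGrantee(grantee).SetPermission("READ"), nil
+}
+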
+func (s *Grantee) SetID(v string) *Grantee { + s.ID = &v + return s +} + +// SetType sets the Type field's value. +func (s *Grantee) SetType(v string) *Grantee { + s.Type = &v + return s +} + +// SetURI sets the URI field's value. +func (s *Grantee) SetURI(v string) *Grantee { + s.URI = &v + return s +} + +type HeadBucketInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s HeadBucketInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HeadBucketInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *HeadBucketInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HeadBucketInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *HeadBucketInput) SetBucket(v string) *HeadBucketInput { + s.Bucket = &v + return s +} + +func (s *HeadBucketInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type HeadBucketOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s HeadBucketOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HeadBucketOutput) GoString() string { + return s.String() +} + +type HeadObjectInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Return the object only if its entity tag (ETag) is the same as the one specified, + // otherwise return a 412 (precondition failed). + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` + + // Return the object only if it has been modified since the specified time, + // otherwise return a 304 (not modified). + IfModifiedSince *time.Time `location:"header" locationName:"If-Modified-Since" type:"timestamp" timestampFormat:"rfc822"` + + // Return the object only if its entity tag (ETag) is different from the one + // specified, otherwise return a 304 (not modified). + IfNoneMatch *string `location:"header" locationName:"If-None-Match" type:"string"` + + // Return the object only if it has not been modified since the specified time, + // otherwise return a 412 (precondition failed). + IfUnmodifiedSince *time.Time `location:"header" locationName:"If-Unmodified-Since" type:"timestamp" timestampFormat:"rfc822"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Part number of the object being read. This is a positive integer between + // 1 and 10,000. Effectively performs a 'ranged' HEAD request for the part specified. + // Useful querying about the size of the part and the number of parts in this + // object. + PartNumber *int64 `location:"querystring" locationName:"partNumber" type:"integer"` + + // Downloads the specified range bytes of an object. For more information about + // the HTTP Range header, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35. 
+ Range *string `location:"header" locationName:"Range" type:"string"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Specifies the algorithm to use to when encrypting the object (e.g., AES256). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting + // data. This value is used to store the object and then it is discarded; Amazon + // does not store the encryption key. The key must be appropriate for use with + // the algorithm specified in the x-amz-server-side​-encryption​-customer-algorithm + // header. + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // VersionId used to reference a specific version of the object. + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s HeadObjectInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HeadObjectInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *HeadObjectInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HeadObjectInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *HeadObjectInput) SetBucket(v string) *HeadObjectInput { + s.Bucket = &v + return s +} + +func (s *HeadObjectInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetIfMatch sets the IfMatch field's value. +func (s *HeadObjectInput) SetIfMatch(v string) *HeadObjectInput { + s.IfMatch = &v + return s +} + +// SetIfModifiedSince sets the IfModifiedSince field's value. +func (s *HeadObjectInput) SetIfModifiedSince(v time.Time) *HeadObjectInput { + s.IfModifiedSince = &v + return s +} + +// SetIfNoneMatch sets the IfNoneMatch field's value. +func (s *HeadObjectInput) SetIfNoneMatch(v string) *HeadObjectInput { + s.IfNoneMatch = &v + return s +} + +// SetIfUnmodifiedSince sets the IfUnmodifiedSince field's value. +func (s *HeadObjectInput) SetIfUnmodifiedSince(v time.Time) *HeadObjectInput { + s.IfUnmodifiedSince = &v + return s +} + +// SetKey sets the Key field's value. 
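+
+// exampleHeadObjectInput is an illustrative sketch added for clarity; it is
+// not part of the generated SDK surface. It builds a HeadObjectInput for one
+// specific version of an object and validates it, mirroring the
+// GetObjectInput pattern but without transferring the object body. The
+// bucket, key, and version ID are placeholder values.
+func exampleHeadObjectInput() (*HeadObjectInput, error) {
+	in := new(HeadObjectInput).
+		SetBucket("example-bucket").
+		SetKey("reports/2018-10.csv").
+		SetVersionId("example-version-id")
+	if err := in.Validate(); err != nil {
+		return nil, err
+	}
+	return in, nil
+}
+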
+func (s *HeadObjectInput) SetKey(v string) *HeadObjectInput { + s.Key = &v + return s +} + +// SetPartNumber sets the PartNumber field's value. +func (s *HeadObjectInput) SetPartNumber(v int64) *HeadObjectInput { + s.PartNumber = &v + return s +} + +// SetRange sets the Range field's value. +func (s *HeadObjectInput) SetRange(v string) *HeadObjectInput { + s.Range = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *HeadObjectInput) SetRequestPayer(v string) *HeadObjectInput { + s.RequestPayer = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *HeadObjectInput) SetSSECustomerAlgorithm(v string) *HeadObjectInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *HeadObjectInput) SetSSECustomerKey(v string) *HeadObjectInput { + s.SSECustomerKey = &v + return s +} + +func (s *HeadObjectInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *HeadObjectInput) SetSSECustomerKeyMD5(v string) *HeadObjectInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *HeadObjectInput) SetVersionId(v string) *HeadObjectInput { + s.VersionId = &v + return s +} + +type HeadObjectOutput struct { + _ struct{} `type:"structure"` + + AcceptRanges *string `location:"header" locationName:"accept-ranges" type:"string"` + + // Specifies caching behavior along the request/reply chain. + CacheControl *string `location:"header" locationName:"Cache-Control" type:"string"` + + // Specifies presentational information for the object. + ContentDisposition *string `location:"header" locationName:"Content-Disposition" type:"string"` + + // Specifies what content encodings have been applied to the object and thus + // what decoding mechanisms must be applied to obtain the media-type referenced + // by the Content-Type header field. + ContentEncoding *string `location:"header" locationName:"Content-Encoding" type:"string"` + + // The language the content is in. + ContentLanguage *string `location:"header" locationName:"Content-Language" type:"string"` + + // Size of the body in bytes. + ContentLength *int64 `location:"header" locationName:"Content-Length" type:"long"` + + // A standard MIME type describing the format of the object data. + ContentType *string `location:"header" locationName:"Content-Type" type:"string"` + + // Specifies whether the object retrieved was (true) or was not (false) a Delete + // Marker. If false, this response header does not appear in the response. + DeleteMarker *bool `location:"header" locationName:"x-amz-delete-marker" type:"boolean"` + + // An ETag is an opaque identifier assigned by a web server to a specific version + // of a resource found at a URL + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // If the object expiration is configured (see PUT Bucket lifecycle), the response + // includes this header. It includes the expiry-date and rule-id key value pairs + // providing object expiration information. The value of the rule-id is URL + // encoded. + Expiration *string `location:"header" locationName:"x-amz-expiration" type:"string"` + + // The date and time at which the object is no longer cacheable. 
+ Expires *string `location:"header" locationName:"Expires" type:"string"` + + // Last modified date of the object + LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp" timestampFormat:"rfc822"` + + // A map of metadata to store with the object in S3. + Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"` + + // This is set to the number of metadata entries not returned in x-amz-meta + // headers. This can happen if you create metadata using an API like SOAP that + // supports more flexible metadata than the REST API. For example, using SOAP, + // you can create metadata whose values are not legal HTTP headers. + MissingMeta *int64 `location:"header" locationName:"x-amz-missing-meta" type:"integer"` + + // The count of parts this object has. + PartsCount *int64 `location:"header" locationName:"x-amz-mp-parts-count" type:"integer"` + + ReplicationStatus *string `location:"header" locationName:"x-amz-replication-status" type:"string" enum:"ReplicationStatus"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // Provides information about object restoration operation and expiration time + // of the restored object copy. + Restore *string `location:"header" locationName:"x-amz-restore" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` + + // Version of the object. + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` + + // If the bucket is configured as a website, redirects requests for this object + // to another object in the same bucket or to an external URL. Amazon S3 stores + // the value of this header in the object metadata. + WebsiteRedirectLocation *string `location:"header" locationName:"x-amz-website-redirect-location" type:"string"` +} + +// String returns the string representation +func (s HeadObjectOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HeadObjectOutput) GoString() string { + return s.String() +} + +// SetAcceptRanges sets the AcceptRanges field's value. 
+func (s *HeadObjectOutput) SetAcceptRanges(v string) *HeadObjectOutput { + s.AcceptRanges = &v + return s +} + +// SetCacheControl sets the CacheControl field's value. +func (s *HeadObjectOutput) SetCacheControl(v string) *HeadObjectOutput { + s.CacheControl = &v + return s +} + +// SetContentDisposition sets the ContentDisposition field's value. +func (s *HeadObjectOutput) SetContentDisposition(v string) *HeadObjectOutput { + s.ContentDisposition = &v + return s +} + +// SetContentEncoding sets the ContentEncoding field's value. +func (s *HeadObjectOutput) SetContentEncoding(v string) *HeadObjectOutput { + s.ContentEncoding = &v + return s +} + +// SetContentLanguage sets the ContentLanguage field's value. +func (s *HeadObjectOutput) SetContentLanguage(v string) *HeadObjectOutput { + s.ContentLanguage = &v + return s +} + +// SetContentLength sets the ContentLength field's value. +func (s *HeadObjectOutput) SetContentLength(v int64) *HeadObjectOutput { + s.ContentLength = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *HeadObjectOutput) SetContentType(v string) *HeadObjectOutput { + s.ContentType = &v + return s +} + +// SetDeleteMarker sets the DeleteMarker field's value. +func (s *HeadObjectOutput) SetDeleteMarker(v bool) *HeadObjectOutput { + s.DeleteMarker = &v + return s +} + +// SetETag sets the ETag field's value. +func (s *HeadObjectOutput) SetETag(v string) *HeadObjectOutput { + s.ETag = &v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *HeadObjectOutput) SetExpiration(v string) *HeadObjectOutput { + s.Expiration = &v + return s +} + +// SetExpires sets the Expires field's value. +func (s *HeadObjectOutput) SetExpires(v string) *HeadObjectOutput { + s.Expires = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *HeadObjectOutput) SetLastModified(v time.Time) *HeadObjectOutput { + s.LastModified = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *HeadObjectOutput) SetMetadata(v map[string]*string) *HeadObjectOutput { + s.Metadata = v + return s +} + +// SetMissingMeta sets the MissingMeta field's value. +func (s *HeadObjectOutput) SetMissingMeta(v int64) *HeadObjectOutput { + s.MissingMeta = &v + return s +} + +// SetPartsCount sets the PartsCount field's value. +func (s *HeadObjectOutput) SetPartsCount(v int64) *HeadObjectOutput { + s.PartsCount = &v + return s +} + +// SetReplicationStatus sets the ReplicationStatus field's value. +func (s *HeadObjectOutput) SetReplicationStatus(v string) *HeadObjectOutput { + s.ReplicationStatus = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *HeadObjectOutput) SetRequestCharged(v string) *HeadObjectOutput { + s.RequestCharged = &v + return s +} + +// SetRestore sets the Restore field's value. +func (s *HeadObjectOutput) SetRestore(v string) *HeadObjectOutput { + s.Restore = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *HeadObjectOutput) SetSSECustomerAlgorithm(v string) *HeadObjectOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *HeadObjectOutput) SetSSECustomerKeyMD5(v string) *HeadObjectOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. 
+func (s *HeadObjectOutput) SetSSEKMSKeyId(v string) *HeadObjectOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *HeadObjectOutput) SetServerSideEncryption(v string) *HeadObjectOutput { + s.ServerSideEncryption = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *HeadObjectOutput) SetStorageClass(v string) *HeadObjectOutput { + s.StorageClass = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *HeadObjectOutput) SetVersionId(v string) *HeadObjectOutput { + s.VersionId = &v + return s +} + +// SetWebsiteRedirectLocation sets the WebsiteRedirectLocation field's value. +func (s *HeadObjectOutput) SetWebsiteRedirectLocation(v string) *HeadObjectOutput { + s.WebsiteRedirectLocation = &v + return s +} + +type IndexDocument struct { + _ struct{} `type:"structure"` + + // A suffix that is appended to a request that is for a directory on the website + // endpoint (e.g. if the suffix is index.html and you make a request to samplebucket/images/ + // the data that is returned will be for the object with the key name images/index.html) + // The suffix must not be empty and must not include a slash character. + // + // Suffix is a required field + Suffix *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s IndexDocument) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s IndexDocument) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *IndexDocument) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "IndexDocument"} + if s.Suffix == nil { + invalidParams.Add(request.NewErrParamRequired("Suffix")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSuffix sets the Suffix field's value. +func (s *IndexDocument) SetSuffix(v string) *IndexDocument { + s.Suffix = &v + return s +} + +type Initiator struct { + _ struct{} `type:"structure"` + + // Name of the Principal. + DisplayName *string `type:"string"` + + // If the principal is an AWS account, it provides the Canonical User ID. If + // the principal is an IAM User, it provides a user ARN value. + ID *string `type:"string"` +} + +// String returns the string representation +func (s Initiator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Initiator) GoString() string { + return s.String() +} + +// SetDisplayName sets the DisplayName field's value. +func (s *Initiator) SetDisplayName(v string) *Initiator { + s.DisplayName = &v + return s +} + +// SetID sets the ID field's value. +func (s *Initiator) SetID(v string) *Initiator { + s.ID = &v + return s +} + +// Describes the serialization format of the object. +type InputSerialization struct { + _ struct{} `type:"structure"` + + // Describes the serialization of a CSV-encoded object. + CSV *CSVInput `type:"structure"` + + // Specifies object's compression format. Valid values: NONE, GZIP. Default + // Value: NONE. + CompressionType *string `type:"string" enum:"CompressionType"` + + // Specifies JSON as object's input serialization format. 
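+ //
+ // A hedged usage sketch (editor's addition, not generated SDK documentation):
+ // selecting from a gzip-compressed, line-delimited JSON object might be set
+ // up roughly like this, using the setters defined below; the exact enum
+ // strings ("GZIP", "LINES") are assumptions based on the valid values noted
+ // in these comments.
+ //
+ //	in := &InputSerialization{}
+ //	in.SetCompressionType("GZIP")
+ //	in.SetJSON((&JSONInput{}).SetType("LINES"))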
+ JSON *JSONInput `type:"structure"`
+}
+
+// String returns the string representation
+func (s InputSerialization) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InputSerialization) GoString() string {
+ return s.String()
+}
+
+// SetCSV sets the CSV field's value.
+func (s *InputSerialization) SetCSV(v *CSVInput) *InputSerialization {
+ s.CSV = v
+ return s
+}
+
+// SetCompressionType sets the CompressionType field's value.
+func (s *InputSerialization) SetCompressionType(v string) *InputSerialization {
+ s.CompressionType = &v
+ return s
+}
+
+// SetJSON sets the JSON field's value.
+func (s *InputSerialization) SetJSON(v *JSONInput) *InputSerialization {
+ s.JSON = v
+ return s
+}
+
+type InventoryConfiguration struct {
+ _ struct{} `type:"structure"`
+
+ // Contains information about where to publish the inventory results.
+ //
+ // Destination is a required field
+ Destination *InventoryDestination `type:"structure" required:"true"`
+
+ // Specifies an inventory filter. The inventory only includes objects that meet
+ // the filter's criteria.
+ Filter *InventoryFilter `type:"structure"`
+
+ // The ID used to identify the inventory configuration.
+ //
+ // Id is a required field
+ Id *string `type:"string" required:"true"`
+
+ // Specifies which object version(s) to include in the inventory results.
+ //
+ // IncludedObjectVersions is a required field
+ IncludedObjectVersions *string `type:"string" required:"true" enum:"InventoryIncludedObjectVersions"`
+
+ // Specifies whether the inventory is enabled or disabled.
+ //
+ // IsEnabled is a required field
+ IsEnabled *bool `type:"boolean" required:"true"`
+
+ // Contains the optional fields that are included in the inventory results.
+ OptionalFields []*string `locationNameList:"Field" type:"list"`
+
+ // Specifies the schedule for generating inventory results.
+ //
+ // Schedule is a required field
+ Schedule *InventorySchedule `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s InventoryConfiguration) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InventoryConfiguration) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *InventoryConfiguration) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "InventoryConfiguration"}
+ if s.Destination == nil {
+ invalidParams.Add(request.NewErrParamRequired("Destination"))
+ }
+ if s.Id == nil {
+ invalidParams.Add(request.NewErrParamRequired("Id"))
+ }
+ if s.IncludedObjectVersions == nil {
+ invalidParams.Add(request.NewErrParamRequired("IncludedObjectVersions"))
+ }
+ if s.IsEnabled == nil {
+ invalidParams.Add(request.NewErrParamRequired("IsEnabled"))
+ }
+ if s.Schedule == nil {
+ invalidParams.Add(request.NewErrParamRequired("Schedule"))
+ }
+ if s.Destination != nil {
+ if err := s.Destination.Validate(); err != nil {
+ invalidParams.AddNested("Destination", err.(request.ErrInvalidParams))
+ }
+ }
+ if s.Filter != nil {
+ if err := s.Filter.Validate(); err != nil {
+ invalidParams.AddNested("Filter", err.(request.ErrInvalidParams))
+ }
+ }
+ if s.Schedule != nil {
+ if err := s.Schedule.Validate(); err != nil {
+ invalidParams.AddNested("Schedule", err.(request.ErrInvalidParams))
+ }
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetDestination sets the Destination field's value.
+func (s *InventoryConfiguration) SetDestination(v *InventoryDestination) *InventoryConfiguration {
+ s.Destination = v
+ return s
+}
+
+// SetFilter sets the Filter field's value.
+func (s *InventoryConfiguration) SetFilter(v *InventoryFilter) *InventoryConfiguration {
+ s.Filter = v
+ return s
+}
+
+// SetId sets the Id field's value.
+func (s *InventoryConfiguration) SetId(v string) *InventoryConfiguration {
+ s.Id = &v
+ return s
+}
+
+// SetIncludedObjectVersions sets the IncludedObjectVersions field's value.
+func (s *InventoryConfiguration) SetIncludedObjectVersions(v string) *InventoryConfiguration {
+ s.IncludedObjectVersions = &v
+ return s
+}
+
+// SetIsEnabled sets the IsEnabled field's value.
+func (s *InventoryConfiguration) SetIsEnabled(v bool) *InventoryConfiguration {
+ s.IsEnabled = &v
+ return s
+}
+
+// SetOptionalFields sets the OptionalFields field's value.
+func (s *InventoryConfiguration) SetOptionalFields(v []*string) *InventoryConfiguration {
+ s.OptionalFields = v
+ return s
+}
+
+// SetSchedule sets the Schedule field's value.
+func (s *InventoryConfiguration) SetSchedule(v *InventorySchedule) *InventoryConfiguration {
+ s.Schedule = v
+ return s
+}
+
+type InventoryDestination struct {
+ _ struct{} `type:"structure"`
+
+ // Contains the bucket name, file format, bucket owner (optional), and prefix
+ // (optional) where inventory results are published.
+ //
+ // S3BucketDestination is a required field
+ S3BucketDestination *InventoryS3BucketDestination `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s InventoryDestination) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InventoryDestination) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *InventoryDestination) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "InventoryDestination"}
+ if s.S3BucketDestination == nil {
+ invalidParams.Add(request.NewErrParamRequired("S3BucketDestination"))
+ }
+ if s.S3BucketDestination != nil {
+ if err := s.S3BucketDestination.Validate(); err != nil {
+ invalidParams.AddNested("S3BucketDestination", err.(request.ErrInvalidParams))
+ }
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetS3BucketDestination sets the S3BucketDestination field's value.
+func (s *InventoryDestination) SetS3BucketDestination(v *InventoryS3BucketDestination) *InventoryDestination {
+ s.S3BucketDestination = v
+ return s
+}
+
+// Contains the type of server-side encryption used to encrypt the inventory
+// results.
+type InventoryEncryption struct {
+ _ struct{} `type:"structure"`
+
+ // Specifies the use of SSE-KMS to encrypt delivered Inventory reports.
+ SSEKMS *SSEKMS `locationName:"SSE-KMS" type:"structure"`
+
+ // Specifies the use of SSE-S3 to encrypt delivered Inventory reports.
+ SSES3 *SSES3 `locationName:"SSE-S3" type:"structure"`
+}
+
+// String returns the string representation
+func (s InventoryEncryption) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InventoryEncryption) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *InventoryEncryption) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryEncryption"} + if s.SSEKMS != nil { + if err := s.SSEKMS.Validate(); err != nil { + invalidParams.AddNested("SSEKMS", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSSEKMS sets the SSEKMS field's value. +func (s *InventoryEncryption) SetSSEKMS(v *SSEKMS) *InventoryEncryption { + s.SSEKMS = v + return s +} + +// SetSSES3 sets the SSES3 field's value. +func (s *InventoryEncryption) SetSSES3(v *SSES3) *InventoryEncryption { + s.SSES3 = v + return s +} + +type InventoryFilter struct { + _ struct{} `type:"structure"` + + // The prefix that an object must have to be included in the inventory results. + // + // Prefix is a required field + Prefix *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s InventoryFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InventoryFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryFilter"} + if s.Prefix == nil { + invalidParams.Add(request.NewErrParamRequired("Prefix")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPrefix sets the Prefix field's value. +func (s *InventoryFilter) SetPrefix(v string) *InventoryFilter { + s.Prefix = &v + return s +} + +type InventoryS3BucketDestination struct { + _ struct{} `type:"structure"` + + // The ID of the account that owns the destination bucket. + AccountId *string `type:"string"` + + // The Amazon resource name (ARN) of the bucket where inventory results will + // be published. + // + // Bucket is a required field + Bucket *string `type:"string" required:"true"` + + // Contains the type of server-side encryption used to encrypt the inventory + // results. + Encryption *InventoryEncryption `type:"structure"` + + // Specifies the output format of the inventory results. + // + // Format is a required field + Format *string `type:"string" required:"true" enum:"InventoryFormat"` + + // The prefix that is prepended to all inventory results. + Prefix *string `type:"string"` +} + +// String returns the string representation +func (s InventoryS3BucketDestination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryS3BucketDestination) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InventoryS3BucketDestination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryS3BucketDestination"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Format == nil { + invalidParams.Add(request.NewErrParamRequired("Format")) + } + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *InventoryS3BucketDestination) SetAccountId(v string) *InventoryS3BucketDestination { + s.AccountId = &v + return s +} + +// SetBucket sets the Bucket field's value. 
+func (s *InventoryS3BucketDestination) SetBucket(v string) *InventoryS3BucketDestination { + s.Bucket = &v + return s +} + +func (s *InventoryS3BucketDestination) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetEncryption sets the Encryption field's value. +func (s *InventoryS3BucketDestination) SetEncryption(v *InventoryEncryption) *InventoryS3BucketDestination { + s.Encryption = v + return s +} + +// SetFormat sets the Format field's value. +func (s *InventoryS3BucketDestination) SetFormat(v string) *InventoryS3BucketDestination { + s.Format = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *InventoryS3BucketDestination) SetPrefix(v string) *InventoryS3BucketDestination { + s.Prefix = &v + return s +} + +type InventorySchedule struct { + _ struct{} `type:"structure"` + + // Specifies how frequently inventory results are produced. + // + // Frequency is a required field + Frequency *string `type:"string" required:"true" enum:"InventoryFrequency"` +} + +// String returns the string representation +func (s InventorySchedule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventorySchedule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InventorySchedule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventorySchedule"} + if s.Frequency == nil { + invalidParams.Add(request.NewErrParamRequired("Frequency")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFrequency sets the Frequency field's value. +func (s *InventorySchedule) SetFrequency(v string) *InventorySchedule { + s.Frequency = &v + return s +} + +type JSONInput struct { + _ struct{} `type:"structure"` + + // The type of JSON. Valid values: Document, Lines. + Type *string `type:"string" enum:"JSONType"` +} + +// String returns the string representation +func (s JSONInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JSONInput) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *JSONInput) SetType(v string) *JSONInput { + s.Type = &v + return s +} + +type JSONOutput struct { + _ struct{} `type:"structure"` + + // The value used to separate individual records in the output. + RecordDelimiter *string `type:"string"` +} + +// String returns the string representation +func (s JSONOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JSONOutput) GoString() string { + return s.String() +} + +// SetRecordDelimiter sets the RecordDelimiter field's value. +func (s *JSONOutput) SetRecordDelimiter(v string) *JSONOutput { + s.RecordDelimiter = &v + return s +} + +// Container for object key name prefix and suffix filtering rules. +type KeyFilter struct { + _ struct{} `type:"structure"` + + // A list of containers for key value pair that defines the criteria for the + // filter rule. + FilterRules []*FilterRule `locationName:"FilterRule" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s KeyFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KeyFilter) GoString() string { + return s.String() +} + +// SetFilterRules sets the FilterRules field's value. 
+func (s *KeyFilter) SetFilterRules(v []*FilterRule) *KeyFilter { + s.FilterRules = v + return s +} + +// Container for specifying the AWS Lambda notification configuration. +type LambdaFunctionConfiguration struct { + _ struct{} `type:"structure"` + + // Events is a required field + Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"` + + // Container for object key name filtering rules. For information about key + // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + Filter *NotificationConfigurationFilter `type:"structure"` + + // Optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + // Lambda cloud function ARN that Amazon S3 can invoke when it detects events + // of the specified type. + // + // LambdaFunctionArn is a required field + LambdaFunctionArn *string `locationName:"CloudFunction" type:"string" required:"true"` +} + +// String returns the string representation +func (s LambdaFunctionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LambdaFunctionConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LambdaFunctionConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LambdaFunctionConfiguration"} + if s.Events == nil { + invalidParams.Add(request.NewErrParamRequired("Events")) + } + if s.LambdaFunctionArn == nil { + invalidParams.Add(request.NewErrParamRequired("LambdaFunctionArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEvents sets the Events field's value. +func (s *LambdaFunctionConfiguration) SetEvents(v []*string) *LambdaFunctionConfiguration { + s.Events = v + return s +} + +// SetFilter sets the Filter field's value. +func (s *LambdaFunctionConfiguration) SetFilter(v *NotificationConfigurationFilter) *LambdaFunctionConfiguration { + s.Filter = v + return s +} + +// SetId sets the Id field's value. +func (s *LambdaFunctionConfiguration) SetId(v string) *LambdaFunctionConfiguration { + s.Id = &v + return s +} + +// SetLambdaFunctionArn sets the LambdaFunctionArn field's value. +func (s *LambdaFunctionConfiguration) SetLambdaFunctionArn(v string) *LambdaFunctionConfiguration { + s.LambdaFunctionArn = &v + return s +} + +type LifecycleConfiguration struct { + _ struct{} `type:"structure"` + + // Rules is a required field + Rules []*Rule `locationName:"Rule" type:"list" flattened:"true" required:"true"` +} + +// String returns the string representation +func (s LifecycleConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *LifecycleConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LifecycleConfiguration"} + if s.Rules == nil { + invalidParams.Add(request.NewErrParamRequired("Rules")) + } + if s.Rules != nil { + for i, v := range s.Rules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Rules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRules sets the Rules field's value. +func (s *LifecycleConfiguration) SetRules(v []*Rule) *LifecycleConfiguration { + s.Rules = v + return s +} + +type LifecycleExpiration struct { + _ struct{} `type:"structure"` + + // Indicates at what date the object is to be moved or deleted. Should be in + // GMT ISO 8601 Format. + Date *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // Indicates the lifetime, in days, of the objects that are subject to the rule. + // The value must be a non-zero positive integer. + Days *int64 `type:"integer"` + + // Indicates whether Amazon S3 will remove a delete marker with no noncurrent + // versions. If set to true, the delete marker will be expired; if set to false + // the policy takes no action. This cannot be specified with Days or Date in + // a Lifecycle Expiration Policy. + ExpiredObjectDeleteMarker *bool `type:"boolean"` +} + +// String returns the string representation +func (s LifecycleExpiration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleExpiration) GoString() string { + return s.String() +} + +// SetDate sets the Date field's value. +func (s *LifecycleExpiration) SetDate(v time.Time) *LifecycleExpiration { + s.Date = &v + return s +} + +// SetDays sets the Days field's value. +func (s *LifecycleExpiration) SetDays(v int64) *LifecycleExpiration { + s.Days = &v + return s +} + +// SetExpiredObjectDeleteMarker sets the ExpiredObjectDeleteMarker field's value. +func (s *LifecycleExpiration) SetExpiredObjectDeleteMarker(v bool) *LifecycleExpiration { + s.ExpiredObjectDeleteMarker = &v + return s +} + +type LifecycleRule struct { + _ struct{} `type:"structure"` + + // Specifies the days since the initiation of an Incomplete Multipart Upload + // that Lifecycle will wait before permanently removing all parts of the upload. + AbortIncompleteMultipartUpload *AbortIncompleteMultipartUpload `type:"structure"` + + Expiration *LifecycleExpiration `type:"structure"` + + // The Filter is used to identify objects that a Lifecycle Rule applies to. + // A Filter must have exactly one of Prefix, Tag, or And specified. + Filter *LifecycleRuleFilter `type:"structure"` + + // Unique identifier for the rule. The value cannot be longer than 255 characters. + ID *string `type:"string"` + + // Specifies when noncurrent object versions expire. Upon expiration, Amazon + // S3 permanently deletes the noncurrent object versions. You set this lifecycle + // configuration action on a bucket that has versioning enabled (or suspended) + // to request that Amazon S3 delete noncurrent object versions at a specific + // period in the object's lifetime. + NoncurrentVersionExpiration *NoncurrentVersionExpiration `type:"structure"` + + NoncurrentVersionTransitions []*NoncurrentVersionTransition `locationName:"NoncurrentVersionTransition" type:"list" flattened:"true"` + + // Prefix identifying one or more objects to which the rule applies. This is + // deprecated; use Filter instead. 
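+ //
+ // A hedged sketch (editor's addition, not generated SDK documentation): the
+ // same restriction is normally expressed through Filter rather than this
+ // deprecated field, for example
+ //
+ //	rule := (&LifecycleRule{}).
+ //		SetStatus("Enabled").
+ //		SetFilter((&LifecycleRuleFilter{}).SetPrefix("logs/"))
+ //
+ // where "logs/" is an illustrative prefix only.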
+ Prefix *string `deprecated:"true" type:"string"` + + // If 'Enabled', the rule is currently being applied. If 'Disabled', the rule + // is not currently being applied. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"ExpirationStatus"` + + Transitions []*Transition `locationName:"Transition" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s LifecycleRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LifecycleRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LifecycleRule"} + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.Filter != nil { + if err := s.Filter.Validate(); err != nil { + invalidParams.AddNested("Filter", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAbortIncompleteMultipartUpload sets the AbortIncompleteMultipartUpload field's value. +func (s *LifecycleRule) SetAbortIncompleteMultipartUpload(v *AbortIncompleteMultipartUpload) *LifecycleRule { + s.AbortIncompleteMultipartUpload = v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *LifecycleRule) SetExpiration(v *LifecycleExpiration) *LifecycleRule { + s.Expiration = v + return s +} + +// SetFilter sets the Filter field's value. +func (s *LifecycleRule) SetFilter(v *LifecycleRuleFilter) *LifecycleRule { + s.Filter = v + return s +} + +// SetID sets the ID field's value. +func (s *LifecycleRule) SetID(v string) *LifecycleRule { + s.ID = &v + return s +} + +// SetNoncurrentVersionExpiration sets the NoncurrentVersionExpiration field's value. +func (s *LifecycleRule) SetNoncurrentVersionExpiration(v *NoncurrentVersionExpiration) *LifecycleRule { + s.NoncurrentVersionExpiration = v + return s +} + +// SetNoncurrentVersionTransitions sets the NoncurrentVersionTransitions field's value. +func (s *LifecycleRule) SetNoncurrentVersionTransitions(v []*NoncurrentVersionTransition) *LifecycleRule { + s.NoncurrentVersionTransitions = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *LifecycleRule) SetPrefix(v string) *LifecycleRule { + s.Prefix = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *LifecycleRule) SetStatus(v string) *LifecycleRule { + s.Status = &v + return s +} + +// SetTransitions sets the Transitions field's value. +func (s *LifecycleRule) SetTransitions(v []*Transition) *LifecycleRule { + s.Transitions = v + return s +} + +// This is used in a Lifecycle Rule Filter to apply a logical AND to two or +// more predicates. The Lifecycle Rule will apply to any object matching all +// of the predicates configured inside the And operator. +type LifecycleRuleAndOperator struct { + _ struct{} `type:"structure"` + + Prefix *string `type:"string"` + + // All of these tags must exist in the object's tag set in order for the rule + // to apply. 
+ Tags []*Tag `locationName:"Tag" locationNameList:"Tag" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s LifecycleRuleAndOperator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleRuleAndOperator) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LifecycleRuleAndOperator) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LifecycleRuleAndOperator"} + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPrefix sets the Prefix field's value. +func (s *LifecycleRuleAndOperator) SetPrefix(v string) *LifecycleRuleAndOperator { + s.Prefix = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *LifecycleRuleAndOperator) SetTags(v []*Tag) *LifecycleRuleAndOperator { + s.Tags = v + return s +} + +// The Filter is used to identify objects that a Lifecycle Rule applies to. +// A Filter must have exactly one of Prefix, Tag, or And specified. +type LifecycleRuleFilter struct { + _ struct{} `type:"structure"` + + // This is used in a Lifecycle Rule Filter to apply a logical AND to two or + // more predicates. The Lifecycle Rule will apply to any object matching all + // of the predicates configured inside the And operator. + And *LifecycleRuleAndOperator `type:"structure"` + + // Prefix identifying one or more objects to which the rule applies. + Prefix *string `type:"string"` + + // This tag must exist in the object's tag set in order for the rule to apply. + Tag *Tag `type:"structure"` +} + +// String returns the string representation +func (s LifecycleRuleFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleRuleFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LifecycleRuleFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LifecycleRuleFilter"} + if s.And != nil { + if err := s.And.Validate(); err != nil { + invalidParams.AddNested("And", err.(request.ErrInvalidParams)) + } + } + if s.Tag != nil { + if err := s.Tag.Validate(); err != nil { + invalidParams.AddNested("Tag", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAnd sets the And field's value. +func (s *LifecycleRuleFilter) SetAnd(v *LifecycleRuleAndOperator) *LifecycleRuleFilter { + s.And = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *LifecycleRuleFilter) SetPrefix(v string) *LifecycleRuleFilter { + s.Prefix = &v + return s +} + +// SetTag sets the Tag field's value. +func (s *LifecycleRuleFilter) SetTag(v *Tag) *LifecycleRuleFilter { + s.Tag = v + return s +} + +type ListBucketAnalyticsConfigurationsInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket from which analytics configurations are retrieved. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ContinuationToken that represents a placeholder from where this request + // should begin. 
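+ //
+ // A hedged pagination sketch (editor's addition, not generated SDK documentation):
+ // when a previous response reports IsTruncated, its NextContinuationToken is
+ // passed back in here; "my-bucket" and prevOut are placeholders.
+ //
+ //	in := (&ListBucketAnalyticsConfigurationsInput{}).
+ //		SetBucket("my-bucket").
+ //		SetContinuationToken(*prevOut.NextContinuationToken)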
+ ContinuationToken *string `location:"querystring" locationName:"continuation-token" type:"string"` +} + +// String returns the string representation +func (s ListBucketAnalyticsConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketAnalyticsConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListBucketAnalyticsConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListBucketAnalyticsConfigurationsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListBucketAnalyticsConfigurationsInput) SetBucket(v string) *ListBucketAnalyticsConfigurationsInput { + s.Bucket = &v + return s +} + +func (s *ListBucketAnalyticsConfigurationsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetContinuationToken sets the ContinuationToken field's value. +func (s *ListBucketAnalyticsConfigurationsInput) SetContinuationToken(v string) *ListBucketAnalyticsConfigurationsInput { + s.ContinuationToken = &v + return s +} + +type ListBucketAnalyticsConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // The list of analytics configurations for a bucket. + AnalyticsConfigurationList []*AnalyticsConfiguration `locationName:"AnalyticsConfiguration" type:"list" flattened:"true"` + + // The ContinuationToken that represents where this request began. + ContinuationToken *string `type:"string"` + + // Indicates whether the returned list of analytics configurations is complete. + // A value of true indicates that the list is not complete and the NextContinuationToken + // will be provided for a subsequent request. + IsTruncated *bool `type:"boolean"` + + // NextContinuationToken is sent when isTruncated is true, which indicates that + // there are more analytics configurations to list. The next request must include + // this NextContinuationToken. The token is obfuscated and is not a usable value. + NextContinuationToken *string `type:"string"` +} + +// String returns the string representation +func (s ListBucketAnalyticsConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketAnalyticsConfigurationsOutput) GoString() string { + return s.String() +} + +// SetAnalyticsConfigurationList sets the AnalyticsConfigurationList field's value. +func (s *ListBucketAnalyticsConfigurationsOutput) SetAnalyticsConfigurationList(v []*AnalyticsConfiguration) *ListBucketAnalyticsConfigurationsOutput { + s.AnalyticsConfigurationList = v + return s +} + +// SetContinuationToken sets the ContinuationToken field's value. +func (s *ListBucketAnalyticsConfigurationsOutput) SetContinuationToken(v string) *ListBucketAnalyticsConfigurationsOutput { + s.ContinuationToken = &v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListBucketAnalyticsConfigurationsOutput) SetIsTruncated(v bool) *ListBucketAnalyticsConfigurationsOutput { + s.IsTruncated = &v + return s +} + +// SetNextContinuationToken sets the NextContinuationToken field's value. 
+func (s *ListBucketAnalyticsConfigurationsOutput) SetNextContinuationToken(v string) *ListBucketAnalyticsConfigurationsOutput { + s.NextContinuationToken = &v + return s +} + +type ListBucketInventoryConfigurationsInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the inventory configurations to retrieve. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The marker used to continue an inventory configuration listing that has been + // truncated. Use the NextContinuationToken from a previously truncated list + // response to continue the listing. The continuation token is an opaque value + // that Amazon S3 understands. + ContinuationToken *string `location:"querystring" locationName:"continuation-token" type:"string"` +} + +// String returns the string representation +func (s ListBucketInventoryConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketInventoryConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListBucketInventoryConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListBucketInventoryConfigurationsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListBucketInventoryConfigurationsInput) SetBucket(v string) *ListBucketInventoryConfigurationsInput { + s.Bucket = &v + return s +} + +func (s *ListBucketInventoryConfigurationsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetContinuationToken sets the ContinuationToken field's value. +func (s *ListBucketInventoryConfigurationsInput) SetContinuationToken(v string) *ListBucketInventoryConfigurationsInput { + s.ContinuationToken = &v + return s +} + +type ListBucketInventoryConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // If sent in the request, the marker that is used as a starting point for this + // inventory configuration list response. + ContinuationToken *string `type:"string"` + + // The list of inventory configurations for a bucket. + InventoryConfigurationList []*InventoryConfiguration `locationName:"InventoryConfiguration" type:"list" flattened:"true"` + + // Indicates whether the returned list of inventory configurations is truncated + // in this response. A value of true indicates that the list is truncated. + IsTruncated *bool `type:"boolean"` + + // The marker used to continue this inventory configuration listing. Use the + // NextContinuationToken from this response to continue the listing in a subsequent + // request. The continuation token is an opaque value that Amazon S3 understands. + NextContinuationToken *string `type:"string"` +} + +// String returns the string representation +func (s ListBucketInventoryConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketInventoryConfigurationsOutput) GoString() string { + return s.String() +} + +// SetContinuationToken sets the ContinuationToken field's value. 
+func (s *ListBucketInventoryConfigurationsOutput) SetContinuationToken(v string) *ListBucketInventoryConfigurationsOutput { + s.ContinuationToken = &v + return s +} + +// SetInventoryConfigurationList sets the InventoryConfigurationList field's value. +func (s *ListBucketInventoryConfigurationsOutput) SetInventoryConfigurationList(v []*InventoryConfiguration) *ListBucketInventoryConfigurationsOutput { + s.InventoryConfigurationList = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListBucketInventoryConfigurationsOutput) SetIsTruncated(v bool) *ListBucketInventoryConfigurationsOutput { + s.IsTruncated = &v + return s +} + +// SetNextContinuationToken sets the NextContinuationToken field's value. +func (s *ListBucketInventoryConfigurationsOutput) SetNextContinuationToken(v string) *ListBucketInventoryConfigurationsOutput { + s.NextContinuationToken = &v + return s +} + +type ListBucketMetricsConfigurationsInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the metrics configurations to retrieve. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The marker that is used to continue a metrics configuration listing that + // has been truncated. Use the NextContinuationToken from a previously truncated + // list response to continue the listing. The continuation token is an opaque + // value that Amazon S3 understands. + ContinuationToken *string `location:"querystring" locationName:"continuation-token" type:"string"` +} + +// String returns the string representation +func (s ListBucketMetricsConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketMetricsConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListBucketMetricsConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListBucketMetricsConfigurationsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListBucketMetricsConfigurationsInput) SetBucket(v string) *ListBucketMetricsConfigurationsInput { + s.Bucket = &v + return s +} + +func (s *ListBucketMetricsConfigurationsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetContinuationToken sets the ContinuationToken field's value. +func (s *ListBucketMetricsConfigurationsInput) SetContinuationToken(v string) *ListBucketMetricsConfigurationsInput { + s.ContinuationToken = &v + return s +} + +type ListBucketMetricsConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // The marker that is used as a starting point for this metrics configuration + // list response. This value is present if it was sent in the request. + ContinuationToken *string `type:"string"` + + // Indicates whether the returned list of metrics configurations is complete. + // A value of true indicates that the list is not complete and the NextContinuationToken + // will be provided for a subsequent request. + IsTruncated *bool `type:"boolean"` + + // The list of metrics configurations for a bucket. 
+ MetricsConfigurationList []*MetricsConfiguration `locationName:"MetricsConfiguration" type:"list" flattened:"true"` + + // The marker used to continue a metrics configuration listing that has been + // truncated. Use the NextContinuationToken from a previously truncated list + // response to continue the listing. The continuation token is an opaque value + // that Amazon S3 understands. + NextContinuationToken *string `type:"string"` +} + +// String returns the string representation +func (s ListBucketMetricsConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketMetricsConfigurationsOutput) GoString() string { + return s.String() +} + +// SetContinuationToken sets the ContinuationToken field's value. +func (s *ListBucketMetricsConfigurationsOutput) SetContinuationToken(v string) *ListBucketMetricsConfigurationsOutput { + s.ContinuationToken = &v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListBucketMetricsConfigurationsOutput) SetIsTruncated(v bool) *ListBucketMetricsConfigurationsOutput { + s.IsTruncated = &v + return s +} + +// SetMetricsConfigurationList sets the MetricsConfigurationList field's value. +func (s *ListBucketMetricsConfigurationsOutput) SetMetricsConfigurationList(v []*MetricsConfiguration) *ListBucketMetricsConfigurationsOutput { + s.MetricsConfigurationList = v + return s +} + +// SetNextContinuationToken sets the NextContinuationToken field's value. +func (s *ListBucketMetricsConfigurationsOutput) SetNextContinuationToken(v string) *ListBucketMetricsConfigurationsOutput { + s.NextContinuationToken = &v + return s +} + +type ListBucketsInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ListBucketsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketsInput) GoString() string { + return s.String() +} + +type ListBucketsOutput struct { + _ struct{} `type:"structure"` + + Buckets []*Bucket `locationNameList:"Bucket" type:"list"` + + Owner *Owner `type:"structure"` +} + +// String returns the string representation +func (s ListBucketsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListBucketsOutput) GoString() string { + return s.String() +} + +// SetBuckets sets the Buckets field's value. +func (s *ListBucketsOutput) SetBuckets(v []*Bucket) *ListBucketsOutput { + s.Buckets = v + return s +} + +// SetOwner sets the Owner field's value. +func (s *ListBucketsOutput) SetOwner(v *Owner) *ListBucketsOutput { + s.Owner = v + return s +} + +type ListMultipartUploadsInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Character you use to group keys. + Delimiter *string `location:"querystring" locationName:"delimiter" type:"string"` + + // Requests Amazon S3 to encode the object keys in the response and specifies + // the encoding method to use. An object key may contain any Unicode character; + // however, XML 1.0 parser cannot parse some characters, such as characters + // with an ASCII value from 0 to 10. For characters that are not supported in + // XML 1.0, you can add this parameter to request that Amazon S3 encode the + // keys in the response. 
+ EncodingType *string `location:"querystring" locationName:"encoding-type" type:"string" enum:"EncodingType"` + + // Together with upload-id-marker, this parameter specifies the multipart upload + // after which listing should begin. + KeyMarker *string `location:"querystring" locationName:"key-marker" type:"string"` + + // Sets the maximum number of multipart uploads, from 1 to 1,000, to return + // in the response body. 1,000 is the maximum number of uploads that can be + // returned in a response. + MaxUploads *int64 `location:"querystring" locationName:"max-uploads" type:"integer"` + + // Lists in-progress uploads only for those keys that begin with the specified + // prefix. + Prefix *string `location:"querystring" locationName:"prefix" type:"string"` + + // Together with key-marker, specifies the multipart upload after which listing + // should begin. If key-marker is not specified, the upload-id-marker parameter + // is ignored. + UploadIdMarker *string `location:"querystring" locationName:"upload-id-marker" type:"string"` +} + +// String returns the string representation +func (s ListMultipartUploadsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMultipartUploadsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListMultipartUploadsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListMultipartUploadsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListMultipartUploadsInput) SetBucket(v string) *ListMultipartUploadsInput { + s.Bucket = &v + return s +} + +func (s *ListMultipartUploadsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListMultipartUploadsInput) SetDelimiter(v string) *ListMultipartUploadsInput { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListMultipartUploadsInput) SetEncodingType(v string) *ListMultipartUploadsInput { + s.EncodingType = &v + return s +} + +// SetKeyMarker sets the KeyMarker field's value. +func (s *ListMultipartUploadsInput) SetKeyMarker(v string) *ListMultipartUploadsInput { + s.KeyMarker = &v + return s +} + +// SetMaxUploads sets the MaxUploads field's value. +func (s *ListMultipartUploadsInput) SetMaxUploads(v int64) *ListMultipartUploadsInput { + s.MaxUploads = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListMultipartUploadsInput) SetPrefix(v string) *ListMultipartUploadsInput { + s.Prefix = &v + return s +} + +// SetUploadIdMarker sets the UploadIdMarker field's value. +func (s *ListMultipartUploadsInput) SetUploadIdMarker(v string) *ListMultipartUploadsInput { + s.UploadIdMarker = &v + return s +} + +type ListMultipartUploadsOutput struct { + _ struct{} `type:"structure"` + + // Name of the bucket to which the multipart upload was initiated. + Bucket *string `type:"string"` + + CommonPrefixes []*CommonPrefix `type:"list" flattened:"true"` + + Delimiter *string `type:"string"` + + // Encoding type used by Amazon S3 to encode object keys in the response. + EncodingType *string `type:"string" enum:"EncodingType"` + + // Indicates whether the returned list of multipart uploads is truncated. 
A + // value of true indicates that the list was truncated. The list can be truncated + // if the number of multipart uploads exceeds the limit allowed or specified + // by max uploads. + IsTruncated *bool `type:"boolean"` + + // The key at or after which the listing began. + KeyMarker *string `type:"string"` + + // Maximum number of multipart uploads that could have been included in the + // response. + MaxUploads *int64 `type:"integer"` + + // When a list is truncated, this element specifies the value that should be + // used for the key-marker request parameter in a subsequent request. + NextKeyMarker *string `type:"string"` + + // When a list is truncated, this element specifies the value that should be + // used for the upload-id-marker request parameter in a subsequent request. + NextUploadIdMarker *string `type:"string"` + + // When a prefix is provided in the request, this field contains the specified + // prefix. The result contains only keys starting with the specified prefix. + Prefix *string `type:"string"` + + // Upload ID after which listing began. + UploadIdMarker *string `type:"string"` + + Uploads []*MultipartUpload `locationName:"Upload" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s ListMultipartUploadsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMultipartUploadsOutput) GoString() string { + return s.String() +} + +// SetBucket sets the Bucket field's value. +func (s *ListMultipartUploadsOutput) SetBucket(v string) *ListMultipartUploadsOutput { + s.Bucket = &v + return s +} + +func (s *ListMultipartUploadsOutput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCommonPrefixes sets the CommonPrefixes field's value. +func (s *ListMultipartUploadsOutput) SetCommonPrefixes(v []*CommonPrefix) *ListMultipartUploadsOutput { + s.CommonPrefixes = v + return s +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListMultipartUploadsOutput) SetDelimiter(v string) *ListMultipartUploadsOutput { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListMultipartUploadsOutput) SetEncodingType(v string) *ListMultipartUploadsOutput { + s.EncodingType = &v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListMultipartUploadsOutput) SetIsTruncated(v bool) *ListMultipartUploadsOutput { + s.IsTruncated = &v + return s +} + +// SetKeyMarker sets the KeyMarker field's value. +func (s *ListMultipartUploadsOutput) SetKeyMarker(v string) *ListMultipartUploadsOutput { + s.KeyMarker = &v + return s +} + +// SetMaxUploads sets the MaxUploads field's value. +func (s *ListMultipartUploadsOutput) SetMaxUploads(v int64) *ListMultipartUploadsOutput { + s.MaxUploads = &v + return s +} + +// SetNextKeyMarker sets the NextKeyMarker field's value. +func (s *ListMultipartUploadsOutput) SetNextKeyMarker(v string) *ListMultipartUploadsOutput { + s.NextKeyMarker = &v + return s +} + +// SetNextUploadIdMarker sets the NextUploadIdMarker field's value. +func (s *ListMultipartUploadsOutput) SetNextUploadIdMarker(v string) *ListMultipartUploadsOutput { + s.NextUploadIdMarker = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListMultipartUploadsOutput) SetPrefix(v string) *ListMultipartUploadsOutput { + s.Prefix = &v + return s +} + +// SetUploadIdMarker sets the UploadIdMarker field's value. 
+func (s *ListMultipartUploadsOutput) SetUploadIdMarker(v string) *ListMultipartUploadsOutput { + s.UploadIdMarker = &v + return s +} + +// SetUploads sets the Uploads field's value. +func (s *ListMultipartUploadsOutput) SetUploads(v []*MultipartUpload) *ListMultipartUploadsOutput { + s.Uploads = v + return s +} + +type ListObjectVersionsInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // A delimiter is a character you use to group keys. + Delimiter *string `location:"querystring" locationName:"delimiter" type:"string"` + + // Requests Amazon S3 to encode the object keys in the response and specifies + // the encoding method to use. An object key may contain any Unicode character; + // however, XML 1.0 parser cannot parse some characters, such as characters + // with an ASCII value from 0 to 10. For characters that are not supported in + // XML 1.0, you can add this parameter to request that Amazon S3 encode the + // keys in the response. + EncodingType *string `location:"querystring" locationName:"encoding-type" type:"string" enum:"EncodingType"` + + // Specifies the key to start with when listing objects in a bucket. + KeyMarker *string `location:"querystring" locationName:"key-marker" type:"string"` + + // Sets the maximum number of keys returned in the response. The response might + // contain fewer keys but will never contain more. + MaxKeys *int64 `location:"querystring" locationName:"max-keys" type:"integer"` + + // Limits the response to keys that begin with the specified prefix. + Prefix *string `location:"querystring" locationName:"prefix" type:"string"` + + // Specifies the object version you want to start listing from. + VersionIdMarker *string `location:"querystring" locationName:"version-id-marker" type:"string"` +} + +// String returns the string representation +func (s ListObjectVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListObjectVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListObjectVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListObjectVersionsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListObjectVersionsInput) SetBucket(v string) *ListObjectVersionsInput { + s.Bucket = &v + return s +} + +func (s *ListObjectVersionsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListObjectVersionsInput) SetDelimiter(v string) *ListObjectVersionsInput { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListObjectVersionsInput) SetEncodingType(v string) *ListObjectVersionsInput { + s.EncodingType = &v + return s +} + +// SetKeyMarker sets the KeyMarker field's value. +func (s *ListObjectVersionsInput) SetKeyMarker(v string) *ListObjectVersionsInput { + s.KeyMarker = &v + return s +} + +// SetMaxKeys sets the MaxKeys field's value. +func (s *ListObjectVersionsInput) SetMaxKeys(v int64) *ListObjectVersionsInput { + s.MaxKeys = &v + return s +} + +// SetPrefix sets the Prefix field's value. 
+func (s *ListObjectVersionsInput) SetPrefix(v string) *ListObjectVersionsInput { + s.Prefix = &v + return s +} + +// SetVersionIdMarker sets the VersionIdMarker field's value. +func (s *ListObjectVersionsInput) SetVersionIdMarker(v string) *ListObjectVersionsInput { + s.VersionIdMarker = &v + return s +} + +type ListObjectVersionsOutput struct { + _ struct{} `type:"structure"` + + CommonPrefixes []*CommonPrefix `type:"list" flattened:"true"` + + DeleteMarkers []*DeleteMarkerEntry `locationName:"DeleteMarker" type:"list" flattened:"true"` + + Delimiter *string `type:"string"` + + // Encoding type used by Amazon S3 to encode object keys in the response. + EncodingType *string `type:"string" enum:"EncodingType"` + + // A flag that indicates whether or not Amazon S3 returned all of the results + // that satisfied the search criteria. If your results were truncated, you can + // make a follow-up paginated request using the NextKeyMarker and NextVersionIdMarker + // response parameters as a starting place in another request to return the + // rest of the results. + IsTruncated *bool `type:"boolean"` + + // Marks the last Key returned in a truncated response. + KeyMarker *string `type:"string"` + + MaxKeys *int64 `type:"integer"` + + Name *string `type:"string"` + + // Use this value for the key marker request parameter in a subsequent request. + NextKeyMarker *string `type:"string"` + + // Use this value for the next version id marker parameter in a subsequent request. + NextVersionIdMarker *string `type:"string"` + + Prefix *string `type:"string"` + + VersionIdMarker *string `type:"string"` + + Versions []*ObjectVersion `locationName:"Version" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s ListObjectVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListObjectVersionsOutput) GoString() string { + return s.String() +} + +// SetCommonPrefixes sets the CommonPrefixes field's value. +func (s *ListObjectVersionsOutput) SetCommonPrefixes(v []*CommonPrefix) *ListObjectVersionsOutput { + s.CommonPrefixes = v + return s +} + +// SetDeleteMarkers sets the DeleteMarkers field's value. +func (s *ListObjectVersionsOutput) SetDeleteMarkers(v []*DeleteMarkerEntry) *ListObjectVersionsOutput { + s.DeleteMarkers = v + return s +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListObjectVersionsOutput) SetDelimiter(v string) *ListObjectVersionsOutput { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListObjectVersionsOutput) SetEncodingType(v string) *ListObjectVersionsOutput { + s.EncodingType = &v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListObjectVersionsOutput) SetIsTruncated(v bool) *ListObjectVersionsOutput { + s.IsTruncated = &v + return s +} + +// SetKeyMarker sets the KeyMarker field's value. +func (s *ListObjectVersionsOutput) SetKeyMarker(v string) *ListObjectVersionsOutput { + s.KeyMarker = &v + return s +} + +// SetMaxKeys sets the MaxKeys field's value. +func (s *ListObjectVersionsOutput) SetMaxKeys(v int64) *ListObjectVersionsOutput { + s.MaxKeys = &v + return s +} + +// SetName sets the Name field's value. +func (s *ListObjectVersionsOutput) SetName(v string) *ListObjectVersionsOutput { + s.Name = &v + return s +} + +// SetNextKeyMarker sets the NextKeyMarker field's value. 
+func (s *ListObjectVersionsOutput) SetNextKeyMarker(v string) *ListObjectVersionsOutput { + s.NextKeyMarker = &v + return s +} + +// SetNextVersionIdMarker sets the NextVersionIdMarker field's value. +func (s *ListObjectVersionsOutput) SetNextVersionIdMarker(v string) *ListObjectVersionsOutput { + s.NextVersionIdMarker = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListObjectVersionsOutput) SetPrefix(v string) *ListObjectVersionsOutput { + s.Prefix = &v + return s +} + +// SetVersionIdMarker sets the VersionIdMarker field's value. +func (s *ListObjectVersionsOutput) SetVersionIdMarker(v string) *ListObjectVersionsOutput { + s.VersionIdMarker = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *ListObjectVersionsOutput) SetVersions(v []*ObjectVersion) *ListObjectVersionsOutput { + s.Versions = v + return s +} + +type ListObjectsInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // A delimiter is a character you use to group keys. + Delimiter *string `location:"querystring" locationName:"delimiter" type:"string"` + + // Requests Amazon S3 to encode the object keys in the response and specifies + // the encoding method to use. An object key may contain any Unicode character; + // however, XML 1.0 parser cannot parse some characters, such as characters + // with an ASCII value from 0 to 10. For characters that are not supported in + // XML 1.0, you can add this parameter to request that Amazon S3 encode the + // keys in the response. + EncodingType *string `location:"querystring" locationName:"encoding-type" type:"string" enum:"EncodingType"` + + // Specifies the key to start with when listing objects in a bucket. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // Sets the maximum number of keys returned in the response. The response might + // contain fewer keys but will never contain more. + MaxKeys *int64 `location:"querystring" locationName:"max-keys" type:"integer"` + + // Limits the response to keys that begin with the specified prefix. + Prefix *string `location:"querystring" locationName:"prefix" type:"string"` + + // Confirms that the requester knows that she or he will be charged for the + // list objects request. Bucket owners need not specify this parameter in their + // requests. + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` +} + +// String returns the string representation +func (s ListObjectsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListObjectsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListObjectsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListObjectsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListObjectsInput) SetBucket(v string) *ListObjectsInput { + s.Bucket = &v + return s +} + +func (s *ListObjectsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetDelimiter sets the Delimiter field's value. 
+func (s *ListObjectsInput) SetDelimiter(v string) *ListObjectsInput { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListObjectsInput) SetEncodingType(v string) *ListObjectsInput { + s.EncodingType = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListObjectsInput) SetMarker(v string) *ListObjectsInput { + s.Marker = &v + return s +} + +// SetMaxKeys sets the MaxKeys field's value. +func (s *ListObjectsInput) SetMaxKeys(v int64) *ListObjectsInput { + s.MaxKeys = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListObjectsInput) SetPrefix(v string) *ListObjectsInput { + s.Prefix = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *ListObjectsInput) SetRequestPayer(v string) *ListObjectsInput { + s.RequestPayer = &v + return s +} + +type ListObjectsOutput struct { + _ struct{} `type:"structure"` + + CommonPrefixes []*CommonPrefix `type:"list" flattened:"true"` + + Contents []*Object `type:"list" flattened:"true"` + + Delimiter *string `type:"string"` + + // Encoding type used by Amazon S3 to encode object keys in the response. + EncodingType *string `type:"string" enum:"EncodingType"` + + // A flag that indicates whether or not Amazon S3 returned all of the results + // that satisfied the search criteria. + IsTruncated *bool `type:"boolean"` + + Marker *string `type:"string"` + + MaxKeys *int64 `type:"integer"` + + Name *string `type:"string"` + + // When response is truncated (the IsTruncated element value in the response + // is true), you can use the key name in this field as marker in the subsequent + // request to get next set of objects. Amazon S3 lists objects in alphabetical + // order Note: This element is returned only if you have delimiter request parameter + // specified. If response does not include the NextMaker and it is truncated, + // you can use the value of the last Key in the response as the marker in the + // subsequent request to get the next set of object keys. + NextMarker *string `type:"string"` + + Prefix *string `type:"string"` +} + +// String returns the string representation +func (s ListObjectsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListObjectsOutput) GoString() string { + return s.String() +} + +// SetCommonPrefixes sets the CommonPrefixes field's value. +func (s *ListObjectsOutput) SetCommonPrefixes(v []*CommonPrefix) *ListObjectsOutput { + s.CommonPrefixes = v + return s +} + +// SetContents sets the Contents field's value. +func (s *ListObjectsOutput) SetContents(v []*Object) *ListObjectsOutput { + s.Contents = v + return s +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListObjectsOutput) SetDelimiter(v string) *ListObjectsOutput { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListObjectsOutput) SetEncodingType(v string) *ListObjectsOutput { + s.EncodingType = &v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListObjectsOutput) SetIsTruncated(v bool) *ListObjectsOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListObjectsOutput) SetMarker(v string) *ListObjectsOutput { + s.Marker = &v + return s +} + +// SetMaxKeys sets the MaxKeys field's value. 
+func (s *ListObjectsOutput) SetMaxKeys(v int64) *ListObjectsOutput { + s.MaxKeys = &v + return s +} + +// SetName sets the Name field's value. +func (s *ListObjectsOutput) SetName(v string) *ListObjectsOutput { + s.Name = &v + return s +} + +// SetNextMarker sets the NextMarker field's value. +func (s *ListObjectsOutput) SetNextMarker(v string) *ListObjectsOutput { + s.NextMarker = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListObjectsOutput) SetPrefix(v string) *ListObjectsOutput { + s.Prefix = &v + return s +} + +type ListObjectsV2Input struct { + _ struct{} `type:"structure"` + + // Name of the bucket to list. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // ContinuationToken indicates Amazon S3 that the list is being continued on + // this bucket with a token. ContinuationToken is obfuscated and is not a real + // key + ContinuationToken *string `location:"querystring" locationName:"continuation-token" type:"string"` + + // A delimiter is a character you use to group keys. + Delimiter *string `location:"querystring" locationName:"delimiter" type:"string"` + + // Encoding type used by Amazon S3 to encode object keys in the response. + EncodingType *string `location:"querystring" locationName:"encoding-type" type:"string" enum:"EncodingType"` + + // The owner field is not present in listV2 by default, if you want to return + // owner field with each key in the result then set the fetch owner field to + // true + FetchOwner *bool `location:"querystring" locationName:"fetch-owner" type:"boolean"` + + // Sets the maximum number of keys returned in the response. The response might + // contain fewer keys but will never contain more. + MaxKeys *int64 `location:"querystring" locationName:"max-keys" type:"integer"` + + // Limits the response to keys that begin with the specified prefix. + Prefix *string `location:"querystring" locationName:"prefix" type:"string"` + + // Confirms that the requester knows that she or he will be charged for the + // list objects request in V2 style. Bucket owners need not specify this parameter + // in their requests. + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // StartAfter is where you want Amazon S3 to start listing from. Amazon S3 starts + // listing after this specified key. StartAfter can be any key in the bucket + StartAfter *string `location:"querystring" locationName:"start-after" type:"string"` +} + +// String returns the string representation +func (s ListObjectsV2Input) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListObjectsV2Input) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListObjectsV2Input) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListObjectsV2Input"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListObjectsV2Input) SetBucket(v string) *ListObjectsV2Input { + s.Bucket = &v + return s +} + +func (s *ListObjectsV2Input) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetContinuationToken sets the ContinuationToken field's value. 
+func (s *ListObjectsV2Input) SetContinuationToken(v string) *ListObjectsV2Input { + s.ContinuationToken = &v + return s +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListObjectsV2Input) SetDelimiter(v string) *ListObjectsV2Input { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListObjectsV2Input) SetEncodingType(v string) *ListObjectsV2Input { + s.EncodingType = &v + return s +} + +// SetFetchOwner sets the FetchOwner field's value. +func (s *ListObjectsV2Input) SetFetchOwner(v bool) *ListObjectsV2Input { + s.FetchOwner = &v + return s +} + +// SetMaxKeys sets the MaxKeys field's value. +func (s *ListObjectsV2Input) SetMaxKeys(v int64) *ListObjectsV2Input { + s.MaxKeys = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListObjectsV2Input) SetPrefix(v string) *ListObjectsV2Input { + s.Prefix = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *ListObjectsV2Input) SetRequestPayer(v string) *ListObjectsV2Input { + s.RequestPayer = &v + return s +} + +// SetStartAfter sets the StartAfter field's value. +func (s *ListObjectsV2Input) SetStartAfter(v string) *ListObjectsV2Input { + s.StartAfter = &v + return s +} + +type ListObjectsV2Output struct { + _ struct{} `type:"structure"` + + // CommonPrefixes contains all (if there are any) keys between Prefix and the + // next occurrence of the string specified by delimiter + CommonPrefixes []*CommonPrefix `type:"list" flattened:"true"` + + // Metadata about each object returned. + Contents []*Object `type:"list" flattened:"true"` + + // ContinuationToken indicates Amazon S3 that the list is being continued on + // this bucket with a token. ContinuationToken is obfuscated and is not a real + // key + ContinuationToken *string `type:"string"` + + // A delimiter is a character you use to group keys. + Delimiter *string `type:"string"` + + // Encoding type used by Amazon S3 to encode object keys in the response. + EncodingType *string `type:"string" enum:"EncodingType"` + + // A flag that indicates whether or not Amazon S3 returned all of the results + // that satisfied the search criteria. + IsTruncated *bool `type:"boolean"` + + // KeyCount is the number of keys returned with this request. KeyCount will + // always be less than equals to MaxKeys field. Say you ask for 50 keys, your + // result will include less than equals 50 keys + KeyCount *int64 `type:"integer"` + + // Sets the maximum number of keys returned in the response. The response might + // contain fewer keys but will never contain more. + MaxKeys *int64 `type:"integer"` + + // Name of the bucket to list. + Name *string `type:"string"` + + // NextContinuationToken is sent when isTruncated is true which means there + // are more keys in the bucket that can be listed. The next list requests to + // Amazon S3 can be continued with this NextContinuationToken. NextContinuationToken + // is obfuscated and is not a real key + NextContinuationToken *string `type:"string"` + + // Limits the response to keys that begin with the specified prefix. + Prefix *string `type:"string"` + + // StartAfter is where you want Amazon S3 to start listing from. Amazon S3 starts + // listing after this specified key. 
StartAfter can be any key in the bucket + StartAfter *string `type:"string"` +} + +// String returns the string representation +func (s ListObjectsV2Output) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListObjectsV2Output) GoString() string { + return s.String() +} + +// SetCommonPrefixes sets the CommonPrefixes field's value. +func (s *ListObjectsV2Output) SetCommonPrefixes(v []*CommonPrefix) *ListObjectsV2Output { + s.CommonPrefixes = v + return s +} + +// SetContents sets the Contents field's value. +func (s *ListObjectsV2Output) SetContents(v []*Object) *ListObjectsV2Output { + s.Contents = v + return s +} + +// SetContinuationToken sets the ContinuationToken field's value. +func (s *ListObjectsV2Output) SetContinuationToken(v string) *ListObjectsV2Output { + s.ContinuationToken = &v + return s +} + +// SetDelimiter sets the Delimiter field's value. +func (s *ListObjectsV2Output) SetDelimiter(v string) *ListObjectsV2Output { + s.Delimiter = &v + return s +} + +// SetEncodingType sets the EncodingType field's value. +func (s *ListObjectsV2Output) SetEncodingType(v string) *ListObjectsV2Output { + s.EncodingType = &v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListObjectsV2Output) SetIsTruncated(v bool) *ListObjectsV2Output { + s.IsTruncated = &v + return s +} + +// SetKeyCount sets the KeyCount field's value. +func (s *ListObjectsV2Output) SetKeyCount(v int64) *ListObjectsV2Output { + s.KeyCount = &v + return s +} + +// SetMaxKeys sets the MaxKeys field's value. +func (s *ListObjectsV2Output) SetMaxKeys(v int64) *ListObjectsV2Output { + s.MaxKeys = &v + return s +} + +// SetName sets the Name field's value. +func (s *ListObjectsV2Output) SetName(v string) *ListObjectsV2Output { + s.Name = &v + return s +} + +// SetNextContinuationToken sets the NextContinuationToken field's value. +func (s *ListObjectsV2Output) SetNextContinuationToken(v string) *ListObjectsV2Output { + s.NextContinuationToken = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ListObjectsV2Output) SetPrefix(v string) *ListObjectsV2Output { + s.Prefix = &v + return s +} + +// SetStartAfter sets the StartAfter field's value. +func (s *ListObjectsV2Output) SetStartAfter(v string) *ListObjectsV2Output { + s.StartAfter = &v + return s +} + +type ListPartsInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Sets the maximum number of parts to return. + MaxParts *int64 `location:"querystring" locationName:"max-parts" type:"integer"` + + // Specifies the part after which listing should begin. Only parts with higher + // part numbers will be listed. + PartNumberMarker *int64 `location:"querystring" locationName:"part-number-marker" type:"integer"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Upload ID identifying the multipart upload whose parts are being listed. 
+ // + // UploadId is a required field + UploadId *string `location:"querystring" locationName:"uploadId" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListPartsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPartsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListPartsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPartsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.UploadId == nil { + invalidParams.Add(request.NewErrParamRequired("UploadId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *ListPartsInput) SetBucket(v string) *ListPartsInput { + s.Bucket = &v + return s +} + +func (s *ListPartsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *ListPartsInput) SetKey(v string) *ListPartsInput { + s.Key = &v + return s +} + +// SetMaxParts sets the MaxParts field's value. +func (s *ListPartsInput) SetMaxParts(v int64) *ListPartsInput { + s.MaxParts = &v + return s +} + +// SetPartNumberMarker sets the PartNumberMarker field's value. +func (s *ListPartsInput) SetPartNumberMarker(v int64) *ListPartsInput { + s.PartNumberMarker = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *ListPartsInput) SetRequestPayer(v string) *ListPartsInput { + s.RequestPayer = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *ListPartsInput) SetUploadId(v string) *ListPartsInput { + s.UploadId = &v + return s +} + +type ListPartsOutput struct { + _ struct{} `type:"structure"` + + // Date when multipart upload will become eligible for abort operation by lifecycle. + AbortDate *time.Time `location:"header" locationName:"x-amz-abort-date" type:"timestamp" timestampFormat:"rfc822"` + + // Id of the lifecycle rule that makes a multipart upload eligible for abort + // operation. + AbortRuleId *string `location:"header" locationName:"x-amz-abort-rule-id" type:"string"` + + // Name of the bucket to which the multipart upload was initiated. + Bucket *string `type:"string"` + + // Identifies who initiated the multipart upload. + Initiator *Initiator `type:"structure"` + + // Indicates whether the returned list of parts is truncated. + IsTruncated *bool `type:"boolean"` + + // Object key for which the multipart upload was initiated. + Key *string `min:"1" type:"string"` + + // Maximum number of parts that were allowed in the response. + MaxParts *int64 `type:"integer"` + + // When a list is truncated, this element specifies the last part in the list, + // as well as the value to use for the part-number-marker request parameter + // in a subsequent request. + NextPartNumberMarker *int64 `type:"integer"` + + Owner *Owner `type:"structure"` + + // Part number after which listing begins. + PartNumberMarker *int64 `type:"integer"` + + Parts []*Part `locationName:"Part" type:"list" flattened:"true"` + + // If present, indicates that the requester was successfully charged for the + // request. 
+ RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"StorageClass"` + + // Upload ID identifying the multipart upload whose parts are being listed. + UploadId *string `type:"string"` +} + +// String returns the string representation +func (s ListPartsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPartsOutput) GoString() string { + return s.String() +} + +// SetAbortDate sets the AbortDate field's value. +func (s *ListPartsOutput) SetAbortDate(v time.Time) *ListPartsOutput { + s.AbortDate = &v + return s +} + +// SetAbortRuleId sets the AbortRuleId field's value. +func (s *ListPartsOutput) SetAbortRuleId(v string) *ListPartsOutput { + s.AbortRuleId = &v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *ListPartsOutput) SetBucket(v string) *ListPartsOutput { + s.Bucket = &v + return s +} + +func (s *ListPartsOutput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetInitiator sets the Initiator field's value. +func (s *ListPartsOutput) SetInitiator(v *Initiator) *ListPartsOutput { + s.Initiator = v + return s +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListPartsOutput) SetIsTruncated(v bool) *ListPartsOutput { + s.IsTruncated = &v + return s +} + +// SetKey sets the Key field's value. +func (s *ListPartsOutput) SetKey(v string) *ListPartsOutput { + s.Key = &v + return s +} + +// SetMaxParts sets the MaxParts field's value. +func (s *ListPartsOutput) SetMaxParts(v int64) *ListPartsOutput { + s.MaxParts = &v + return s +} + +// SetNextPartNumberMarker sets the NextPartNumberMarker field's value. +func (s *ListPartsOutput) SetNextPartNumberMarker(v int64) *ListPartsOutput { + s.NextPartNumberMarker = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *ListPartsOutput) SetOwner(v *Owner) *ListPartsOutput { + s.Owner = v + return s +} + +// SetPartNumberMarker sets the PartNumberMarker field's value. +func (s *ListPartsOutput) SetPartNumberMarker(v int64) *ListPartsOutput { + s.PartNumberMarker = &v + return s +} + +// SetParts sets the Parts field's value. +func (s *ListPartsOutput) SetParts(v []*Part) *ListPartsOutput { + s.Parts = v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *ListPartsOutput) SetRequestCharged(v string) *ListPartsOutput { + s.RequestCharged = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *ListPartsOutput) SetStorageClass(v string) *ListPartsOutput { + s.StorageClass = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *ListPartsOutput) SetUploadId(v string) *ListPartsOutput { + s.UploadId = &v + return s +} + +// Describes an S3 location that will receive the results of the restore request. +type Location struct { + _ struct{} `type:"structure"` + + // A list of grants that control access to the staged results. + AccessControlList []*Grant `locationNameList:"Grant" type:"list"` + + // The name of the bucket where the restore results will be placed. + // + // BucketName is a required field + BucketName *string `type:"string" required:"true"` + + // The canned ACL to apply to the restore results. 
+ CannedACL *string `type:"string" enum:"ObjectCannedACL"` + + // Describes the server-side encryption that will be applied to the restore + // results. + Encryption *Encryption `type:"structure"` + + // The prefix that is prepended to the restore results for this request. + // + // Prefix is a required field + Prefix *string `type:"string" required:"true"` + + // The class of storage used to store the restore results. + StorageClass *string `type:"string" enum:"StorageClass"` + + // The tag-set that is applied to the restore results. + Tagging *Tagging `type:"structure"` + + // A list of metadata to store with the restore results in S3. + UserMetadata []*MetadataEntry `locationNameList:"MetadataEntry" type:"list"` +} + +// String returns the string representation +func (s Location) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Location) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Location) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Location"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.Prefix == nil { + invalidParams.Add(request.NewErrParamRequired("Prefix")) + } + if s.AccessControlList != nil { + for i, v := range s.AccessControlList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AccessControlList", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } + if s.Tagging != nil { + if err := s.Tagging.Validate(); err != nil { + invalidParams.AddNested("Tagging", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessControlList sets the AccessControlList field's value. +func (s *Location) SetAccessControlList(v []*Grant) *Location { + s.AccessControlList = v + return s +} + +// SetBucketName sets the BucketName field's value. +func (s *Location) SetBucketName(v string) *Location { + s.BucketName = &v + return s +} + +// SetCannedACL sets the CannedACL field's value. +func (s *Location) SetCannedACL(v string) *Location { + s.CannedACL = &v + return s +} + +// SetEncryption sets the Encryption field's value. +func (s *Location) SetEncryption(v *Encryption) *Location { + s.Encryption = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *Location) SetPrefix(v string) *Location { + s.Prefix = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *Location) SetStorageClass(v string) *Location { + s.StorageClass = &v + return s +} + +// SetTagging sets the Tagging field's value. +func (s *Location) SetTagging(v *Tagging) *Location { + s.Tagging = v + return s +} + +// SetUserMetadata sets the UserMetadata field's value. +func (s *Location) SetUserMetadata(v []*MetadataEntry) *Location { + s.UserMetadata = v + return s +} + +// Container for logging information. Presence of this element indicates that +// logging is enabled. Parameters TargetBucket and TargetPrefix are required +// in this case. +type LoggingEnabled struct { + _ struct{} `type:"structure"` + + // Specifies the bucket where you want Amazon S3 to store server access logs. 
+ // You can have your logs delivered to any bucket that you own, including the + // same bucket that is being logged. You can also configure multiple buckets + // to deliver their logs to the same target bucket. In this case you should + // choose a different TargetPrefix for each source bucket so that the delivered + // log files can be distinguished by key. + // + // TargetBucket is a required field + TargetBucket *string `type:"string" required:"true"` + + TargetGrants []*TargetGrant `locationNameList:"Grant" type:"list"` + + // This element lets you specify a prefix for the keys that the log files will + // be stored under. + // + // TargetPrefix is a required field + TargetPrefix *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s LoggingEnabled) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoggingEnabled) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LoggingEnabled) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LoggingEnabled"} + if s.TargetBucket == nil { + invalidParams.Add(request.NewErrParamRequired("TargetBucket")) + } + if s.TargetPrefix == nil { + invalidParams.Add(request.NewErrParamRequired("TargetPrefix")) + } + if s.TargetGrants != nil { + for i, v := range s.TargetGrants { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TargetGrants", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTargetBucket sets the TargetBucket field's value. +func (s *LoggingEnabled) SetTargetBucket(v string) *LoggingEnabled { + s.TargetBucket = &v + return s +} + +// SetTargetGrants sets the TargetGrants field's value. +func (s *LoggingEnabled) SetTargetGrants(v []*TargetGrant) *LoggingEnabled { + s.TargetGrants = v + return s +} + +// SetTargetPrefix sets the TargetPrefix field's value. +func (s *LoggingEnabled) SetTargetPrefix(v string) *LoggingEnabled { + s.TargetPrefix = &v + return s +} + +// A metadata key-value pair to store with an object. +type MetadataEntry struct { + _ struct{} `type:"structure"` + + Name *string `type:"string"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s MetadataEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetadataEntry) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *MetadataEntry) SetName(v string) *MetadataEntry { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. +func (s *MetadataEntry) SetValue(v string) *MetadataEntry { + s.Value = &v + return s +} + +type MetricsAndOperator struct { + _ struct{} `type:"structure"` + + // The prefix used when evaluating an AND predicate. + Prefix *string `type:"string"` + + // The list of tags used when evaluating an AND predicate. + Tags []*Tag `locationName:"Tag" locationNameList:"Tag" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s MetricsAndOperator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricsAndOperator) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *MetricsAndOperator) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricsAndOperator"} + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPrefix sets the Prefix field's value. +func (s *MetricsAndOperator) SetPrefix(v string) *MetricsAndOperator { + s.Prefix = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *MetricsAndOperator) SetTags(v []*Tag) *MetricsAndOperator { + s.Tags = v + return s +} + +type MetricsConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies a metrics configuration filter. The metrics configuration will + // only include objects that meet the filter's criteria. A filter must be a + // prefix, a tag, or a conjunction (MetricsAndOperator). + Filter *MetricsFilter `type:"structure"` + + // The ID used to identify the metrics configuration. + // + // Id is a required field + Id *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s MetricsConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricsConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MetricsConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricsConfiguration"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Filter != nil { + if err := s.Filter.Validate(); err != nil { + invalidParams.AddNested("Filter", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilter sets the Filter field's value. +func (s *MetricsConfiguration) SetFilter(v *MetricsFilter) *MetricsConfiguration { + s.Filter = v + return s +} + +// SetId sets the Id field's value. +func (s *MetricsConfiguration) SetId(v string) *MetricsConfiguration { + s.Id = &v + return s +} + +type MetricsFilter struct { + _ struct{} `type:"structure"` + + // A conjunction (logical AND) of predicates, which is used in evaluating a + // metrics filter. The operator must have at least two predicates, and an object + // must match all of the predicates in order for the filter to apply. + And *MetricsAndOperator `type:"structure"` + + // The prefix used when evaluating a metrics filter. + Prefix *string `type:"string"` + + // The tag used when evaluating a metrics filter. + Tag *Tag `type:"structure"` +} + +// String returns the string representation +func (s MetricsFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricsFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *MetricsFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricsFilter"} + if s.And != nil { + if err := s.And.Validate(); err != nil { + invalidParams.AddNested("And", err.(request.ErrInvalidParams)) + } + } + if s.Tag != nil { + if err := s.Tag.Validate(); err != nil { + invalidParams.AddNested("Tag", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAnd sets the And field's value. +func (s *MetricsFilter) SetAnd(v *MetricsAndOperator) *MetricsFilter { + s.And = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *MetricsFilter) SetPrefix(v string) *MetricsFilter { + s.Prefix = &v + return s +} + +// SetTag sets the Tag field's value. +func (s *MetricsFilter) SetTag(v *Tag) *MetricsFilter { + s.Tag = v + return s +} + +type MultipartUpload struct { + _ struct{} `type:"structure"` + + // Date and time at which the multipart upload was initiated. + Initiated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // Identifies who initiated the multipart upload. + Initiator *Initiator `type:"structure"` + + // Key of the object for which the multipart upload was initiated. + Key *string `min:"1" type:"string"` + + Owner *Owner `type:"structure"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"StorageClass"` + + // Upload ID that identifies the multipart upload. + UploadId *string `type:"string"` +} + +// String returns the string representation +func (s MultipartUpload) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MultipartUpload) GoString() string { + return s.String() +} + +// SetInitiated sets the Initiated field's value. +func (s *MultipartUpload) SetInitiated(v time.Time) *MultipartUpload { + s.Initiated = &v + return s +} + +// SetInitiator sets the Initiator field's value. +func (s *MultipartUpload) SetInitiator(v *Initiator) *MultipartUpload { + s.Initiator = v + return s +} + +// SetKey sets the Key field's value. +func (s *MultipartUpload) SetKey(v string) *MultipartUpload { + s.Key = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *MultipartUpload) SetOwner(v *Owner) *MultipartUpload { + s.Owner = v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *MultipartUpload) SetStorageClass(v string) *MultipartUpload { + s.StorageClass = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *MultipartUpload) SetUploadId(v string) *MultipartUpload { + s.UploadId = &v + return s +} + +// Specifies when noncurrent object versions expire. Upon expiration, Amazon +// S3 permanently deletes the noncurrent object versions. You set this lifecycle +// configuration action on a bucket that has versioning enabled (or suspended) +// to request that Amazon S3 delete noncurrent object versions at a specific +// period in the object's lifetime. +type NoncurrentVersionExpiration struct { + _ struct{} `type:"structure"` + + // Specifies the number of days an object is noncurrent before Amazon S3 can + // perform the associated action. 
For information about the noncurrent days + // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent + // (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) + NoncurrentDays *int64 `type:"integer"` +} + +// String returns the string representation +func (s NoncurrentVersionExpiration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NoncurrentVersionExpiration) GoString() string { + return s.String() +} + +// SetNoncurrentDays sets the NoncurrentDays field's value. +func (s *NoncurrentVersionExpiration) SetNoncurrentDays(v int64) *NoncurrentVersionExpiration { + s.NoncurrentDays = &v + return s +} + +// Container for the transition rule that describes when noncurrent objects +// transition to the STANDARD_IA, ONEZONE_IA or GLACIER storage class. If your +// bucket is versioning-enabled (or versioning is suspended), you can set this +// action to request that Amazon S3 transition noncurrent object versions to +// the STANDARD_IA, ONEZONE_IA or GLACIER storage class at a specific period +// in the object's lifetime. +type NoncurrentVersionTransition struct { + _ struct{} `type:"structure"` + + // Specifies the number of days an object is noncurrent before Amazon S3 can + // perform the associated action. For information about the noncurrent days + // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent + // (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) + NoncurrentDays *int64 `type:"integer"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"TransitionStorageClass"` +} + +// String returns the string representation +func (s NoncurrentVersionTransition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NoncurrentVersionTransition) GoString() string { + return s.String() +} + +// SetNoncurrentDays sets the NoncurrentDays field's value. +func (s *NoncurrentVersionTransition) SetNoncurrentDays(v int64) *NoncurrentVersionTransition { + s.NoncurrentDays = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *NoncurrentVersionTransition) SetStorageClass(v string) *NoncurrentVersionTransition { + s.StorageClass = &v + return s +} + +// Container for specifying the notification configuration of the bucket. If +// this element is empty, notifications are turned off on the bucket. +type NotificationConfiguration struct { + _ struct{} `type:"structure"` + + LambdaFunctionConfigurations []*LambdaFunctionConfiguration `locationName:"CloudFunctionConfiguration" type:"list" flattened:"true"` + + QueueConfigurations []*QueueConfiguration `locationName:"QueueConfiguration" type:"list" flattened:"true"` + + TopicConfigurations []*TopicConfiguration `locationName:"TopicConfiguration" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s NotificationConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotificationConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *NotificationConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NotificationConfiguration"} + if s.LambdaFunctionConfigurations != nil { + for i, v := range s.LambdaFunctionConfigurations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LambdaFunctionConfigurations", i), err.(request.ErrInvalidParams)) + } + } + } + if s.QueueConfigurations != nil { + for i, v := range s.QueueConfigurations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "QueueConfigurations", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TopicConfigurations != nil { + for i, v := range s.TopicConfigurations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TopicConfigurations", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLambdaFunctionConfigurations sets the LambdaFunctionConfigurations field's value. +func (s *NotificationConfiguration) SetLambdaFunctionConfigurations(v []*LambdaFunctionConfiguration) *NotificationConfiguration { + s.LambdaFunctionConfigurations = v + return s +} + +// SetQueueConfigurations sets the QueueConfigurations field's value. +func (s *NotificationConfiguration) SetQueueConfigurations(v []*QueueConfiguration) *NotificationConfiguration { + s.QueueConfigurations = v + return s +} + +// SetTopicConfigurations sets the TopicConfigurations field's value. +func (s *NotificationConfiguration) SetTopicConfigurations(v []*TopicConfiguration) *NotificationConfiguration { + s.TopicConfigurations = v + return s +} + +type NotificationConfigurationDeprecated struct { + _ struct{} `type:"structure"` + + CloudFunctionConfiguration *CloudFunctionConfiguration `type:"structure"` + + QueueConfiguration *QueueConfigurationDeprecated `type:"structure"` + + TopicConfiguration *TopicConfigurationDeprecated `type:"structure"` +} + +// String returns the string representation +func (s NotificationConfigurationDeprecated) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotificationConfigurationDeprecated) GoString() string { + return s.String() +} + +// SetCloudFunctionConfiguration sets the CloudFunctionConfiguration field's value. +func (s *NotificationConfigurationDeprecated) SetCloudFunctionConfiguration(v *CloudFunctionConfiguration) *NotificationConfigurationDeprecated { + s.CloudFunctionConfiguration = v + return s +} + +// SetQueueConfiguration sets the QueueConfiguration field's value. +func (s *NotificationConfigurationDeprecated) SetQueueConfiguration(v *QueueConfigurationDeprecated) *NotificationConfigurationDeprecated { + s.QueueConfiguration = v + return s +} + +// SetTopicConfiguration sets the TopicConfiguration field's value. +func (s *NotificationConfigurationDeprecated) SetTopicConfiguration(v *TopicConfigurationDeprecated) *NotificationConfigurationDeprecated { + s.TopicConfiguration = v + return s +} + +// Container for object key name filtering rules. For information about key +// name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) +type NotificationConfigurationFilter struct { + _ struct{} `type:"structure"` + + // Container for object key name prefix and suffix filtering rules. 
+ Key *KeyFilter `locationName:"S3Key" type:"structure"` +} + +// String returns the string representation +func (s NotificationConfigurationFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotificationConfigurationFilter) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *NotificationConfigurationFilter) SetKey(v *KeyFilter) *NotificationConfigurationFilter { + s.Key = v + return s +} + +type Object struct { + _ struct{} `type:"structure"` + + ETag *string `type:"string"` + + Key *string `min:"1" type:"string"` + + LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + Owner *Owner `type:"structure"` + + Size *int64 `type:"integer"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"ObjectStorageClass"` +} + +// String returns the string representation +func (s Object) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Object) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *Object) SetETag(v string) *Object { + s.ETag = &v + return s +} + +// SetKey sets the Key field's value. +func (s *Object) SetKey(v string) *Object { + s.Key = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *Object) SetLastModified(v time.Time) *Object { + s.LastModified = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *Object) SetOwner(v *Owner) *Object { + s.Owner = v + return s +} + +// SetSize sets the Size field's value. +func (s *Object) SetSize(v int64) *Object { + s.Size = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *Object) SetStorageClass(v string) *Object { + s.StorageClass = &v + return s +} + +type ObjectIdentifier struct { + _ struct{} `type:"structure"` + + // Key name of the object to delete. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // VersionId for the specific version of the object to delete. + VersionId *string `type:"string"` +} + +// String returns the string representation +func (s ObjectIdentifier) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ObjectIdentifier) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ObjectIdentifier) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ObjectIdentifier"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ObjectIdentifier) SetKey(v string) *ObjectIdentifier { + s.Key = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *ObjectIdentifier) SetVersionId(v string) *ObjectIdentifier { + s.VersionId = &v + return s +} + +type ObjectVersion struct { + _ struct{} `type:"structure"` + + ETag *string `type:"string"` + + // Specifies whether the object is (true) or is not (false) the latest version + // of an object. + IsLatest *bool `type:"boolean"` + + // The object key. + Key *string `min:"1" type:"string"` + + // Date and time the object was last modified. 
+ LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + Owner *Owner `type:"structure"` + + // Size in bytes of the object. + Size *int64 `type:"integer"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"ObjectVersionStorageClass"` + + // Version ID of an object. + VersionId *string `type:"string"` +} + +// String returns the string representation +func (s ObjectVersion) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ObjectVersion) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *ObjectVersion) SetETag(v string) *ObjectVersion { + s.ETag = &v + return s +} + +// SetIsLatest sets the IsLatest field's value. +func (s *ObjectVersion) SetIsLatest(v bool) *ObjectVersion { + s.IsLatest = &v + return s +} + +// SetKey sets the Key field's value. +func (s *ObjectVersion) SetKey(v string) *ObjectVersion { + s.Key = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *ObjectVersion) SetLastModified(v time.Time) *ObjectVersion { + s.LastModified = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *ObjectVersion) SetOwner(v *Owner) *ObjectVersion { + s.Owner = v + return s +} + +// SetSize sets the Size field's value. +func (s *ObjectVersion) SetSize(v int64) *ObjectVersion { + s.Size = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *ObjectVersion) SetStorageClass(v string) *ObjectVersion { + s.StorageClass = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *ObjectVersion) SetVersionId(v string) *ObjectVersion { + s.VersionId = &v + return s +} + +// Describes the location where the restore job's output is stored. +type OutputLocation struct { + _ struct{} `type:"structure"` + + // Describes an S3 location that will receive the results of the restore request. + S3 *Location `type:"structure"` +} + +// String returns the string representation +func (s OutputLocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputLocation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *OutputLocation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputLocation"} + if s.S3 != nil { + if err := s.S3.Validate(); err != nil { + invalidParams.AddNested("S3", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3 sets the S3 field's value. +func (s *OutputLocation) SetS3(v *Location) *OutputLocation { + s.S3 = v + return s +} + +// Describes how results of the Select job are serialized. +type OutputSerialization struct { + _ struct{} `type:"structure"` + + // Describes the serialization of CSV-encoded Select results. + CSV *CSVOutput `type:"structure"` + + // Specifies JSON as request's output serialization format. + JSON *JSONOutput `type:"structure"` +} + +// String returns the string representation +func (s OutputSerialization) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputSerialization) GoString() string { + return s.String() +} + +// SetCSV sets the CSV field's value. +func (s *OutputSerialization) SetCSV(v *CSVOutput) *OutputSerialization { + s.CSV = v + return s +} + +// SetJSON sets the JSON field's value. 
+func (s *OutputSerialization) SetJSON(v *JSONOutput) *OutputSerialization { + s.JSON = v + return s +} + +type Owner struct { + _ struct{} `type:"structure"` + + DisplayName *string `type:"string"` + + ID *string `type:"string"` +} + +// String returns the string representation +func (s Owner) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Owner) GoString() string { + return s.String() +} + +// SetDisplayName sets the DisplayName field's value. +func (s *Owner) SetDisplayName(v string) *Owner { + s.DisplayName = &v + return s +} + +// SetID sets the ID field's value. +func (s *Owner) SetID(v string) *Owner { + s.ID = &v + return s +} + +type Part struct { + _ struct{} `type:"structure"` + + // Entity tag returned when the part was uploaded. + ETag *string `type:"string"` + + // Date and time at which the part was uploaded. + LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // Part number identifying the part. This is a positive integer between 1 and + // 10,000. + PartNumber *int64 `type:"integer"` + + // Size of the uploaded part data. + Size *int64 `type:"integer"` +} + +// String returns the string representation +func (s Part) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Part) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *Part) SetETag(v string) *Part { + s.ETag = &v + return s +} + +// SetLastModified sets the LastModified field's value. +func (s *Part) SetLastModified(v time.Time) *Part { + s.LastModified = &v + return s +} + +// SetPartNumber sets the PartNumber field's value. +func (s *Part) SetPartNumber(v int64) *Part { + s.PartNumber = &v + return s +} + +// SetSize sets the Size field's value. +func (s *Part) SetSize(v int64) *Part { + s.Size = &v + return s +} + +type PutBucketAccelerateConfigurationInput struct { + _ struct{} `type:"structure" payload:"AccelerateConfiguration"` + + // Specifies the Accelerate Configuration you want to set for the bucket. + // + // AccelerateConfiguration is a required field + AccelerateConfiguration *AccelerateConfiguration `locationName:"AccelerateConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // Name of the bucket for which the accelerate configuration is set. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutBucketAccelerateConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketAccelerateConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketAccelerateConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketAccelerateConfigurationInput"} + if s.AccelerateConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("AccelerateConfiguration")) + } + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccelerateConfiguration sets the AccelerateConfiguration field's value. 
+func (s *PutBucketAccelerateConfigurationInput) SetAccelerateConfiguration(v *AccelerateConfiguration) *PutBucketAccelerateConfigurationInput { + s.AccelerateConfiguration = v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketAccelerateConfigurationInput) SetBucket(v string) *PutBucketAccelerateConfigurationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketAccelerateConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type PutBucketAccelerateConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketAccelerateConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketAccelerateConfigurationOutput) GoString() string { + return s.String() +} + +type PutBucketAclInput struct { + _ struct{} `type:"structure" payload:"AccessControlPolicy"` + + // The canned ACL to apply to the bucket. + ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"BucketCannedACL"` + + AccessControlPolicy *AccessControlPolicy `locationName:"AccessControlPolicy" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Allows grantee the read, write, read ACP, and write ACP permissions on the + // bucket. + GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` + + // Allows grantee to list the objects in the bucket. + GrantRead *string `location:"header" locationName:"x-amz-grant-read" type:"string"` + + // Allows grantee to read the bucket ACL. + GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` + + // Allows grantee to create, overwrite, and delete any object in the bucket. + GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` + + // Allows grantee to write the ACL for the applicable bucket. + GrantWriteACP *string `location:"header" locationName:"x-amz-grant-write-acp" type:"string"` +} + +// String returns the string representation +func (s PutBucketAclInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketAclInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketAclInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketAclInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.AccessControlPolicy != nil { + if err := s.AccessControlPolicy.Validate(); err != nil { + invalidParams.AddNested("AccessControlPolicy", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetACL sets the ACL field's value. +func (s *PutBucketAclInput) SetACL(v string) *PutBucketAclInput { + s.ACL = &v + return s +} + +// SetAccessControlPolicy sets the AccessControlPolicy field's value. +func (s *PutBucketAclInput) SetAccessControlPolicy(v *AccessControlPolicy) *PutBucketAclInput { + s.AccessControlPolicy = v + return s +} + +// SetBucket sets the Bucket field's value. 
+func (s *PutBucketAclInput) SetBucket(v string) *PutBucketAclInput { + s.Bucket = &v + return s +} + +func (s *PutBucketAclInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetGrantFullControl sets the GrantFullControl field's value. +func (s *PutBucketAclInput) SetGrantFullControl(v string) *PutBucketAclInput { + s.GrantFullControl = &v + return s +} + +// SetGrantRead sets the GrantRead field's value. +func (s *PutBucketAclInput) SetGrantRead(v string) *PutBucketAclInput { + s.GrantRead = &v + return s +} + +// SetGrantReadACP sets the GrantReadACP field's value. +func (s *PutBucketAclInput) SetGrantReadACP(v string) *PutBucketAclInput { + s.GrantReadACP = &v + return s +} + +// SetGrantWrite sets the GrantWrite field's value. +func (s *PutBucketAclInput) SetGrantWrite(v string) *PutBucketAclInput { + s.GrantWrite = &v + return s +} + +// SetGrantWriteACP sets the GrantWriteACP field's value. +func (s *PutBucketAclInput) SetGrantWriteACP(v string) *PutBucketAclInput { + s.GrantWriteACP = &v + return s +} + +type PutBucketAclOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketAclOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketAclOutput) GoString() string { + return s.String() +} + +type PutBucketAnalyticsConfigurationInput struct { + _ struct{} `type:"structure" payload:"AnalyticsConfiguration"` + + // The configuration and any analyses for the analytics filter. + // + // AnalyticsConfiguration is a required field + AnalyticsConfiguration *AnalyticsConfiguration `locationName:"AnalyticsConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // The name of the bucket to which an analytics configuration is stored. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The identifier used to represent an analytics configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutBucketAnalyticsConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketAnalyticsConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketAnalyticsConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketAnalyticsConfigurationInput"} + if s.AnalyticsConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("AnalyticsConfiguration")) + } + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.AnalyticsConfiguration != nil { + if err := s.AnalyticsConfiguration.Validate(); err != nil { + invalidParams.AddNested("AnalyticsConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAnalyticsConfiguration sets the AnalyticsConfiguration field's value. 
+func (s *PutBucketAnalyticsConfigurationInput) SetAnalyticsConfiguration(v *AnalyticsConfiguration) *PutBucketAnalyticsConfigurationInput { + s.AnalyticsConfiguration = v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketAnalyticsConfigurationInput) SetBucket(v string) *PutBucketAnalyticsConfigurationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketAnalyticsConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *PutBucketAnalyticsConfigurationInput) SetId(v string) *PutBucketAnalyticsConfigurationInput { + s.Id = &v + return s +} + +type PutBucketAnalyticsConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketAnalyticsConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketAnalyticsConfigurationOutput) GoString() string { + return s.String() +} + +type PutBucketCorsInput struct { + _ struct{} `type:"structure" payload:"CORSConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // CORSConfiguration is a required field + CORSConfiguration *CORSConfiguration `locationName:"CORSConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketCorsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketCorsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketCorsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketCorsInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.CORSConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("CORSConfiguration")) + } + if s.CORSConfiguration != nil { + if err := s.CORSConfiguration.Validate(); err != nil { + invalidParams.AddNested("CORSConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketCorsInput) SetBucket(v string) *PutBucketCorsInput { + s.Bucket = &v + return s +} + +func (s *PutBucketCorsInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCORSConfiguration sets the CORSConfiguration field's value. +func (s *PutBucketCorsInput) SetCORSConfiguration(v *CORSConfiguration) *PutBucketCorsInput { + s.CORSConfiguration = v + return s +} + +type PutBucketCorsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketCorsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketCorsOutput) GoString() string { + return s.String() +} + +type PutBucketEncryptionInput struct { + _ struct{} `type:"structure" payload:"ServerSideEncryptionConfiguration"` + + // The name of the bucket for which the server-side encryption configuration + // is set. 
+ // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Container for server-side encryption configuration rules. Currently S3 supports + // one rule only. + // + // ServerSideEncryptionConfiguration is a required field + ServerSideEncryptionConfiguration *ServerSideEncryptionConfiguration `locationName:"ServerSideEncryptionConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketEncryptionInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.ServerSideEncryptionConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("ServerSideEncryptionConfiguration")) + } + if s.ServerSideEncryptionConfiguration != nil { + if err := s.ServerSideEncryptionConfiguration.Validate(); err != nil { + invalidParams.AddNested("ServerSideEncryptionConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketEncryptionInput) SetBucket(v string) *PutBucketEncryptionInput { + s.Bucket = &v + return s +} + +func (s *PutBucketEncryptionInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetServerSideEncryptionConfiguration sets the ServerSideEncryptionConfiguration field's value. +func (s *PutBucketEncryptionInput) SetServerSideEncryptionConfiguration(v *ServerSideEncryptionConfiguration) *PutBucketEncryptionInput { + s.ServerSideEncryptionConfiguration = v + return s +} + +type PutBucketEncryptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketEncryptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketEncryptionOutput) GoString() string { + return s.String() +} + +type PutBucketInventoryConfigurationInput struct { + _ struct{} `type:"structure" payload:"InventoryConfiguration"` + + // The name of the bucket where the inventory configuration will be stored. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ID used to identify the inventory configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` + + // Specifies the inventory configuration. 
+ // + // InventoryConfiguration is a required field + InventoryConfiguration *InventoryConfiguration `locationName:"InventoryConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketInventoryConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketInventoryConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketInventoryConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketInventoryConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.InventoryConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("InventoryConfiguration")) + } + if s.InventoryConfiguration != nil { + if err := s.InventoryConfiguration.Validate(); err != nil { + invalidParams.AddNested("InventoryConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketInventoryConfigurationInput) SetBucket(v string) *PutBucketInventoryConfigurationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketInventoryConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *PutBucketInventoryConfigurationInput) SetId(v string) *PutBucketInventoryConfigurationInput { + s.Id = &v + return s +} + +// SetInventoryConfiguration sets the InventoryConfiguration field's value. +func (s *PutBucketInventoryConfigurationInput) SetInventoryConfiguration(v *InventoryConfiguration) *PutBucketInventoryConfigurationInput { + s.InventoryConfiguration = v + return s +} + +type PutBucketInventoryConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketInventoryConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketInventoryConfigurationOutput) GoString() string { + return s.String() +} + +type PutBucketLifecycleConfigurationInput struct { + _ struct{} `type:"structure" payload:"LifecycleConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + LifecycleConfiguration *BucketLifecycleConfiguration `locationName:"LifecycleConfiguration" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketLifecycleConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketLifecycleConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutBucketLifecycleConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketLifecycleConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.LifecycleConfiguration != nil { + if err := s.LifecycleConfiguration.Validate(); err != nil { + invalidParams.AddNested("LifecycleConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketLifecycleConfigurationInput) SetBucket(v string) *PutBucketLifecycleConfigurationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketLifecycleConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetLifecycleConfiguration sets the LifecycleConfiguration field's value. +func (s *PutBucketLifecycleConfigurationInput) SetLifecycleConfiguration(v *BucketLifecycleConfiguration) *PutBucketLifecycleConfigurationInput { + s.LifecycleConfiguration = v + return s +} + +type PutBucketLifecycleConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketLifecycleConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketLifecycleConfigurationOutput) GoString() string { + return s.String() +} + +type PutBucketLifecycleInput struct { + _ struct{} `type:"structure" payload:"LifecycleConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + LifecycleConfiguration *LifecycleConfiguration `locationName:"LifecycleConfiguration" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketLifecycleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketLifecycleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketLifecycleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketLifecycleInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.LifecycleConfiguration != nil { + if err := s.LifecycleConfiguration.Validate(); err != nil { + invalidParams.AddNested("LifecycleConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketLifecycleInput) SetBucket(v string) *PutBucketLifecycleInput { + s.Bucket = &v + return s +} + +func (s *PutBucketLifecycleInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetLifecycleConfiguration sets the LifecycleConfiguration field's value. 
+func (s *PutBucketLifecycleInput) SetLifecycleConfiguration(v *LifecycleConfiguration) *PutBucketLifecycleInput { + s.LifecycleConfiguration = v + return s +} + +type PutBucketLifecycleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketLifecycleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketLifecycleOutput) GoString() string { + return s.String() +} + +type PutBucketLoggingInput struct { + _ struct{} `type:"structure" payload:"BucketLoggingStatus"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // BucketLoggingStatus is a required field + BucketLoggingStatus *BucketLoggingStatus `locationName:"BucketLoggingStatus" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketLoggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketLoggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketLoggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketLoggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.BucketLoggingStatus == nil { + invalidParams.Add(request.NewErrParamRequired("BucketLoggingStatus")) + } + if s.BucketLoggingStatus != nil { + if err := s.BucketLoggingStatus.Validate(); err != nil { + invalidParams.AddNested("BucketLoggingStatus", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketLoggingInput) SetBucket(v string) *PutBucketLoggingInput { + s.Bucket = &v + return s +} + +func (s *PutBucketLoggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetBucketLoggingStatus sets the BucketLoggingStatus field's value. +func (s *PutBucketLoggingInput) SetBucketLoggingStatus(v *BucketLoggingStatus) *PutBucketLoggingInput { + s.BucketLoggingStatus = v + return s +} + +type PutBucketLoggingOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketLoggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketLoggingOutput) GoString() string { + return s.String() +} + +type PutBucketMetricsConfigurationInput struct { + _ struct{} `type:"structure" payload:"MetricsConfiguration"` + + // The name of the bucket for which the metrics configuration is set. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The ID used to identify the metrics configuration. + // + // Id is a required field + Id *string `location:"querystring" locationName:"id" type:"string" required:"true"` + + // Specifies the metrics configuration. 
+ // + // MetricsConfiguration is a required field + MetricsConfiguration *MetricsConfiguration `locationName:"MetricsConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketMetricsConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketMetricsConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketMetricsConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketMetricsConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.MetricsConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("MetricsConfiguration")) + } + if s.MetricsConfiguration != nil { + if err := s.MetricsConfiguration.Validate(); err != nil { + invalidParams.AddNested("MetricsConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketMetricsConfigurationInput) SetBucket(v string) *PutBucketMetricsConfigurationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketMetricsConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetId sets the Id field's value. +func (s *PutBucketMetricsConfigurationInput) SetId(v string) *PutBucketMetricsConfigurationInput { + s.Id = &v + return s +} + +// SetMetricsConfiguration sets the MetricsConfiguration field's value. +func (s *PutBucketMetricsConfigurationInput) SetMetricsConfiguration(v *MetricsConfiguration) *PutBucketMetricsConfigurationInput { + s.MetricsConfiguration = v + return s +} + +type PutBucketMetricsConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketMetricsConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketMetricsConfigurationOutput) GoString() string { + return s.String() +} + +type PutBucketNotificationConfigurationInput struct { + _ struct{} `type:"structure" payload:"NotificationConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Container for specifying the notification configuration of the bucket. If + // this element is empty, notifications are turned off on the bucket. + // + // NotificationConfiguration is a required field + NotificationConfiguration *NotificationConfiguration `locationName:"NotificationConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketNotificationConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketNotificationConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutBucketNotificationConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketNotificationConfigurationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.NotificationConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("NotificationConfiguration")) + } + if s.NotificationConfiguration != nil { + if err := s.NotificationConfiguration.Validate(); err != nil { + invalidParams.AddNested("NotificationConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketNotificationConfigurationInput) SetBucket(v string) *PutBucketNotificationConfigurationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketNotificationConfigurationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetNotificationConfiguration sets the NotificationConfiguration field's value. +func (s *PutBucketNotificationConfigurationInput) SetNotificationConfiguration(v *NotificationConfiguration) *PutBucketNotificationConfigurationInput { + s.NotificationConfiguration = v + return s +} + +type PutBucketNotificationConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketNotificationConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketNotificationConfigurationOutput) GoString() string { + return s.String() +} + +type PutBucketNotificationInput struct { + _ struct{} `type:"structure" payload:"NotificationConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // NotificationConfiguration is a required field + NotificationConfiguration *NotificationConfigurationDeprecated `locationName:"NotificationConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketNotificationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketNotificationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketNotificationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketNotificationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.NotificationConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("NotificationConfiguration")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketNotificationInput) SetBucket(v string) *PutBucketNotificationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketNotificationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetNotificationConfiguration sets the NotificationConfiguration field's value. 
+func (s *PutBucketNotificationInput) SetNotificationConfiguration(v *NotificationConfigurationDeprecated) *PutBucketNotificationInput { + s.NotificationConfiguration = v + return s +} + +type PutBucketNotificationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketNotificationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketNotificationOutput) GoString() string { + return s.String() +} + +type PutBucketPolicyInput struct { + _ struct{} `type:"structure" payload:"Policy"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Set this parameter to true to confirm that you want to remove your permissions + // to change this bucket policy in the future. + ConfirmRemoveSelfBucketAccess *bool `location:"header" locationName:"x-amz-confirm-remove-self-bucket-access" type:"boolean"` + + // The bucket policy as a JSON document. + // + // Policy is a required field + Policy *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s PutBucketPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketPolicyInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Policy == nil { + invalidParams.Add(request.NewErrParamRequired("Policy")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketPolicyInput) SetBucket(v string) *PutBucketPolicyInput { + s.Bucket = &v + return s +} + +func (s *PutBucketPolicyInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetConfirmRemoveSelfBucketAccess sets the ConfirmRemoveSelfBucketAccess field's value. +func (s *PutBucketPolicyInput) SetConfirmRemoveSelfBucketAccess(v bool) *PutBucketPolicyInput { + s.ConfirmRemoveSelfBucketAccess = &v + return s +} + +// SetPolicy sets the Policy field's value. +func (s *PutBucketPolicyInput) SetPolicy(v string) *PutBucketPolicyInput { + s.Policy = &v + return s +} + +type PutBucketPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketPolicyOutput) GoString() string { + return s.String() +} + +type PutBucketReplicationInput struct { + _ struct{} `type:"structure" payload:"ReplicationConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Container for replication rules. You can add as many as 1,000 rules. Total + // replication configuration size can be up to 2 MB. 
+ // + // ReplicationConfiguration is a required field + ReplicationConfiguration *ReplicationConfiguration `locationName:"ReplicationConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketReplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketReplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketReplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketReplicationInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.ReplicationConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("ReplicationConfiguration")) + } + if s.ReplicationConfiguration != nil { + if err := s.ReplicationConfiguration.Validate(); err != nil { + invalidParams.AddNested("ReplicationConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketReplicationInput) SetBucket(v string) *PutBucketReplicationInput { + s.Bucket = &v + return s +} + +func (s *PutBucketReplicationInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetReplicationConfiguration sets the ReplicationConfiguration field's value. +func (s *PutBucketReplicationInput) SetReplicationConfiguration(v *ReplicationConfiguration) *PutBucketReplicationInput { + s.ReplicationConfiguration = v + return s +} + +type PutBucketReplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketReplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketReplicationOutput) GoString() string { + return s.String() +} + +type PutBucketRequestPaymentInput struct { + _ struct{} `type:"structure" payload:"RequestPaymentConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // RequestPaymentConfiguration is a required field + RequestPaymentConfiguration *RequestPaymentConfiguration `locationName:"RequestPaymentConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketRequestPaymentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketRequestPaymentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutBucketRequestPaymentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketRequestPaymentInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.RequestPaymentConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("RequestPaymentConfiguration")) + } + if s.RequestPaymentConfiguration != nil { + if err := s.RequestPaymentConfiguration.Validate(); err != nil { + invalidParams.AddNested("RequestPaymentConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketRequestPaymentInput) SetBucket(v string) *PutBucketRequestPaymentInput { + s.Bucket = &v + return s +} + +func (s *PutBucketRequestPaymentInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetRequestPaymentConfiguration sets the RequestPaymentConfiguration field's value. +func (s *PutBucketRequestPaymentInput) SetRequestPaymentConfiguration(v *RequestPaymentConfiguration) *PutBucketRequestPaymentInput { + s.RequestPaymentConfiguration = v + return s +} + +type PutBucketRequestPaymentOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketRequestPaymentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketRequestPaymentOutput) GoString() string { + return s.String() +} + +type PutBucketTaggingInput struct { + _ struct{} `type:"structure" payload:"Tagging"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Tagging is a required field + Tagging *Tagging `locationName:"Tagging" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketTaggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketTaggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketTaggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketTaggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Tagging == nil { + invalidParams.Add(request.NewErrParamRequired("Tagging")) + } + if s.Tagging != nil { + if err := s.Tagging.Validate(); err != nil { + invalidParams.AddNested("Tagging", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketTaggingInput) SetBucket(v string) *PutBucketTaggingInput { + s.Bucket = &v + return s +} + +func (s *PutBucketTaggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetTagging sets the Tagging field's value. 
+func (s *PutBucketTaggingInput) SetTagging(v *Tagging) *PutBucketTaggingInput { + s.Tagging = v + return s +} + +type PutBucketTaggingOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketTaggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketTaggingOutput) GoString() string { + return s.String() +} + +type PutBucketVersioningInput struct { + _ struct{} `type:"structure" payload:"VersioningConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The concatenation of the authentication device's serial number, a space, + // and the value that is displayed on your authentication device. + MFA *string `location:"header" locationName:"x-amz-mfa" type:"string"` + + // VersioningConfiguration is a required field + VersioningConfiguration *VersioningConfiguration `locationName:"VersioningConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketVersioningInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketVersioningInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketVersioningInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketVersioningInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.VersioningConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("VersioningConfiguration")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketVersioningInput) SetBucket(v string) *PutBucketVersioningInput { + s.Bucket = &v + return s +} + +func (s *PutBucketVersioningInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetMFA sets the MFA field's value. +func (s *PutBucketVersioningInput) SetMFA(v string) *PutBucketVersioningInput { + s.MFA = &v + return s +} + +// SetVersioningConfiguration sets the VersioningConfiguration field's value. 
+func (s *PutBucketVersioningInput) SetVersioningConfiguration(v *VersioningConfiguration) *PutBucketVersioningInput { + s.VersioningConfiguration = v + return s +} + +type PutBucketVersioningOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketVersioningOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketVersioningOutput) GoString() string { + return s.String() +} + +type PutBucketWebsiteInput struct { + _ struct{} `type:"structure" payload:"WebsiteConfiguration"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // WebsiteConfiguration is a required field + WebsiteConfiguration *WebsiteConfiguration `locationName:"WebsiteConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketWebsiteInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketWebsiteInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketWebsiteInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketWebsiteInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.WebsiteConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("WebsiteConfiguration")) + } + if s.WebsiteConfiguration != nil { + if err := s.WebsiteConfiguration.Validate(); err != nil { + invalidParams.AddNested("WebsiteConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketWebsiteInput) SetBucket(v string) *PutBucketWebsiteInput { + s.Bucket = &v + return s +} + +func (s *PutBucketWebsiteInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetWebsiteConfiguration sets the WebsiteConfiguration field's value. +func (s *PutBucketWebsiteInput) SetWebsiteConfiguration(v *WebsiteConfiguration) *PutBucketWebsiteInput { + s.WebsiteConfiguration = v + return s +} + +type PutBucketWebsiteOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketWebsiteOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketWebsiteOutput) GoString() string { + return s.String() +} + +type PutObjectAclInput struct { + _ struct{} `type:"structure" payload:"AccessControlPolicy"` + + // The canned ACL to apply to the object. + ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` + + AccessControlPolicy *AccessControlPolicy `locationName:"AccessControlPolicy" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Allows grantee the read, write, read ACP, and write ACP permissions on the + // bucket. + GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` + + // Allows grantee to list the objects in the bucket. 
+ GrantRead *string `location:"header" locationName:"x-amz-grant-read" type:"string"` + + // Allows grantee to read the bucket ACL. + GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` + + // Allows grantee to create, overwrite, and delete any object in the bucket. + GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` + + // Allows grantee to write the ACL for the applicable bucket. + GrantWriteACP *string `location:"header" locationName:"x-amz-grant-write-acp" type:"string"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // VersionId used to reference a specific version of the object. + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s PutObjectAclInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutObjectAclInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutObjectAclInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutObjectAclInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.AccessControlPolicy != nil { + if err := s.AccessControlPolicy.Validate(); err != nil { + invalidParams.AddNested("AccessControlPolicy", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetACL sets the ACL field's value. +func (s *PutObjectAclInput) SetACL(v string) *PutObjectAclInput { + s.ACL = &v + return s +} + +// SetAccessControlPolicy sets the AccessControlPolicy field's value. +func (s *PutObjectAclInput) SetAccessControlPolicy(v *AccessControlPolicy) *PutObjectAclInput { + s.AccessControlPolicy = v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *PutObjectAclInput) SetBucket(v string) *PutObjectAclInput { + s.Bucket = &v + return s +} + +func (s *PutObjectAclInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetGrantFullControl sets the GrantFullControl field's value. +func (s *PutObjectAclInput) SetGrantFullControl(v string) *PutObjectAclInput { + s.GrantFullControl = &v + return s +} + +// SetGrantRead sets the GrantRead field's value. +func (s *PutObjectAclInput) SetGrantRead(v string) *PutObjectAclInput { + s.GrantRead = &v + return s +} + +// SetGrantReadACP sets the GrantReadACP field's value. +func (s *PutObjectAclInput) SetGrantReadACP(v string) *PutObjectAclInput { + s.GrantReadACP = &v + return s +} + +// SetGrantWrite sets the GrantWrite field's value. 
+func (s *PutObjectAclInput) SetGrantWrite(v string) *PutObjectAclInput { + s.GrantWrite = &v + return s +} + +// SetGrantWriteACP sets the GrantWriteACP field's value. +func (s *PutObjectAclInput) SetGrantWriteACP(v string) *PutObjectAclInput { + s.GrantWriteACP = &v + return s +} + +// SetKey sets the Key field's value. +func (s *PutObjectAclInput) SetKey(v string) *PutObjectAclInput { + s.Key = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *PutObjectAclInput) SetRequestPayer(v string) *PutObjectAclInput { + s.RequestPayer = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *PutObjectAclInput) SetVersionId(v string) *PutObjectAclInput { + s.VersionId = &v + return s +} + +type PutObjectAclOutput struct { + _ struct{} `type:"structure"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` +} + +// String returns the string representation +func (s PutObjectAclOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutObjectAclOutput) GoString() string { + return s.String() +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *PutObjectAclOutput) SetRequestCharged(v string) *PutObjectAclOutput { + s.RequestCharged = &v + return s +} + +type PutObjectInput struct { + _ struct{} `type:"structure" payload:"Body"` + + // The canned ACL to apply to the object. + ACL *string `location:"header" locationName:"x-amz-acl" type:"string" enum:"ObjectCannedACL"` + + // Object data. + Body io.ReadSeeker `type:"blob"` + + // Name of the bucket to which the PUT operation was initiated. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Specifies caching behavior along the request/reply chain. + CacheControl *string `location:"header" locationName:"Cache-Control" type:"string"` + + // Specifies presentational information for the object. + ContentDisposition *string `location:"header" locationName:"Content-Disposition" type:"string"` + + // Specifies what content encodings have been applied to the object and thus + // what decoding mechanisms must be applied to obtain the media-type referenced + // by the Content-Type header field. + ContentEncoding *string `location:"header" locationName:"Content-Encoding" type:"string"` + + // The language the content is in. + ContentLanguage *string `location:"header" locationName:"Content-Language" type:"string"` + + // Size of the body in bytes. This parameter is useful when the size of the + // body cannot be determined automatically. + ContentLength *int64 `location:"header" locationName:"Content-Length" type:"long"` + + // The base64-encoded 128-bit MD5 digest of the part data. + ContentMD5 *string `location:"header" locationName:"Content-MD5" type:"string"` + + // A standard MIME type describing the format of the object data. + ContentType *string `location:"header" locationName:"Content-Type" type:"string"` + + // The date and time at which the object is no longer cacheable. + Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp" timestampFormat:"rfc822"` + + // Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. 
+ GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"`
+
+ // Allows grantee to read the object data and its metadata.
+ GrantRead *string `location:"header" locationName:"x-amz-grant-read" type:"string"`
+
+ // Allows grantee to read the object ACL.
+ GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"`
+
+ // Allows grantee to write the ACL for the applicable object.
+ GrantWriteACP *string `location:"header" locationName:"x-amz-grant-write-acp" type:"string"`
+
+ // Object key for which the PUT operation was initiated.
+ //
+ // Key is a required field
+ Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"`
+
+ // A map of metadata to store with the object in S3.
+ Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"`
+
+ // Confirms that the requester knows that she or he will be charged for the
+ // request. Bucket owners need not specify this parameter in their requests.
+ // Documentation on downloading objects from requester pays buckets can be found
+ // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html
+ RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
+
+ // Specifies the algorithm to use when encrypting the object (e.g., AES256).
+ SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"`
+
+ // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting
+ // data. This value is used to store the object and then it is discarded; Amazon
+ // does not store the encryption key. The key must be appropriate for use with
+ // the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
+ // header.
+ SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"`
+
+ // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321.
+ // Amazon S3 uses this header for a message integrity check to ensure the encryption
+ // key was transmitted without error.
+ SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"`
+
+ // Specifies the AWS KMS key ID to use for object encryption. All GET and PUT
+ // requests for an object protected by AWS KMS will fail if not made via SSL
+ // or using SigV4. Documentation on configuring any of the officially supported
+ // AWS SDKs and CLI can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version
+ SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"`
+
+ // The Server-side encryption algorithm used when storing this object in S3
+ // (e.g., AES256, aws:kms).
+ ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"`
+
+ // The type of storage to use for the object. Defaults to 'STANDARD'.
+ StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"`
+
+ // The tag-set for the object. 
The tag-set must be encoded as URL Query parameters + Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"` + + // If the bucket is configured as a website, redirects requests for this object + // to another object in the same bucket or to an external URL. Amazon S3 stores + // the value of this header in the object metadata. + WebsiteRedirectLocation *string `location:"header" locationName:"x-amz-website-redirect-location" type:"string"` +} + +// String returns the string representation +func (s PutObjectInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutObjectInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutObjectInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutObjectInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetACL sets the ACL field's value. +func (s *PutObjectInput) SetACL(v string) *PutObjectInput { + s.ACL = &v + return s +} + +// SetBody sets the Body field's value. +func (s *PutObjectInput) SetBody(v io.ReadSeeker) *PutObjectInput { + s.Body = v + return s +} + +// SetBucket sets the Bucket field's value. +func (s *PutObjectInput) SetBucket(v string) *PutObjectInput { + s.Bucket = &v + return s +} + +func (s *PutObjectInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCacheControl sets the CacheControl field's value. +func (s *PutObjectInput) SetCacheControl(v string) *PutObjectInput { + s.CacheControl = &v + return s +} + +// SetContentDisposition sets the ContentDisposition field's value. +func (s *PutObjectInput) SetContentDisposition(v string) *PutObjectInput { + s.ContentDisposition = &v + return s +} + +// SetContentEncoding sets the ContentEncoding field's value. +func (s *PutObjectInput) SetContentEncoding(v string) *PutObjectInput { + s.ContentEncoding = &v + return s +} + +// SetContentLanguage sets the ContentLanguage field's value. +func (s *PutObjectInput) SetContentLanguage(v string) *PutObjectInput { + s.ContentLanguage = &v + return s +} + +// SetContentLength sets the ContentLength field's value. +func (s *PutObjectInput) SetContentLength(v int64) *PutObjectInput { + s.ContentLength = &v + return s +} + +// SetContentMD5 sets the ContentMD5 field's value. +func (s *PutObjectInput) SetContentMD5(v string) *PutObjectInput { + s.ContentMD5 = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *PutObjectInput) SetContentType(v string) *PutObjectInput { + s.ContentType = &v + return s +} + +// SetExpires sets the Expires field's value. +func (s *PutObjectInput) SetExpires(v time.Time) *PutObjectInput { + s.Expires = &v + return s +} + +// SetGrantFullControl sets the GrantFullControl field's value. +func (s *PutObjectInput) SetGrantFullControl(v string) *PutObjectInput { + s.GrantFullControl = &v + return s +} + +// SetGrantRead sets the GrantRead field's value. +func (s *PutObjectInput) SetGrantRead(v string) *PutObjectInput { + s.GrantRead = &v + return s +} + +// SetGrantReadACP sets the GrantReadACP field's value. 
+func (s *PutObjectInput) SetGrantReadACP(v string) *PutObjectInput { + s.GrantReadACP = &v + return s +} + +// SetGrantWriteACP sets the GrantWriteACP field's value. +func (s *PutObjectInput) SetGrantWriteACP(v string) *PutObjectInput { + s.GrantWriteACP = &v + return s +} + +// SetKey sets the Key field's value. +func (s *PutObjectInput) SetKey(v string) *PutObjectInput { + s.Key = &v + return s +} + +// SetMetadata sets the Metadata field's value. +func (s *PutObjectInput) SetMetadata(v map[string]*string) *PutObjectInput { + s.Metadata = v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *PutObjectInput) SetRequestPayer(v string) *PutObjectInput { + s.RequestPayer = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *PutObjectInput) SetSSECustomerAlgorithm(v string) *PutObjectInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *PutObjectInput) SetSSECustomerKey(v string) *PutObjectInput { + s.SSECustomerKey = &v + return s +} + +func (s *PutObjectInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *PutObjectInput) SetSSECustomerKeyMD5(v string) *PutObjectInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *PutObjectInput) SetSSEKMSKeyId(v string) *PutObjectInput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *PutObjectInput) SetServerSideEncryption(v string) *PutObjectInput { + s.ServerSideEncryption = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *PutObjectInput) SetStorageClass(v string) *PutObjectInput { + s.StorageClass = &v + return s +} + +// SetTagging sets the Tagging field's value. +func (s *PutObjectInput) SetTagging(v string) *PutObjectInput { + s.Tagging = &v + return s +} + +// SetWebsiteRedirectLocation sets the WebsiteRedirectLocation field's value. +func (s *PutObjectInput) SetWebsiteRedirectLocation(v string) *PutObjectInput { + s.WebsiteRedirectLocation = &v + return s +} + +type PutObjectOutput struct { + _ struct{} `type:"structure"` + + // Entity tag for the uploaded object. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // If the object expiration is configured, this will contain the expiration + // date (expiry-date) and rule ID (rule-id). The value of rule-id is URL encoded. + Expiration *string `location:"header" locationName:"x-amz-expiration" type:"string"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. 
+ SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` + + // Version of the object. + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s PutObjectOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutObjectOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *PutObjectOutput) SetETag(v string) *PutObjectOutput { + s.ETag = &v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *PutObjectOutput) SetExpiration(v string) *PutObjectOutput { + s.Expiration = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *PutObjectOutput) SetRequestCharged(v string) *PutObjectOutput { + s.RequestCharged = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *PutObjectOutput) SetSSECustomerAlgorithm(v string) *PutObjectOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *PutObjectOutput) SetSSECustomerKeyMD5(v string) *PutObjectOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *PutObjectOutput) SetSSEKMSKeyId(v string) *PutObjectOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *PutObjectOutput) SetServerSideEncryption(v string) *PutObjectOutput { + s.ServerSideEncryption = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *PutObjectOutput) SetVersionId(v string) *PutObjectOutput { + s.VersionId = &v + return s +} + +type PutObjectTaggingInput struct { + _ struct{} `type:"structure" payload:"Tagging"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Tagging is a required field + Tagging *Tagging `locationName:"Tagging" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s PutObjectTaggingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutObjectTaggingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
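A minimal usage sketch for the PutObjectInput type and its chained Set* helpers defined above; the bucket name, key, and body are placeholders, and credentials/region are assumed to come from the SDK's default session chain rather than anything this repository configures:

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Credentials and region are resolved from the environment / shared config.
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// Build the request input with the chained Set* helpers generated above.
	input := &s3.PutObjectInput{}
	input.SetBucket("example-bucket").
		SetKey("examples/hello.txt").
		SetBody(strings.NewReader("hello world")).
		SetServerSideEncryption(s3.ServerSideEncryptionAes256)

	out, err := svc.PutObject(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ETag:", aws.StringValue(out.ETag))
}

The later sketches in this section assume the same imports and an *s3.S3 client constructed this way.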
+func (s *PutObjectTaggingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutObjectTaggingInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Tagging == nil { + invalidParams.Add(request.NewErrParamRequired("Tagging")) + } + if s.Tagging != nil { + if err := s.Tagging.Validate(); err != nil { + invalidParams.AddNested("Tagging", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutObjectTaggingInput) SetBucket(v string) *PutObjectTaggingInput { + s.Bucket = &v + return s +} + +func (s *PutObjectTaggingInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *PutObjectTaggingInput) SetKey(v string) *PutObjectTaggingInput { + s.Key = &v + return s +} + +// SetTagging sets the Tagging field's value. +func (s *PutObjectTaggingInput) SetTagging(v *Tagging) *PutObjectTaggingInput { + s.Tagging = v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *PutObjectTaggingInput) SetVersionId(v string) *PutObjectTaggingInput { + s.VersionId = &v + return s +} + +type PutObjectTaggingOutput struct { + _ struct{} `type:"structure"` + + VersionId *string `location:"header" locationName:"x-amz-version-id" type:"string"` +} + +// String returns the string representation +func (s PutObjectTaggingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutObjectTaggingOutput) GoString() string { + return s.String() +} + +// SetVersionId sets the VersionId field's value. +func (s *PutObjectTaggingOutput) SetVersionId(v string) *PutObjectTaggingOutput { + s.VersionId = &v + return s +} + +// Container for specifying an configuration when you want Amazon S3 to publish +// events to an Amazon Simple Queue Service (Amazon SQS) queue. +type QueueConfiguration struct { + _ struct{} `type:"structure"` + + // Events is a required field + Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"` + + // Container for object key name filtering rules. For information about key + // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + Filter *NotificationConfigurationFilter `type:"structure"` + + // Optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + // Amazon SQS queue ARN to which Amazon S3 will publish a message when it detects + // events of specified type. + // + // QueueArn is a required field + QueueArn *string `locationName:"Queue" type:"string" required:"true"` +} + +// String returns the string representation +func (s QueueConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueueConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *QueueConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueueConfiguration"} + if s.Events == nil { + invalidParams.Add(request.NewErrParamRequired("Events")) + } + if s.QueueArn == nil { + invalidParams.Add(request.NewErrParamRequired("QueueArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEvents sets the Events field's value. +func (s *QueueConfiguration) SetEvents(v []*string) *QueueConfiguration { + s.Events = v + return s +} + +// SetFilter sets the Filter field's value. +func (s *QueueConfiguration) SetFilter(v *NotificationConfigurationFilter) *QueueConfiguration { + s.Filter = v + return s +} + +// SetId sets the Id field's value. +func (s *QueueConfiguration) SetId(v string) *QueueConfiguration { + s.Id = &v + return s +} + +// SetQueueArn sets the QueueArn field's value. +func (s *QueueConfiguration) SetQueueArn(v string) *QueueConfiguration { + s.QueueArn = &v + return s +} + +type QueueConfigurationDeprecated struct { + _ struct{} `type:"structure"` + + // Bucket event for which to send notifications. + Event *string `deprecated:"true" type:"string" enum:"Event"` + + Events []*string `locationName:"Event" type:"list" flattened:"true"` + + // Optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + Queue *string `type:"string"` +} + +// String returns the string representation +func (s QueueConfigurationDeprecated) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueueConfigurationDeprecated) GoString() string { + return s.String() +} + +// SetEvent sets the Event field's value. +func (s *QueueConfigurationDeprecated) SetEvent(v string) *QueueConfigurationDeprecated { + s.Event = &v + return s +} + +// SetEvents sets the Events field's value. +func (s *QueueConfigurationDeprecated) SetEvents(v []*string) *QueueConfigurationDeprecated { + s.Events = v + return s +} + +// SetId sets the Id field's value. +func (s *QueueConfigurationDeprecated) SetId(v string) *QueueConfigurationDeprecated { + s.Id = &v + return s +} + +// SetQueue sets the Queue field's value. +func (s *QueueConfigurationDeprecated) SetQueue(v string) *QueueConfigurationDeprecated { + s.Queue = &v + return s +} + +type Redirect struct { + _ struct{} `type:"structure"` + + // The host name to use in the redirect request. + HostName *string `type:"string"` + + // The HTTP redirect code to use on the response. Not required if one of the + // siblings is present. + HttpRedirectCode *string `type:"string"` + + // Protocol to use (http, https) when redirecting requests. The default is the + // protocol that is used in the original request. + Protocol *string `type:"string" enum:"Protocol"` + + // The object key prefix to use in the redirect request. For example, to redirect + // requests for all pages with prefix docs/ (objects in the docs/ folder) to + // documents/, you can set a condition block with KeyPrefixEquals set to docs/ + // and in the Redirect set ReplaceKeyPrefixWith to /documents. Not required + // if one of the siblings is present. Can be present only if ReplaceKeyWith + // is not provided. + ReplaceKeyPrefixWith *string `type:"string"` + + // The specific object key to use in the redirect request. For example, redirect + // request to error.html. Not required if one of the sibling is present. 
Can + // be present only if ReplaceKeyPrefixWith is not provided. + ReplaceKeyWith *string `type:"string"` +} + +// String returns the string representation +func (s Redirect) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Redirect) GoString() string { + return s.String() +} + +// SetHostName sets the HostName field's value. +func (s *Redirect) SetHostName(v string) *Redirect { + s.HostName = &v + return s +} + +// SetHttpRedirectCode sets the HttpRedirectCode field's value. +func (s *Redirect) SetHttpRedirectCode(v string) *Redirect { + s.HttpRedirectCode = &v + return s +} + +// SetProtocol sets the Protocol field's value. +func (s *Redirect) SetProtocol(v string) *Redirect { + s.Protocol = &v + return s +} + +// SetReplaceKeyPrefixWith sets the ReplaceKeyPrefixWith field's value. +func (s *Redirect) SetReplaceKeyPrefixWith(v string) *Redirect { + s.ReplaceKeyPrefixWith = &v + return s +} + +// SetReplaceKeyWith sets the ReplaceKeyWith field's value. +func (s *Redirect) SetReplaceKeyWith(v string) *Redirect { + s.ReplaceKeyWith = &v + return s +} + +type RedirectAllRequestsTo struct { + _ struct{} `type:"structure"` + + // Name of the host where requests will be redirected. + // + // HostName is a required field + HostName *string `type:"string" required:"true"` + + // Protocol to use (http, https) when redirecting requests. The default is the + // protocol that is used in the original request. + Protocol *string `type:"string" enum:"Protocol"` +} + +// String returns the string representation +func (s RedirectAllRequestsTo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RedirectAllRequestsTo) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RedirectAllRequestsTo) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RedirectAllRequestsTo"} + if s.HostName == nil { + invalidParams.Add(request.NewErrParamRequired("HostName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHostName sets the HostName field's value. +func (s *RedirectAllRequestsTo) SetHostName(v string) *RedirectAllRequestsTo { + s.HostName = &v + return s +} + +// SetProtocol sets the Protocol field's value. +func (s *RedirectAllRequestsTo) SetProtocol(v string) *RedirectAllRequestsTo { + s.Protocol = &v + return s +} + +// Container for replication rules. You can add as many as 1,000 rules. Total +// replication configuration size can be up to 2 MB. +type ReplicationConfiguration struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of an IAM role for Amazon S3 to assume when replicating + // the objects. + // + // Role is a required field + Role *string `type:"string" required:"true"` + + // Container for information about a particular replication rule. Replication + // configuration must have at least one rule and can contain up to 1,000 rules. + // + // Rules is a required field + Rules []*ReplicationRule `locationName:"Rule" type:"list" flattened:"true" required:"true"` +} + +// String returns the string representation +func (s ReplicationConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicationConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ReplicationConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicationConfiguration"} + if s.Role == nil { + invalidParams.Add(request.NewErrParamRequired("Role")) + } + if s.Rules == nil { + invalidParams.Add(request.NewErrParamRequired("Rules")) + } + if s.Rules != nil { + for i, v := range s.Rules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Rules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRole sets the Role field's value. +func (s *ReplicationConfiguration) SetRole(v string) *ReplicationConfiguration { + s.Role = &v + return s +} + +// SetRules sets the Rules field's value. +func (s *ReplicationConfiguration) SetRules(v []*ReplicationRule) *ReplicationConfiguration { + s.Rules = v + return s +} + +// Container for information about a particular replication rule. +type ReplicationRule struct { + _ struct{} `type:"structure"` + + // Container for replication destination information. + // + // Destination is a required field + Destination *Destination `type:"structure" required:"true"` + + // Unique identifier for the rule. The value cannot be longer than 255 characters. + ID *string `type:"string"` + + // Object keyname prefix identifying one or more objects to which the rule applies. + // Maximum prefix length can be up to 1,024 characters. Overlapping prefixes + // are not supported. + // + // Prefix is a required field + Prefix *string `type:"string" required:"true"` + + // Container for filters that define which source objects should be replicated. + SourceSelectionCriteria *SourceSelectionCriteria `type:"structure"` + + // The rule is ignored if status is not Enabled. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"ReplicationRuleStatus"` +} + +// String returns the string representation +func (s ReplicationRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicationRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReplicationRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicationRule"} + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) + } + if s.Prefix == nil { + invalidParams.Add(request.NewErrParamRequired("Prefix")) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + if s.Destination != nil { + if err := s.Destination.Validate(); err != nil { + invalidParams.AddNested("Destination", err.(request.ErrInvalidParams)) + } + } + if s.SourceSelectionCriteria != nil { + if err := s.SourceSelectionCriteria.Validate(); err != nil { + invalidParams.AddNested("SourceSelectionCriteria", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestination sets the Destination field's value. +func (s *ReplicationRule) SetDestination(v *Destination) *ReplicationRule { + s.Destination = v + return s +} + +// SetID sets the ID field's value. +func (s *ReplicationRule) SetID(v string) *ReplicationRule { + s.ID = &v + return s +} + +// SetPrefix sets the Prefix field's value. 
+func (s *ReplicationRule) SetPrefix(v string) *ReplicationRule { + s.Prefix = &v + return s +} + +// SetSourceSelectionCriteria sets the SourceSelectionCriteria field's value. +func (s *ReplicationRule) SetSourceSelectionCriteria(v *SourceSelectionCriteria) *ReplicationRule { + s.SourceSelectionCriteria = v + return s +} + +// SetStatus sets the Status field's value. +func (s *ReplicationRule) SetStatus(v string) *ReplicationRule { + s.Status = &v + return s +} + +type RequestPaymentConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies who pays for the download and request fees. + // + // Payer is a required field + Payer *string `type:"string" required:"true" enum:"Payer"` +} + +// String returns the string representation +func (s RequestPaymentConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RequestPaymentConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RequestPaymentConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RequestPaymentConfiguration"} + if s.Payer == nil { + invalidParams.Add(request.NewErrParamRequired("Payer")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPayer sets the Payer field's value. +func (s *RequestPaymentConfiguration) SetPayer(v string) *RequestPaymentConfiguration { + s.Payer = &v + return s +} + +type RestoreObjectInput struct { + _ struct{} `type:"structure" payload:"RestoreRequest"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Container for restore job parameters. + RestoreRequest *RestoreRequest `locationName:"RestoreRequest" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + VersionId *string `location:"querystring" locationName:"versionId" type:"string"` +} + +// String returns the string representation +func (s RestoreObjectInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreObjectInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
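A hedged sketch of wiring the ReplicationConfiguration and ReplicationRule types above into a PutBucketReplication call; the role ARN, bucket names, and prefix are placeholders, and the *s3.S3 client (svc) and imports come from the PutObject sketch earlier:

// Assumes the same imports and *s3.S3 client as the PutObject sketch above.
func enableReplication(svc *s3.S3) error {
	_, err := svc.PutBucketReplication(&s3.PutBucketReplicationInput{
		Bucket: aws.String("example-source-bucket"), // placeholder
		ReplicationConfiguration: &s3.ReplicationConfiguration{
			// IAM role Amazon S3 assumes when replicating objects (placeholder ARN).
			Role: aws.String("arn:aws:iam::123456789012:role/example-replication-role"),
			Rules: []*s3.ReplicationRule{{
				ID:     aws.String("replicate-logs"),
				Prefix: aws.String("logs/"),
				Status: aws.String(s3.ReplicationRuleStatusEnabled),
				Destination: &s3.Destination{
					// The destination bucket is given as an ARN.
					Bucket: aws.String("arn:aws:s3:::example-destination-bucket"),
				},
			}},
		},
	})
	return err
}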
+func (s *RestoreObjectInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreObjectInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.RestoreRequest != nil { + if err := s.RestoreRequest.Validate(); err != nil { + invalidParams.AddNested("RestoreRequest", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *RestoreObjectInput) SetBucket(v string) *RestoreObjectInput { + s.Bucket = &v + return s +} + +func (s *RestoreObjectInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetKey sets the Key field's value. +func (s *RestoreObjectInput) SetKey(v string) *RestoreObjectInput { + s.Key = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *RestoreObjectInput) SetRequestPayer(v string) *RestoreObjectInput { + s.RequestPayer = &v + return s +} + +// SetRestoreRequest sets the RestoreRequest field's value. +func (s *RestoreObjectInput) SetRestoreRequest(v *RestoreRequest) *RestoreObjectInput { + s.RestoreRequest = v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *RestoreObjectInput) SetVersionId(v string) *RestoreObjectInput { + s.VersionId = &v + return s +} + +type RestoreObjectOutput struct { + _ struct{} `type:"structure"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // Indicates the path in the provided S3 output location where Select results + // will be restored to. + RestoreOutputPath *string `location:"header" locationName:"x-amz-restore-output-path" type:"string"` +} + +// String returns the string representation +func (s RestoreObjectOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreObjectOutput) GoString() string { + return s.String() +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *RestoreObjectOutput) SetRequestCharged(v string) *RestoreObjectOutput { + s.RequestCharged = &v + return s +} + +// SetRestoreOutputPath sets the RestoreOutputPath field's value. +func (s *RestoreObjectOutput) SetRestoreOutputPath(v string) *RestoreObjectOutput { + s.RestoreOutputPath = &v + return s +} + +// Container for restore job parameters. +type RestoreRequest struct { + _ struct{} `type:"structure"` + + // Lifetime of the active copy in days. Do not use with restores that specify + // OutputLocation. + Days *int64 `type:"integer"` + + // The optional description for the job. + Description *string `type:"string"` + + // Glacier related parameters pertaining to this job. Do not use with restores + // that specify OutputLocation. + GlacierJobParameters *GlacierJobParameters `type:"structure"` + + // Describes the location where the restore job's output is stored. + OutputLocation *OutputLocation `type:"structure"` + + // Describes the parameters for Select job types. + SelectParameters *SelectParameters `type:"structure"` + + // Glacier retrieval tier at which the restore will be processed. 
+ Tier *string `type:"string" enum:"Tier"` + + // Type of restore request. + Type *string `type:"string" enum:"RestoreRequestType"` +} + +// String returns the string representation +func (s RestoreRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreRequest"} + if s.GlacierJobParameters != nil { + if err := s.GlacierJobParameters.Validate(); err != nil { + invalidParams.AddNested("GlacierJobParameters", err.(request.ErrInvalidParams)) + } + } + if s.OutputLocation != nil { + if err := s.OutputLocation.Validate(); err != nil { + invalidParams.AddNested("OutputLocation", err.(request.ErrInvalidParams)) + } + } + if s.SelectParameters != nil { + if err := s.SelectParameters.Validate(); err != nil { + invalidParams.AddNested("SelectParameters", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDays sets the Days field's value. +func (s *RestoreRequest) SetDays(v int64) *RestoreRequest { + s.Days = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *RestoreRequest) SetDescription(v string) *RestoreRequest { + s.Description = &v + return s +} + +// SetGlacierJobParameters sets the GlacierJobParameters field's value. +func (s *RestoreRequest) SetGlacierJobParameters(v *GlacierJobParameters) *RestoreRequest { + s.GlacierJobParameters = v + return s +} + +// SetOutputLocation sets the OutputLocation field's value. +func (s *RestoreRequest) SetOutputLocation(v *OutputLocation) *RestoreRequest { + s.OutputLocation = v + return s +} + +// SetSelectParameters sets the SelectParameters field's value. +func (s *RestoreRequest) SetSelectParameters(v *SelectParameters) *RestoreRequest { + s.SelectParameters = v + return s +} + +// SetTier sets the Tier field's value. +func (s *RestoreRequest) SetTier(v string) *RestoreRequest { + s.Tier = &v + return s +} + +// SetType sets the Type field's value. +func (s *RestoreRequest) SetType(v string) *RestoreRequest { + s.Type = &v + return s +} + +type RoutingRule struct { + _ struct{} `type:"structure"` + + // A container for describing a condition that must be met for the specified + // redirect to apply. For example, 1. If request is for pages in the /docs folder, + // redirect to the /documents folder. 2. If request results in HTTP error 4xx, + // redirect request to another host where you might process the error. + Condition *Condition `type:"structure"` + + // Container for redirect information. You can redirect requests to another + // host, to another page, or with another protocol. In the event of an error, + // you can can specify a different error code to return. + // + // Redirect is a required field + Redirect *Redirect `type:"structure" required:"true"` +} + +// String returns the string representation +func (s RoutingRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RoutingRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
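A small sketch, under the same client assumptions as above, of submitting the RestoreRequest defined in this hunk to bring an archived object back for a few days; the bucket, key, and retrieval tier are placeholders:

// Assumes the same imports and *s3.S3 client as the earlier sketches.
func restoreArchivedObject(svc *s3.S3) error {
	_, err := svc.RestoreObject(&s3.RestoreObjectInput{
		Bucket: aws.String("example-bucket"),     // placeholder
		Key:    aws.String("archive/report.csv"), // placeholder
		RestoreRequest: &s3.RestoreRequest{
			Days: aws.Int64(7), // keep the restored copy active for 7 days
			GlacierJobParameters: &s3.GlacierJobParameters{
				Tier: aws.String(s3.TierStandard), // Standard retrieval tier
			},
		},
	})
	return err
}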
+func (s *RoutingRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RoutingRule"} + if s.Redirect == nil { + invalidParams.Add(request.NewErrParamRequired("Redirect")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCondition sets the Condition field's value. +func (s *RoutingRule) SetCondition(v *Condition) *RoutingRule { + s.Condition = v + return s +} + +// SetRedirect sets the Redirect field's value. +func (s *RoutingRule) SetRedirect(v *Redirect) *RoutingRule { + s.Redirect = v + return s +} + +type Rule struct { + _ struct{} `type:"structure"` + + // Specifies the days since the initiation of an Incomplete Multipart Upload + // that Lifecycle will wait before permanently removing all parts of the upload. + AbortIncompleteMultipartUpload *AbortIncompleteMultipartUpload `type:"structure"` + + Expiration *LifecycleExpiration `type:"structure"` + + // Unique identifier for the rule. The value cannot be longer than 255 characters. + ID *string `type:"string"` + + // Specifies when noncurrent object versions expire. Upon expiration, Amazon + // S3 permanently deletes the noncurrent object versions. You set this lifecycle + // configuration action on a bucket that has versioning enabled (or suspended) + // to request that Amazon S3 delete noncurrent object versions at a specific + // period in the object's lifetime. + NoncurrentVersionExpiration *NoncurrentVersionExpiration `type:"structure"` + + // Container for the transition rule that describes when noncurrent objects + // transition to the STANDARD_IA, ONEZONE_IA or GLACIER storage class. If your + // bucket is versioning-enabled (or versioning is suspended), you can set this + // action to request that Amazon S3 transition noncurrent object versions to + // the STANDARD_IA, ONEZONE_IA or GLACIER storage class at a specific period + // in the object's lifetime. + NoncurrentVersionTransition *NoncurrentVersionTransition `type:"structure"` + + // Prefix identifying one or more objects to which the rule applies. + // + // Prefix is a required field + Prefix *string `type:"string" required:"true"` + + // If 'Enabled', the rule is currently being applied. If 'Disabled', the rule + // is not currently being applied. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"ExpirationStatus"` + + Transition *Transition `type:"structure"` +} + +// String returns the string representation +func (s Rule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Rule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Rule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Rule"} + if s.Prefix == nil { + invalidParams.Add(request.NewErrParamRequired("Prefix")) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAbortIncompleteMultipartUpload sets the AbortIncompleteMultipartUpload field's value. +func (s *Rule) SetAbortIncompleteMultipartUpload(v *AbortIncompleteMultipartUpload) *Rule { + s.AbortIncompleteMultipartUpload = v + return s +} + +// SetExpiration sets the Expiration field's value. +func (s *Rule) SetExpiration(v *LifecycleExpiration) *Rule { + s.Expiration = v + return s +} + +// SetID sets the ID field's value. 
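The Redirect, Condition, and RoutingRule types above compose into a bucket website configuration; here is a sketch of the docs/ to documents/ prefix redirect described in the Redirect field comment, with a placeholder bucket and index document, assuming the same client as before:

// Assumes the same imports and *s3.S3 client as the earlier sketches.
func configureWebsiteRedirect(svc *s3.S3) error {
	_, err := svc.PutBucketWebsite(&s3.PutBucketWebsiteInput{
		Bucket: aws.String("example-website-bucket"), // placeholder
		WebsiteConfiguration: &s3.WebsiteConfiguration{
			IndexDocument: &s3.IndexDocument{Suffix: aws.String("index.html")},
			RoutingRules: []*s3.RoutingRule{{
				// Redirect requests for keys under docs/ to documents/.
				Condition: &s3.Condition{KeyPrefixEquals: aws.String("docs/")},
				Redirect:  &s3.Redirect{ReplaceKeyPrefixWith: aws.String("documents/")},
			}},
		},
	})
	return err
}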
+func (s *Rule) SetID(v string) *Rule { + s.ID = &v + return s +} + +// SetNoncurrentVersionExpiration sets the NoncurrentVersionExpiration field's value. +func (s *Rule) SetNoncurrentVersionExpiration(v *NoncurrentVersionExpiration) *Rule { + s.NoncurrentVersionExpiration = v + return s +} + +// SetNoncurrentVersionTransition sets the NoncurrentVersionTransition field's value. +func (s *Rule) SetNoncurrentVersionTransition(v *NoncurrentVersionTransition) *Rule { + s.NoncurrentVersionTransition = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *Rule) SetPrefix(v string) *Rule { + s.Prefix = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *Rule) SetStatus(v string) *Rule { + s.Status = &v + return s +} + +// SetTransition sets the Transition field's value. +func (s *Rule) SetTransition(v *Transition) *Rule { + s.Transition = v + return s +} + +// Specifies the use of SSE-KMS to encrypt delievered Inventory reports. +type SSEKMS struct { + _ struct{} `locationName:"SSE-KMS" type:"structure"` + + // Specifies the ID of the AWS Key Management Service (KMS) master encryption + // key to use for encrypting Inventory reports. + // + // KeyId is a required field + KeyId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s SSEKMS) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSEKMS) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SSEKMS) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SSEKMS"} + if s.KeyId == nil { + invalidParams.Add(request.NewErrParamRequired("KeyId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKeyId sets the KeyId field's value. +func (s *SSEKMS) SetKeyId(v string) *SSEKMS { + s.KeyId = &v + return s +} + +// Specifies the use of SSE-S3 to encrypt delievered Inventory reports. +type SSES3 struct { + _ struct{} `locationName:"SSE-S3" type:"structure"` +} + +// String returns the string representation +func (s SSES3) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSES3) GoString() string { + return s.String() +} + +// Describes the parameters for Select job types. +type SelectParameters struct { + _ struct{} `type:"structure"` + + // The expression that is used to query the object. + // + // Expression is a required field + Expression *string `type:"string" required:"true"` + + // The type of the provided expression (e.g., SQL). + // + // ExpressionType is a required field + ExpressionType *string `type:"string" required:"true" enum:"ExpressionType"` + + // Describes the serialization format of the object. + // + // InputSerialization is a required field + InputSerialization *InputSerialization `type:"structure" required:"true"` + + // Describes how the results of the Select job are serialized. + // + // OutputSerialization is a required field + OutputSerialization *OutputSerialization `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SelectParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SelectParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *SelectParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SelectParameters"} + if s.Expression == nil { + invalidParams.Add(request.NewErrParamRequired("Expression")) + } + if s.ExpressionType == nil { + invalidParams.Add(request.NewErrParamRequired("ExpressionType")) + } + if s.InputSerialization == nil { + invalidParams.Add(request.NewErrParamRequired("InputSerialization")) + } + if s.OutputSerialization == nil { + invalidParams.Add(request.NewErrParamRequired("OutputSerialization")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpression sets the Expression field's value. +func (s *SelectParameters) SetExpression(v string) *SelectParameters { + s.Expression = &v + return s +} + +// SetExpressionType sets the ExpressionType field's value. +func (s *SelectParameters) SetExpressionType(v string) *SelectParameters { + s.ExpressionType = &v + return s +} + +// SetInputSerialization sets the InputSerialization field's value. +func (s *SelectParameters) SetInputSerialization(v *InputSerialization) *SelectParameters { + s.InputSerialization = v + return s +} + +// SetOutputSerialization sets the OutputSerialization field's value. +func (s *SelectParameters) SetOutputSerialization(v *OutputSerialization) *SelectParameters { + s.OutputSerialization = v + return s +} + +// Describes the default server-side encryption to apply to new objects in the +// bucket. If Put Object request does not specify any server-side encryption, +// this default encryption will be applied. +type ServerSideEncryptionByDefault struct { + _ struct{} `type:"structure"` + + // KMS master key ID to use for the default encryption. This parameter is allowed + // if SSEAlgorithm is aws:kms. + KMSMasterKeyID *string `type:"string"` + + // Server-side encryption algorithm to use for the default encryption. + // + // SSEAlgorithm is a required field + SSEAlgorithm *string `type:"string" required:"true" enum:"ServerSideEncryption"` +} + +// String returns the string representation +func (s ServerSideEncryptionByDefault) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerSideEncryptionByDefault) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ServerSideEncryptionByDefault) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServerSideEncryptionByDefault"} + if s.SSEAlgorithm == nil { + invalidParams.Add(request.NewErrParamRequired("SSEAlgorithm")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKMSMasterKeyID sets the KMSMasterKeyID field's value. +func (s *ServerSideEncryptionByDefault) SetKMSMasterKeyID(v string) *ServerSideEncryptionByDefault { + s.KMSMasterKeyID = &v + return s +} + +// SetSSEAlgorithm sets the SSEAlgorithm field's value. +func (s *ServerSideEncryptionByDefault) SetSSEAlgorithm(v string) *ServerSideEncryptionByDefault { + s.SSEAlgorithm = &v + return s +} + +// Container for server-side encryption configuration rules. Currently S3 supports +// one rule only. +type ServerSideEncryptionConfiguration struct { + _ struct{} `type:"structure"` + + // Container for information about a particular server-side encryption configuration + // rule. 
+ // + // Rules is a required field + Rules []*ServerSideEncryptionRule `locationName:"Rule" type:"list" flattened:"true" required:"true"` +} + +// String returns the string representation +func (s ServerSideEncryptionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerSideEncryptionConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ServerSideEncryptionConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServerSideEncryptionConfiguration"} + if s.Rules == nil { + invalidParams.Add(request.NewErrParamRequired("Rules")) + } + if s.Rules != nil { + for i, v := range s.Rules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Rules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRules sets the Rules field's value. +func (s *ServerSideEncryptionConfiguration) SetRules(v []*ServerSideEncryptionRule) *ServerSideEncryptionConfiguration { + s.Rules = v + return s +} + +// Container for information about a particular server-side encryption configuration +// rule. +type ServerSideEncryptionRule struct { + _ struct{} `type:"structure"` + + // Describes the default server-side encryption to apply to new objects in the + // bucket. If Put Object request does not specify any server-side encryption, + // this default encryption will be applied. + ApplyServerSideEncryptionByDefault *ServerSideEncryptionByDefault `type:"structure"` +} + +// String returns the string representation +func (s ServerSideEncryptionRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerSideEncryptionRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ServerSideEncryptionRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServerSideEncryptionRule"} + if s.ApplyServerSideEncryptionByDefault != nil { + if err := s.ApplyServerSideEncryptionByDefault.Validate(); err != nil { + invalidParams.AddNested("ApplyServerSideEncryptionByDefault", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyServerSideEncryptionByDefault sets the ApplyServerSideEncryptionByDefault field's value. +func (s *ServerSideEncryptionRule) SetApplyServerSideEncryptionByDefault(v *ServerSideEncryptionByDefault) *ServerSideEncryptionRule { + s.ApplyServerSideEncryptionByDefault = v + return s +} + +// Container for filters that define which source objects should be replicated. +type SourceSelectionCriteria struct { + _ struct{} `type:"structure"` + + // Container for filter information of selection of KMS Encrypted S3 objects. + SseKmsEncryptedObjects *SseKmsEncryptedObjects `type:"structure"` +} + +// String returns the string representation +func (s SourceSelectionCriteria) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SourceSelectionCriteria) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
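A sketch of how the ServerSideEncryptionConfiguration, ServerSideEncryptionRule, and ServerSideEncryptionByDefault types above feed a PutBucketEncryption call; the bucket name and KMS key alias are placeholders, with the same client setup as the earlier sketches:

// Assumes the same imports and *s3.S3 client as the earlier sketches.
func enableDefaultEncryption(svc *s3.S3) error {
	_, err := svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
		Bucket: aws.String("example-bucket"), // placeholder
		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
			// S3 currently accepts a single rule here.
			Rules: []*s3.ServerSideEncryptionRule{{
				ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
					SSEAlgorithm:   aws.String(s3.ServerSideEncryptionAwsKms),
					KMSMasterKeyID: aws.String("alias/example-key"), // placeholder KMS alias
				},
			}},
		},
	})
	return err
}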
+func (s *SourceSelectionCriteria) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SourceSelectionCriteria"} + if s.SseKmsEncryptedObjects != nil { + if err := s.SseKmsEncryptedObjects.Validate(); err != nil { + invalidParams.AddNested("SseKmsEncryptedObjects", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSseKmsEncryptedObjects sets the SseKmsEncryptedObjects field's value. +func (s *SourceSelectionCriteria) SetSseKmsEncryptedObjects(v *SseKmsEncryptedObjects) *SourceSelectionCriteria { + s.SseKmsEncryptedObjects = v + return s +} + +// Container for filter information of selection of KMS Encrypted S3 objects. +type SseKmsEncryptedObjects struct { + _ struct{} `type:"structure"` + + // The replication for KMS encrypted S3 objects is disabled if status is not + // Enabled. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"` +} + +// String returns the string representation +func (s SseKmsEncryptedObjects) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SseKmsEncryptedObjects) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SseKmsEncryptedObjects) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SseKmsEncryptedObjects"} + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStatus sets the Status field's value. +func (s *SseKmsEncryptedObjects) SetStatus(v string) *SseKmsEncryptedObjects { + s.Status = &v + return s +} + +type StorageClassAnalysis struct { + _ struct{} `type:"structure"` + + // A container used to describe how data related to the storage class analysis + // should be exported. + DataExport *StorageClassAnalysisDataExport `type:"structure"` +} + +// String returns the string representation +func (s StorageClassAnalysis) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StorageClassAnalysis) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StorageClassAnalysis) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StorageClassAnalysis"} + if s.DataExport != nil { + if err := s.DataExport.Validate(); err != nil { + invalidParams.AddNested("DataExport", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDataExport sets the DataExport field's value. +func (s *StorageClassAnalysis) SetDataExport(v *StorageClassAnalysisDataExport) *StorageClassAnalysis { + s.DataExport = v + return s +} + +type StorageClassAnalysisDataExport struct { + _ struct{} `type:"structure"` + + // The place to store the data for an analysis. + // + // Destination is a required field + Destination *AnalyticsExportDestination `type:"structure" required:"true"` + + // The version of the output schema to use when exporting data. Must be V_1. 
+ // + // OutputSchemaVersion is a required field + OutputSchemaVersion *string `type:"string" required:"true" enum:"StorageClassAnalysisSchemaVersion"` +} + +// String returns the string representation +func (s StorageClassAnalysisDataExport) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StorageClassAnalysisDataExport) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StorageClassAnalysisDataExport) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StorageClassAnalysisDataExport"} + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) + } + if s.OutputSchemaVersion == nil { + invalidParams.Add(request.NewErrParamRequired("OutputSchemaVersion")) + } + if s.Destination != nil { + if err := s.Destination.Validate(); err != nil { + invalidParams.AddNested("Destination", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestination sets the Destination field's value. +func (s *StorageClassAnalysisDataExport) SetDestination(v *AnalyticsExportDestination) *StorageClassAnalysisDataExport { + s.Destination = v + return s +} + +// SetOutputSchemaVersion sets the OutputSchemaVersion field's value. +func (s *StorageClassAnalysisDataExport) SetOutputSchemaVersion(v string) *StorageClassAnalysisDataExport { + s.OutputSchemaVersion = &v + return s +} + +type Tag struct { + _ struct{} `type:"structure"` + + // Name of the tag. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // Value of the tag. + // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type Tagging struct { + _ struct{} `type:"structure"` + + // TagSet is a required field + TagSet []*Tag `locationNameList:"Tag" type:"list" required:"true"` +} + +// String returns the string representation +func (s Tagging) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tagging) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *Tagging) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tagging"} + if s.TagSet == nil { + invalidParams.Add(request.NewErrParamRequired("TagSet")) + } + if s.TagSet != nil { + for i, v := range s.TagSet { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TagSet", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTagSet sets the TagSet field's value. +func (s *Tagging) SetTagSet(v []*Tag) *Tagging { + s.TagSet = v + return s +} + +type TargetGrant struct { + _ struct{} `type:"structure"` + + Grantee *Grantee `type:"structure" xmlPrefix:"xsi" xmlURI:"http://www.w3.org/2001/XMLSchema-instance"` + + // Logging permissions assigned to the Grantee for the bucket. + Permission *string `type:"string" enum:"BucketLogsPermission"` +} + +// String returns the string representation +func (s TargetGrant) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetGrant) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TargetGrant) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TargetGrant"} + if s.Grantee != nil { + if err := s.Grantee.Validate(); err != nil { + invalidParams.AddNested("Grantee", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGrantee sets the Grantee field's value. +func (s *TargetGrant) SetGrantee(v *Grantee) *TargetGrant { + s.Grantee = v + return s +} + +// SetPermission sets the Permission field's value. +func (s *TargetGrant) SetPermission(v string) *TargetGrant { + s.Permission = &v + return s +} + +// Container for specifying the configuration when you want Amazon S3 to publish +// events to an Amazon Simple Notification Service (Amazon SNS) topic. +type TopicConfiguration struct { + _ struct{} `type:"structure"` + + // Events is a required field + Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"` + + // Container for object key name filtering rules. For information about key + // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + Filter *NotificationConfigurationFilter `type:"structure"` + + // Optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + // Amazon SNS topic ARN to which Amazon S3 will publish a message when it detects + // events of specified type. + // + // TopicArn is a required field + TopicArn *string `locationName:"Topic" type:"string" required:"true"` +} + +// String returns the string representation +func (s TopicConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TopicConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
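A short sketch tying the Tag and Tagging types above back to the PutObjectTaggingInput defined earlier in this hunk; the bucket, key, and tag values are placeholders, client setup as before:

// Assumes the same imports and *s3.S3 client as the earlier sketches.
func tagObject(svc *s3.S3) error {
	_, err := svc.PutObjectTagging(&s3.PutObjectTaggingInput{
		Bucket: aws.String("example-bucket"),     // placeholder
		Key:    aws.String("examples/hello.txt"), // placeholder
		Tagging: &s3.Tagging{
			TagSet: []*s3.Tag{
				{Key: aws.String("project"), Value: aws.String("aws-servicebroker")},
				{Key: aws.String("env"), Value: aws.String("dev")},
			},
		},
	})
	return err
}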
+func (s *TopicConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TopicConfiguration"} + if s.Events == nil { + invalidParams.Add(request.NewErrParamRequired("Events")) + } + if s.TopicArn == nil { + invalidParams.Add(request.NewErrParamRequired("TopicArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEvents sets the Events field's value. +func (s *TopicConfiguration) SetEvents(v []*string) *TopicConfiguration { + s.Events = v + return s +} + +// SetFilter sets the Filter field's value. +func (s *TopicConfiguration) SetFilter(v *NotificationConfigurationFilter) *TopicConfiguration { + s.Filter = v + return s +} + +// SetId sets the Id field's value. +func (s *TopicConfiguration) SetId(v string) *TopicConfiguration { + s.Id = &v + return s +} + +// SetTopicArn sets the TopicArn field's value. +func (s *TopicConfiguration) SetTopicArn(v string) *TopicConfiguration { + s.TopicArn = &v + return s +} + +type TopicConfigurationDeprecated struct { + _ struct{} `type:"structure"` + + // Bucket event for which to send notifications. + Event *string `deprecated:"true" type:"string" enum:"Event"` + + Events []*string `locationName:"Event" type:"list" flattened:"true"` + + // Optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + // Amazon SNS topic to which Amazon S3 will publish a message to report the + // specified events for the bucket. + Topic *string `type:"string"` +} + +// String returns the string representation +func (s TopicConfigurationDeprecated) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TopicConfigurationDeprecated) GoString() string { + return s.String() +} + +// SetEvent sets the Event field's value. +func (s *TopicConfigurationDeprecated) SetEvent(v string) *TopicConfigurationDeprecated { + s.Event = &v + return s +} + +// SetEvents sets the Events field's value. +func (s *TopicConfigurationDeprecated) SetEvents(v []*string) *TopicConfigurationDeprecated { + s.Events = v + return s +} + +// SetId sets the Id field's value. +func (s *TopicConfigurationDeprecated) SetId(v string) *TopicConfigurationDeprecated { + s.Id = &v + return s +} + +// SetTopic sets the Topic field's value. +func (s *TopicConfigurationDeprecated) SetTopic(v string) *TopicConfigurationDeprecated { + s.Topic = &v + return s +} + +type Transition struct { + _ struct{} `type:"structure"` + + // Indicates at what date the object is to be moved or deleted. Should be in + // GMT ISO 8601 Format. + Date *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // Indicates the lifetime, in days, of the objects that are subject to the rule. + // The value must be a non-zero positive integer. + Days *int64 `type:"integer"` + + // The class of storage used to store the object. + StorageClass *string `type:"string" enum:"TransitionStorageClass"` +} + +// String returns the string representation +func (s Transition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Transition) GoString() string { + return s.String() +} + +// SetDate sets the Date field's value. +func (s *Transition) SetDate(v time.Time) *Transition { + s.Date = &v + return s +} + +// SetDays sets the Days field's value. 
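The QueueConfiguration and TopicConfiguration types in this hunk are consumed through a bucket notification configuration; below is a hedged sketch with a placeholder bucket and SNS topic ARN, using a literal event name rather than the generated enum constants, under the same client assumptions as above:

// Assumes the same imports and *s3.S3 client as the earlier sketches.
func notifyOnObjectCreated(svc *s3.S3) error {
	_, err := svc.PutBucketNotificationConfiguration(&s3.PutBucketNotificationConfigurationInput{
		Bucket: aws.String("example-bucket"), // placeholder
		NotificationConfiguration: &s3.NotificationConfiguration{
			TopicConfigurations: []*s3.TopicConfiguration{{
				Id:       aws.String("object-created-to-sns"),
				Events:   []*string{aws.String("s3:ObjectCreated:*")},
				TopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:example-topic"), // placeholder ARN
			}},
		},
	})
	return err
}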
+func (s *Transition) SetDays(v int64) *Transition { + s.Days = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *Transition) SetStorageClass(v string) *Transition { + s.StorageClass = &v + return s +} + +type UploadPartCopyInput struct { + _ struct{} `type:"structure"` + + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The name of the source bucket and key name of the source object, separated + // by a slash (/). Must be URL-encoded. + // + // CopySource is a required field + CopySource *string `location:"header" locationName:"x-amz-copy-source" type:"string" required:"true"` + + // Copies the object if its entity tag (ETag) matches the specified tag. + CopySourceIfMatch *string `location:"header" locationName:"x-amz-copy-source-if-match" type:"string"` + + // Copies the object if it has been modified since the specified time. + CopySourceIfModifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-modified-since" type:"timestamp" timestampFormat:"rfc822"` + + // Copies the object if its entity tag (ETag) is different than the specified + // ETag. + CopySourceIfNoneMatch *string `location:"header" locationName:"x-amz-copy-source-if-none-match" type:"string"` + + // Copies the object if it hasn't been modified since the specified time. + CopySourceIfUnmodifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-unmodified-since" type:"timestamp" timestampFormat:"rfc822"` + + // The range of bytes to copy from the source object. The range value must use + // the form bytes=first-last, where the first and last are the zero-based byte + // offsets to copy. For example, bytes=0-9 indicates that you want to copy the + // first ten bytes of the source. You can copy a range only if the source object + // is greater than 5 GB. + CopySourceRange *string `location:"header" locationName:"x-amz-copy-source-range" type:"string"` + + // Specifies the algorithm to use when decrypting the source object (e.g., AES256). + CopySourceSSECustomerAlgorithm *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use to decrypt + // the source object. The encryption key provided in this header must be one + // that was used when the source object was created. + CopySourceSSECustomerKey *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + CopySourceSSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-key-MD5" type:"string"` + + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Part number of part being copied. This is a positive integer between 1 and + // 10,000. + // + // PartNumber is a required field + PartNumber *int64 `location:"querystring" locationName:"partNumber" type:"integer" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. 
+ // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Specifies the algorithm to use to when encrypting the object (e.g., AES256). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting + // data. This value is used to store the object and then it is discarded; Amazon + // does not store the encryption key. The key must be appropriate for use with + // the algorithm specified in the x-amz-server-side​-encryption​-customer-algorithm + // header. This must be the same encryption key specified in the initiate multipart + // upload request. + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // Upload ID identifying the multipart upload whose part is being copied. + // + // UploadId is a required field + UploadId *string `location:"querystring" locationName:"uploadId" type:"string" required:"true"` +} + +// String returns the string representation +func (s UploadPartCopyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadPartCopyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UploadPartCopyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UploadPartCopyInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.CopySource == nil { + invalidParams.Add(request.NewErrParamRequired("CopySource")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.PartNumber == nil { + invalidParams.Add(request.NewErrParamRequired("PartNumber")) + } + if s.UploadId == nil { + invalidParams.Add(request.NewErrParamRequired("UploadId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *UploadPartCopyInput) SetBucket(v string) *UploadPartCopyInput { + s.Bucket = &v + return s +} + +func (s *UploadPartCopyInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetCopySource sets the CopySource field's value. +func (s *UploadPartCopyInput) SetCopySource(v string) *UploadPartCopyInput { + s.CopySource = &v + return s +} + +// SetCopySourceIfMatch sets the CopySourceIfMatch field's value. +func (s *UploadPartCopyInput) SetCopySourceIfMatch(v string) *UploadPartCopyInput { + s.CopySourceIfMatch = &v + return s +} + +// SetCopySourceIfModifiedSince sets the CopySourceIfModifiedSince field's value. 
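+//
+// The value maps to the x-amz-copy-source-if-modified-since header; for
+// example (the timestamp is illustrative and in is assumed to be an
+// *UploadPartCopyInput), to copy the part only if the source object has
+// changed since a known point in time:
+//
+//    in.SetCopySourceIfModifiedSince(time.Date(2018, time.January, 1, 0, 0, 0, 0, time.UTC))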
+func (s *UploadPartCopyInput) SetCopySourceIfModifiedSince(v time.Time) *UploadPartCopyInput { + s.CopySourceIfModifiedSince = &v + return s +} + +// SetCopySourceIfNoneMatch sets the CopySourceIfNoneMatch field's value. +func (s *UploadPartCopyInput) SetCopySourceIfNoneMatch(v string) *UploadPartCopyInput { + s.CopySourceIfNoneMatch = &v + return s +} + +// SetCopySourceIfUnmodifiedSince sets the CopySourceIfUnmodifiedSince field's value. +func (s *UploadPartCopyInput) SetCopySourceIfUnmodifiedSince(v time.Time) *UploadPartCopyInput { + s.CopySourceIfUnmodifiedSince = &v + return s +} + +// SetCopySourceRange sets the CopySourceRange field's value. +func (s *UploadPartCopyInput) SetCopySourceRange(v string) *UploadPartCopyInput { + s.CopySourceRange = &v + return s +} + +// SetCopySourceSSECustomerAlgorithm sets the CopySourceSSECustomerAlgorithm field's value. +func (s *UploadPartCopyInput) SetCopySourceSSECustomerAlgorithm(v string) *UploadPartCopyInput { + s.CopySourceSSECustomerAlgorithm = &v + return s +} + +// SetCopySourceSSECustomerKey sets the CopySourceSSECustomerKey field's value. +func (s *UploadPartCopyInput) SetCopySourceSSECustomerKey(v string) *UploadPartCopyInput { + s.CopySourceSSECustomerKey = &v + return s +} + +func (s *UploadPartCopyInput) getCopySourceSSECustomerKey() (v string) { + if s.CopySourceSSECustomerKey == nil { + return v + } + return *s.CopySourceSSECustomerKey +} + +// SetCopySourceSSECustomerKeyMD5 sets the CopySourceSSECustomerKeyMD5 field's value. +func (s *UploadPartCopyInput) SetCopySourceSSECustomerKeyMD5(v string) *UploadPartCopyInput { + s.CopySourceSSECustomerKeyMD5 = &v + return s +} + +// SetKey sets the Key field's value. +func (s *UploadPartCopyInput) SetKey(v string) *UploadPartCopyInput { + s.Key = &v + return s +} + +// SetPartNumber sets the PartNumber field's value. +func (s *UploadPartCopyInput) SetPartNumber(v int64) *UploadPartCopyInput { + s.PartNumber = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *UploadPartCopyInput) SetRequestPayer(v string) *UploadPartCopyInput { + s.RequestPayer = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *UploadPartCopyInput) SetSSECustomerAlgorithm(v string) *UploadPartCopyInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *UploadPartCopyInput) SetSSECustomerKey(v string) *UploadPartCopyInput { + s.SSECustomerKey = &v + return s +} + +func (s *UploadPartCopyInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *UploadPartCopyInput) SetSSECustomerKeyMD5(v string) *UploadPartCopyInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *UploadPartCopyInput) SetUploadId(v string) *UploadPartCopyInput { + s.UploadId = &v + return s +} + +type UploadPartCopyOutput struct { + _ struct{} `type:"structure" payload:"CopyPartResult"` + + CopyPartResult *CopyPartResult `type:"structure"` + + // The version of the source object that was copied, if you have enabled versioning + // on the source bucket. + CopySourceVersionId *string `location:"header" locationName:"x-amz-copy-source-version-id" type:"string"` + + // If present, indicates that the requester was successfully charged for the + // request. 
+ RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). + ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` +} + +// String returns the string representation +func (s UploadPartCopyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadPartCopyOutput) GoString() string { + return s.String() +} + +// SetCopyPartResult sets the CopyPartResult field's value. +func (s *UploadPartCopyOutput) SetCopyPartResult(v *CopyPartResult) *UploadPartCopyOutput { + s.CopyPartResult = v + return s +} + +// SetCopySourceVersionId sets the CopySourceVersionId field's value. +func (s *UploadPartCopyOutput) SetCopySourceVersionId(v string) *UploadPartCopyOutput { + s.CopySourceVersionId = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *UploadPartCopyOutput) SetRequestCharged(v string) *UploadPartCopyOutput { + s.RequestCharged = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *UploadPartCopyOutput) SetSSECustomerAlgorithm(v string) *UploadPartCopyOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *UploadPartCopyOutput) SetSSECustomerKeyMD5(v string) *UploadPartCopyOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *UploadPartCopyOutput) SetSSEKMSKeyId(v string) *UploadPartCopyOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *UploadPartCopyOutput) SetServerSideEncryption(v string) *UploadPartCopyOutput { + s.ServerSideEncryption = &v + return s +} + +type UploadPartInput struct { + _ struct{} `type:"structure" payload:"Body"` + + // Object data. + Body io.ReadSeeker `type:"blob"` + + // Name of the bucket to which the multipart upload was initiated. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Size of the body in bytes. This parameter is useful when the size of the + // body cannot be determined automatically. + ContentLength *int64 `location:"header" locationName:"Content-Length" type:"long"` + + // The base64-encoded 128-bit MD5 digest of the part data. 
+ ContentMD5 *string `location:"header" locationName:"Content-MD5" type:"string"` + + // Object key for which the multipart upload was initiated. + // + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Part number of part being uploaded. This is a positive integer between 1 + // and 10,000. + // + // PartNumber is a required field + PartNumber *int64 `location:"querystring" locationName:"partNumber" type:"integer" required:"true"` + + // Confirms that the requester knows that she or he will be charged for the + // request. Bucket owners need not specify this parameter in their requests. + // Documentation on downloading objects from requester pays buckets can be found + // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html + RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + + // Specifies the algorithm to use to when encrypting the object (e.g., AES256). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // Specifies the customer-provided encryption key for Amazon S3 to use in encrypting + // data. This value is used to store the object and then it is discarded; Amazon + // does not store the encryption key. The key must be appropriate for use with + // the algorithm specified in the x-amz-server-side​-encryption​-customer-algorithm + // header. This must be the same encryption key specified in the initiate multipart + // upload request. + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. + // Amazon S3 uses this header for a message integrity check to ensure the encryption + // key was transmitted without error. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // Upload ID identifying the multipart upload whose part is being uploaded. + // + // UploadId is a required field + UploadId *string `location:"querystring" locationName:"uploadId" type:"string" required:"true"` +} + +// String returns the string representation +func (s UploadPartInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadPartInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UploadPartInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UploadPartInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.PartNumber == nil { + invalidParams.Add(request.NewErrParamRequired("PartNumber")) + } + if s.UploadId == nil { + invalidParams.Add(request.NewErrParamRequired("UploadId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBody sets the Body field's value. +func (s *UploadPartInput) SetBody(v io.ReadSeeker) *UploadPartInput { + s.Body = v + return s +} + +// SetBucket sets the Bucket field's value. 
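+//
+// The setters return the receiver so they can be chained; as a sketch (the
+// bucket, key, upload ID, and partData byte slice are placeholders), a minimal
+// part upload input can be assembled and checked as:
+//
+//    in := (&UploadPartInput{}).
+//        SetBucket("my-bucket").
+//        SetKey("my-object").
+//        SetUploadId("example-upload-id").
+//        SetPartNumber(1).
+//        SetBody(bytes.NewReader(partData))
+//    if err := in.Validate(); err != nil {
+//        // a required field is missing or too short
+//    }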
+func (s *UploadPartInput) SetBucket(v string) *UploadPartInput { + s.Bucket = &v + return s +} + +func (s *UploadPartInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetContentLength sets the ContentLength field's value. +func (s *UploadPartInput) SetContentLength(v int64) *UploadPartInput { + s.ContentLength = &v + return s +} + +// SetContentMD5 sets the ContentMD5 field's value. +func (s *UploadPartInput) SetContentMD5(v string) *UploadPartInput { + s.ContentMD5 = &v + return s +} + +// SetKey sets the Key field's value. +func (s *UploadPartInput) SetKey(v string) *UploadPartInput { + s.Key = &v + return s +} + +// SetPartNumber sets the PartNumber field's value. +func (s *UploadPartInput) SetPartNumber(v int64) *UploadPartInput { + s.PartNumber = &v + return s +} + +// SetRequestPayer sets the RequestPayer field's value. +func (s *UploadPartInput) SetRequestPayer(v string) *UploadPartInput { + s.RequestPayer = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *UploadPartInput) SetSSECustomerAlgorithm(v string) *UploadPartInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *UploadPartInput) SetSSECustomerKey(v string) *UploadPartInput { + s.SSECustomerKey = &v + return s +} + +func (s *UploadPartInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *UploadPartInput) SetSSECustomerKeyMD5(v string) *UploadPartInput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetUploadId sets the UploadId field's value. +func (s *UploadPartInput) SetUploadId(v string) *UploadPartInput { + s.UploadId = &v + return s +} + +type UploadPartOutput struct { + _ struct{} `type:"structure"` + + // Entity tag for the uploaded object. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // If present, indicates that the requester was successfully charged for the + // request. + RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header confirming the encryption algorithm + // used. + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // If server-side encryption with a customer-provided encryption key was requested, + // the response will include this header to provide round trip message integrity + // verification of the customer-provided encryption key. + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` + + // If present, specifies the ID of the AWS Key Management Service (KMS) master + // encryption key that was used for the object. + SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string"` + + // The Server-side encryption algorithm used when storing this object in S3 + // (e.g., AES256, aws:kms). 
+ ServerSideEncryption *string `location:"header" locationName:"x-amz-server-side-encryption" type:"string" enum:"ServerSideEncryption"` +} + +// String returns the string representation +func (s UploadPartOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UploadPartOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *UploadPartOutput) SetETag(v string) *UploadPartOutput { + s.ETag = &v + return s +} + +// SetRequestCharged sets the RequestCharged field's value. +func (s *UploadPartOutput) SetRequestCharged(v string) *UploadPartOutput { + s.RequestCharged = &v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *UploadPartOutput) SetSSECustomerAlgorithm(v string) *UploadPartOutput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *UploadPartOutput) SetSSECustomerKeyMD5(v string) *UploadPartOutput { + s.SSECustomerKeyMD5 = &v + return s +} + +// SetSSEKMSKeyId sets the SSEKMSKeyId field's value. +func (s *UploadPartOutput) SetSSEKMSKeyId(v string) *UploadPartOutput { + s.SSEKMSKeyId = &v + return s +} + +// SetServerSideEncryption sets the ServerSideEncryption field's value. +func (s *UploadPartOutput) SetServerSideEncryption(v string) *UploadPartOutput { + s.ServerSideEncryption = &v + return s +} + +type VersioningConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies whether MFA delete is enabled in the bucket versioning configuration. + // This element is only returned if the bucket has been configured with MFA + // delete. If the bucket has never been so configured, this element is not returned. + MFADelete *string `locationName:"MfaDelete" type:"string" enum:"MFADelete"` + + // The versioning state of the bucket. + Status *string `type:"string" enum:"BucketVersioningStatus"` +} + +// String returns the string representation +func (s VersioningConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VersioningConfiguration) GoString() string { + return s.String() +} + +// SetMFADelete sets the MFADelete field's value. +func (s *VersioningConfiguration) SetMFADelete(v string) *VersioningConfiguration { + s.MFADelete = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *VersioningConfiguration) SetStatus(v string) *VersioningConfiguration { + s.Status = &v + return s +} + +type WebsiteConfiguration struct { + _ struct{} `type:"structure"` + + ErrorDocument *ErrorDocument `type:"structure"` + + IndexDocument *IndexDocument `type:"structure"` + + RedirectAllRequestsTo *RedirectAllRequestsTo `type:"structure"` + + RoutingRules []*RoutingRule `locationNameList:"RoutingRule" type:"list"` +} + +// String returns the string representation +func (s WebsiteConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WebsiteConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
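+//
+// As a sketch (the document names are placeholders, and IndexDocument and
+// ErrorDocument are assumed to carry their usual Suffix and Key fields), a
+// static-website configuration could be validated as:
+//
+//    wc := (&WebsiteConfiguration{}).
+//        SetIndexDocument(&IndexDocument{Suffix: aws.String("index.html")}).
+//        SetErrorDocument(&ErrorDocument{Key: aws.String("error.html")})
+//    if err := wc.Validate(); err != nil {
+//        // nested validation failed
+//    }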
+func (s *WebsiteConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "WebsiteConfiguration"} + if s.ErrorDocument != nil { + if err := s.ErrorDocument.Validate(); err != nil { + invalidParams.AddNested("ErrorDocument", err.(request.ErrInvalidParams)) + } + } + if s.IndexDocument != nil { + if err := s.IndexDocument.Validate(); err != nil { + invalidParams.AddNested("IndexDocument", err.(request.ErrInvalidParams)) + } + } + if s.RedirectAllRequestsTo != nil { + if err := s.RedirectAllRequestsTo.Validate(); err != nil { + invalidParams.AddNested("RedirectAllRequestsTo", err.(request.ErrInvalidParams)) + } + } + if s.RoutingRules != nil { + for i, v := range s.RoutingRules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RoutingRules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetErrorDocument sets the ErrorDocument field's value. +func (s *WebsiteConfiguration) SetErrorDocument(v *ErrorDocument) *WebsiteConfiguration { + s.ErrorDocument = v + return s +} + +// SetIndexDocument sets the IndexDocument field's value. +func (s *WebsiteConfiguration) SetIndexDocument(v *IndexDocument) *WebsiteConfiguration { + s.IndexDocument = v + return s +} + +// SetRedirectAllRequestsTo sets the RedirectAllRequestsTo field's value. +func (s *WebsiteConfiguration) SetRedirectAllRequestsTo(v *RedirectAllRequestsTo) *WebsiteConfiguration { + s.RedirectAllRequestsTo = v + return s +} + +// SetRoutingRules sets the RoutingRules field's value. +func (s *WebsiteConfiguration) SetRoutingRules(v []*RoutingRule) *WebsiteConfiguration { + s.RoutingRules = v + return s +} + +const ( + // AnalyticsS3ExportFileFormatCsv is a AnalyticsS3ExportFileFormat enum value + AnalyticsS3ExportFileFormatCsv = "CSV" +) + +const ( + // BucketAccelerateStatusEnabled is a BucketAccelerateStatus enum value + BucketAccelerateStatusEnabled = "Enabled" + + // BucketAccelerateStatusSuspended is a BucketAccelerateStatus enum value + BucketAccelerateStatusSuspended = "Suspended" +) + +const ( + // BucketCannedACLPrivate is a BucketCannedACL enum value + BucketCannedACLPrivate = "private" + + // BucketCannedACLPublicRead is a BucketCannedACL enum value + BucketCannedACLPublicRead = "public-read" + + // BucketCannedACLPublicReadWrite is a BucketCannedACL enum value + BucketCannedACLPublicReadWrite = "public-read-write" + + // BucketCannedACLAuthenticatedRead is a BucketCannedACL enum value + BucketCannedACLAuthenticatedRead = "authenticated-read" +) + +const ( + // BucketLocationConstraintEu is a BucketLocationConstraint enum value + BucketLocationConstraintEu = "EU" + + // BucketLocationConstraintEuWest1 is a BucketLocationConstraint enum value + BucketLocationConstraintEuWest1 = "eu-west-1" + + // BucketLocationConstraintUsWest1 is a BucketLocationConstraint enum value + BucketLocationConstraintUsWest1 = "us-west-1" + + // BucketLocationConstraintUsWest2 is a BucketLocationConstraint enum value + BucketLocationConstraintUsWest2 = "us-west-2" + + // BucketLocationConstraintApSouth1 is a BucketLocationConstraint enum value + BucketLocationConstraintApSouth1 = "ap-south-1" + + // BucketLocationConstraintApSoutheast1 is a BucketLocationConstraint enum value + BucketLocationConstraintApSoutheast1 = "ap-southeast-1" + + // BucketLocationConstraintApSoutheast2 is a BucketLocationConstraint enum value + BucketLocationConstraintApSoutheast2 = "ap-southeast-2" + + 
// BucketLocationConstraintApNortheast1 is a BucketLocationConstraint enum value + BucketLocationConstraintApNortheast1 = "ap-northeast-1" + + // BucketLocationConstraintSaEast1 is a BucketLocationConstraint enum value + BucketLocationConstraintSaEast1 = "sa-east-1" + + // BucketLocationConstraintCnNorth1 is a BucketLocationConstraint enum value + BucketLocationConstraintCnNorth1 = "cn-north-1" + + // BucketLocationConstraintEuCentral1 is a BucketLocationConstraint enum value + BucketLocationConstraintEuCentral1 = "eu-central-1" +) + +const ( + // BucketLogsPermissionFullControl is a BucketLogsPermission enum value + BucketLogsPermissionFullControl = "FULL_CONTROL" + + // BucketLogsPermissionRead is a BucketLogsPermission enum value + BucketLogsPermissionRead = "READ" + + // BucketLogsPermissionWrite is a BucketLogsPermission enum value + BucketLogsPermissionWrite = "WRITE" +) + +const ( + // BucketVersioningStatusEnabled is a BucketVersioningStatus enum value + BucketVersioningStatusEnabled = "Enabled" + + // BucketVersioningStatusSuspended is a BucketVersioningStatus enum value + BucketVersioningStatusSuspended = "Suspended" +) + +const ( + // CompressionTypeNone is a CompressionType enum value + CompressionTypeNone = "NONE" + + // CompressionTypeGzip is a CompressionType enum value + CompressionTypeGzip = "GZIP" +) + +// Requests Amazon S3 to encode the object keys in the response and specifies +// the encoding method to use. An object key may contain any Unicode character; +// however, XML 1.0 parser cannot parse some characters, such as characters +// with an ASCII value from 0 to 10. For characters that are not supported in +// XML 1.0, you can add this parameter to request that Amazon S3 encode the +// keys in the response. +const ( + // EncodingTypeUrl is a EncodingType enum value + EncodingTypeUrl = "url" +) + +// Bucket event for which to send notifications. 
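+//
+// These constants populate the Events list of a notification configuration;
+// for example (illustrative), subscribing to every object-created event:
+//
+//    events := []*string{aws.String(EventS3ObjectCreated)}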
+const ( + // EventS3ReducedRedundancyLostObject is a Event enum value + EventS3ReducedRedundancyLostObject = "s3:ReducedRedundancyLostObject" + + // EventS3ObjectCreated is a Event enum value + EventS3ObjectCreated = "s3:ObjectCreated:*" + + // EventS3ObjectCreatedPut is a Event enum value + EventS3ObjectCreatedPut = "s3:ObjectCreated:Put" + + // EventS3ObjectCreatedPost is a Event enum value + EventS3ObjectCreatedPost = "s3:ObjectCreated:Post" + + // EventS3ObjectCreatedCopy is a Event enum value + EventS3ObjectCreatedCopy = "s3:ObjectCreated:Copy" + + // EventS3ObjectCreatedCompleteMultipartUpload is a Event enum value + EventS3ObjectCreatedCompleteMultipartUpload = "s3:ObjectCreated:CompleteMultipartUpload" + + // EventS3ObjectRemoved is a Event enum value + EventS3ObjectRemoved = "s3:ObjectRemoved:*" + + // EventS3ObjectRemovedDelete is a Event enum value + EventS3ObjectRemovedDelete = "s3:ObjectRemoved:Delete" + + // EventS3ObjectRemovedDeleteMarkerCreated is a Event enum value + EventS3ObjectRemovedDeleteMarkerCreated = "s3:ObjectRemoved:DeleteMarkerCreated" +) + +const ( + // ExpirationStatusEnabled is a ExpirationStatus enum value + ExpirationStatusEnabled = "Enabled" + + // ExpirationStatusDisabled is a ExpirationStatus enum value + ExpirationStatusDisabled = "Disabled" +) + +const ( + // ExpressionTypeSql is a ExpressionType enum value + ExpressionTypeSql = "SQL" +) + +const ( + // FileHeaderInfoUse is a FileHeaderInfo enum value + FileHeaderInfoUse = "USE" + + // FileHeaderInfoIgnore is a FileHeaderInfo enum value + FileHeaderInfoIgnore = "IGNORE" + + // FileHeaderInfoNone is a FileHeaderInfo enum value + FileHeaderInfoNone = "NONE" +) + +const ( + // FilterRuleNamePrefix is a FilterRuleName enum value + FilterRuleNamePrefix = "prefix" + + // FilterRuleNameSuffix is a FilterRuleName enum value + FilterRuleNameSuffix = "suffix" +) + +const ( + // InventoryFormatCsv is a InventoryFormat enum value + InventoryFormatCsv = "CSV" + + // InventoryFormatOrc is a InventoryFormat enum value + InventoryFormatOrc = "ORC" +) + +const ( + // InventoryFrequencyDaily is a InventoryFrequency enum value + InventoryFrequencyDaily = "Daily" + + // InventoryFrequencyWeekly is a InventoryFrequency enum value + InventoryFrequencyWeekly = "Weekly" +) + +const ( + // InventoryIncludedObjectVersionsAll is a InventoryIncludedObjectVersions enum value + InventoryIncludedObjectVersionsAll = "All" + + // InventoryIncludedObjectVersionsCurrent is a InventoryIncludedObjectVersions enum value + InventoryIncludedObjectVersionsCurrent = "Current" +) + +const ( + // InventoryOptionalFieldSize is a InventoryOptionalField enum value + InventoryOptionalFieldSize = "Size" + + // InventoryOptionalFieldLastModifiedDate is a InventoryOptionalField enum value + InventoryOptionalFieldLastModifiedDate = "LastModifiedDate" + + // InventoryOptionalFieldStorageClass is a InventoryOptionalField enum value + InventoryOptionalFieldStorageClass = "StorageClass" + + // InventoryOptionalFieldEtag is a InventoryOptionalField enum value + InventoryOptionalFieldEtag = "ETag" + + // InventoryOptionalFieldIsMultipartUploaded is a InventoryOptionalField enum value + InventoryOptionalFieldIsMultipartUploaded = "IsMultipartUploaded" + + // InventoryOptionalFieldReplicationStatus is a InventoryOptionalField enum value + InventoryOptionalFieldReplicationStatus = "ReplicationStatus" + + // InventoryOptionalFieldEncryptionStatus is a InventoryOptionalField enum value + InventoryOptionalFieldEncryptionStatus = "EncryptionStatus" +) + +const ( + 
// JSONTypeDocument is a JSONType enum value + JSONTypeDocument = "DOCUMENT" + + // JSONTypeLines is a JSONType enum value + JSONTypeLines = "LINES" +) + +const ( + // MFADeleteEnabled is a MFADelete enum value + MFADeleteEnabled = "Enabled" + + // MFADeleteDisabled is a MFADelete enum value + MFADeleteDisabled = "Disabled" +) + +const ( + // MFADeleteStatusEnabled is a MFADeleteStatus enum value + MFADeleteStatusEnabled = "Enabled" + + // MFADeleteStatusDisabled is a MFADeleteStatus enum value + MFADeleteStatusDisabled = "Disabled" +) + +const ( + // MetadataDirectiveCopy is a MetadataDirective enum value + MetadataDirectiveCopy = "COPY" + + // MetadataDirectiveReplace is a MetadataDirective enum value + MetadataDirectiveReplace = "REPLACE" +) + +const ( + // ObjectCannedACLPrivate is a ObjectCannedACL enum value + ObjectCannedACLPrivate = "private" + + // ObjectCannedACLPublicRead is a ObjectCannedACL enum value + ObjectCannedACLPublicRead = "public-read" + + // ObjectCannedACLPublicReadWrite is a ObjectCannedACL enum value + ObjectCannedACLPublicReadWrite = "public-read-write" + + // ObjectCannedACLAuthenticatedRead is a ObjectCannedACL enum value + ObjectCannedACLAuthenticatedRead = "authenticated-read" + + // ObjectCannedACLAwsExecRead is a ObjectCannedACL enum value + ObjectCannedACLAwsExecRead = "aws-exec-read" + + // ObjectCannedACLBucketOwnerRead is a ObjectCannedACL enum value + ObjectCannedACLBucketOwnerRead = "bucket-owner-read" + + // ObjectCannedACLBucketOwnerFullControl is a ObjectCannedACL enum value + ObjectCannedACLBucketOwnerFullControl = "bucket-owner-full-control" +) + +const ( + // ObjectStorageClassStandard is a ObjectStorageClass enum value + ObjectStorageClassStandard = "STANDARD" + + // ObjectStorageClassReducedRedundancy is a ObjectStorageClass enum value + ObjectStorageClassReducedRedundancy = "REDUCED_REDUNDANCY" + + // ObjectStorageClassGlacier is a ObjectStorageClass enum value + ObjectStorageClassGlacier = "GLACIER" + + // ObjectStorageClassStandardIa is a ObjectStorageClass enum value + ObjectStorageClassStandardIa = "STANDARD_IA" + + // ObjectStorageClassOnezoneIa is a ObjectStorageClass enum value + ObjectStorageClassOnezoneIa = "ONEZONE_IA" +) + +const ( + // ObjectVersionStorageClassStandard is a ObjectVersionStorageClass enum value + ObjectVersionStorageClassStandard = "STANDARD" +) + +const ( + // OwnerOverrideDestination is a OwnerOverride enum value + OwnerOverrideDestination = "Destination" +) + +const ( + // PayerRequester is a Payer enum value + PayerRequester = "Requester" + + // PayerBucketOwner is a Payer enum value + PayerBucketOwner = "BucketOwner" +) + +const ( + // PermissionFullControl is a Permission enum value + PermissionFullControl = "FULL_CONTROL" + + // PermissionWrite is a Permission enum value + PermissionWrite = "WRITE" + + // PermissionWriteAcp is a Permission enum value + PermissionWriteAcp = "WRITE_ACP" + + // PermissionRead is a Permission enum value + PermissionRead = "READ" + + // PermissionReadAcp is a Permission enum value + PermissionReadAcp = "READ_ACP" +) + +const ( + // ProtocolHttp is a Protocol enum value + ProtocolHttp = "http" + + // ProtocolHttps is a Protocol enum value + ProtocolHttps = "https" +) + +const ( + // QuoteFieldsAlways is a QuoteFields enum value + QuoteFieldsAlways = "ALWAYS" + + // QuoteFieldsAsneeded is a QuoteFields enum value + QuoteFieldsAsneeded = "ASNEEDED" +) + +const ( + // ReplicationRuleStatusEnabled is a ReplicationRuleStatus enum value + ReplicationRuleStatusEnabled = "Enabled" + + // 
ReplicationRuleStatusDisabled is a ReplicationRuleStatus enum value + ReplicationRuleStatusDisabled = "Disabled" +) + +const ( + // ReplicationStatusComplete is a ReplicationStatus enum value + ReplicationStatusComplete = "COMPLETE" + + // ReplicationStatusPending is a ReplicationStatus enum value + ReplicationStatusPending = "PENDING" + + // ReplicationStatusFailed is a ReplicationStatus enum value + ReplicationStatusFailed = "FAILED" + + // ReplicationStatusReplica is a ReplicationStatus enum value + ReplicationStatusReplica = "REPLICA" +) + +// If present, indicates that the requester was successfully charged for the +// request. +const ( + // RequestChargedRequester is a RequestCharged enum value + RequestChargedRequester = "requester" +) + +// Confirms that the requester knows that she or he will be charged for the +// request. Bucket owners need not specify this parameter in their requests. +// Documentation on downloading objects from requester pays buckets can be found +// at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html +const ( + // RequestPayerRequester is a RequestPayer enum value + RequestPayerRequester = "requester" +) + +const ( + // RestoreRequestTypeSelect is a RestoreRequestType enum value + RestoreRequestTypeSelect = "SELECT" +) + +const ( + // ServerSideEncryptionAes256 is a ServerSideEncryption enum value + ServerSideEncryptionAes256 = "AES256" + + // ServerSideEncryptionAwsKms is a ServerSideEncryption enum value + ServerSideEncryptionAwsKms = "aws:kms" +) + +const ( + // SseKmsEncryptedObjectsStatusEnabled is a SseKmsEncryptedObjectsStatus enum value + SseKmsEncryptedObjectsStatusEnabled = "Enabled" + + // SseKmsEncryptedObjectsStatusDisabled is a SseKmsEncryptedObjectsStatus enum value + SseKmsEncryptedObjectsStatusDisabled = "Disabled" +) + +const ( + // StorageClassStandard is a StorageClass enum value + StorageClassStandard = "STANDARD" + + // StorageClassReducedRedundancy is a StorageClass enum value + StorageClassReducedRedundancy = "REDUCED_REDUNDANCY" + + // StorageClassStandardIa is a StorageClass enum value + StorageClassStandardIa = "STANDARD_IA" + + // StorageClassOnezoneIa is a StorageClass enum value + StorageClassOnezoneIa = "ONEZONE_IA" +) + +const ( + // StorageClassAnalysisSchemaVersionV1 is a StorageClassAnalysisSchemaVersion enum value + StorageClassAnalysisSchemaVersionV1 = "V_1" +) + +const ( + // TaggingDirectiveCopy is a TaggingDirective enum value + TaggingDirectiveCopy = "COPY" + + // TaggingDirectiveReplace is a TaggingDirective enum value + TaggingDirectiveReplace = "REPLACE" +) + +const ( + // TierStandard is a Tier enum value + TierStandard = "Standard" + + // TierBulk is a Tier enum value + TierBulk = "Bulk" + + // TierExpedited is a Tier enum value + TierExpedited = "Expedited" +) + +const ( + // TransitionStorageClassGlacier is a TransitionStorageClass enum value + TransitionStorageClassGlacier = "GLACIER" + + // TransitionStorageClassStandardIa is a TransitionStorageClass enum value + TransitionStorageClassStandardIa = "STANDARD_IA" + + // TransitionStorageClassOnezoneIa is a TransitionStorageClass enum value + TransitionStorageClassOnezoneIa = "ONEZONE_IA" +) + +const ( + // TypeCanonicalUser is a Type enum value + TypeCanonicalUser = "CanonicalUser" + + // TypeAmazonCustomerByEmail is a Type enum value + TypeAmazonCustomerByEmail = "AmazonCustomerByEmail" + + // TypeGroup is a Type enum value + TypeGroup = "Group" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/body_hash.go 
b/vendor/github.com/aws/aws-sdk-go/service/s3/body_hash.go new file mode 100644 index 00000000..5c8ce5cc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/body_hash.go @@ -0,0 +1,249 @@ +package s3 + +import ( + "bytes" + "crypto/md5" + "crypto/sha256" + "encoding/base64" + "encoding/hex" + "fmt" + "hash" + "io" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/sdkio" +) + +const ( + contentMD5Header = "Content-Md5" + contentSha256Header = "X-Amz-Content-Sha256" + amzTeHeader = "X-Amz-Te" + amzTxEncodingHeader = "X-Amz-Transfer-Encoding" + + appendMD5TxEncoding = "append-md5" +) + +// contentMD5 computes and sets the HTTP Content-MD5 header for requests that +// require it. +func contentMD5(r *request.Request) { + h := md5.New() + + if !aws.IsReaderSeekable(r.Body) { + if r.Config.Logger != nil { + r.Config.Logger.Log(fmt.Sprintf( + "Unable to compute Content-MD5 for unseekable body, S3.%s", + r.Operation.Name)) + } + return + } + + if _, err := copySeekableBody(h, r.Body); err != nil { + r.Error = awserr.New("ContentMD5", "failed to compute body MD5", err) + return + } + + // encode the md5 checksum in base64 and set the request header. + v := base64.StdEncoding.EncodeToString(h.Sum(nil)) + r.HTTPRequest.Header.Set(contentMD5Header, v) +} + +// computeBodyHashes will add Content MD5 and Content Sha256 hashes to the +// request. If the body is not seekable or S3DisableContentMD5Validation set +// this handler will be ignored. +func computeBodyHashes(r *request.Request) { + if aws.BoolValue(r.Config.S3DisableContentMD5Validation) { + return + } + if r.IsPresigned() { + return + } + if r.Error != nil || !aws.IsReaderSeekable(r.Body) { + return + } + + var md5Hash, sha256Hash hash.Hash + hashers := make([]io.Writer, 0, 2) + + // Determine upfront which hashes can be set without overriding user + // provide header data. + if v := r.HTTPRequest.Header.Get(contentMD5Header); len(v) == 0 { + md5Hash = md5.New() + hashers = append(hashers, md5Hash) + } + + if v := r.HTTPRequest.Header.Get(contentSha256Header); len(v) == 0 { + sha256Hash = sha256.New() + hashers = append(hashers, sha256Hash) + } + + // Create the destination writer based on the hashes that are not already + // provided by the user. + var dst io.Writer + switch len(hashers) { + case 0: + return + case 1: + dst = hashers[0] + default: + dst = io.MultiWriter(hashers...) + } + + if _, err := copySeekableBody(dst, r.Body); err != nil { + r.Error = awserr.New("BodyHashError", "failed to compute body hashes", err) + return + } + + // For the hashes created, set the associated headers that the user did not + // already provide. + if md5Hash != nil { + sum := make([]byte, md5.Size) + encoded := make([]byte, md5Base64EncLen) + + base64.StdEncoding.Encode(encoded, md5Hash.Sum(sum[0:0])) + r.HTTPRequest.Header[contentMD5Header] = []string{string(encoded)} + } + + if sha256Hash != nil { + encoded := make([]byte, sha256HexEncLen) + sum := make([]byte, sha256.Size) + + hex.Encode(encoded, sha256Hash.Sum(sum[0:0])) + r.HTTPRequest.Header[contentSha256Header] = []string{string(encoded)} + } +} + +const ( + md5Base64EncLen = (md5.Size + 2) / 3 * 4 // base64.StdEncoding.EncodedLen + sha256HexEncLen = sha256.Size * 2 // hex.EncodedLen +) + +func copySeekableBody(dst io.Writer, src io.ReadSeeker) (int64, error) { + curPos, err := src.Seek(0, sdkio.SeekCurrent) + if err != nil { + return 0, err + } + + // hash the body. 
seek back to the first position after reading to reset + // the body for transmission. copy errors may be assumed to be from the + // body. + n, err := io.Copy(dst, src) + if err != nil { + return n, err + } + + _, err = src.Seek(curPos, sdkio.SeekStart) + if err != nil { + return n, err + } + + return n, nil +} + +// Adds the x-amz-te: append_md5 header to the request. This requests the service +// responds with a trailing MD5 checksum. +// +// Will not ask for append MD5 if disabled, the request is presigned or, +// or the API operation does not support content MD5 validation. +func askForTxEncodingAppendMD5(r *request.Request) { + if aws.BoolValue(r.Config.S3DisableContentMD5Validation) { + return + } + if r.IsPresigned() { + return + } + r.HTTPRequest.Header.Set(amzTeHeader, appendMD5TxEncoding) +} + +func useMD5ValidationReader(r *request.Request) { + if r.Error != nil { + return + } + + if v := r.HTTPResponse.Header.Get(amzTxEncodingHeader); v != appendMD5TxEncoding { + return + } + + var bodyReader *io.ReadCloser + var contentLen int64 + switch tv := r.Data.(type) { + case *GetObjectOutput: + bodyReader = &tv.Body + contentLen = aws.Int64Value(tv.ContentLength) + // Update ContentLength hiden the trailing MD5 checksum. + tv.ContentLength = aws.Int64(contentLen - md5.Size) + tv.ContentRange = aws.String(r.HTTPResponse.Header.Get("X-Amz-Content-Range")) + default: + r.Error = awserr.New("ChecksumValidationError", + fmt.Sprintf("%s: %s header received on unsupported API, %s", + amzTxEncodingHeader, appendMD5TxEncoding, r.Operation.Name, + ), nil) + return + } + + if contentLen < md5.Size { + r.Error = awserr.New("ChecksumValidationError", + fmt.Sprintf("invalid Content-Length %d for %s %s", + contentLen, appendMD5TxEncoding, amzTxEncodingHeader, + ), nil) + return + } + + // Wrap and swap the response body reader with the validation reader. 
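+	// The validation reader tees the first contentLen-md5.Size bytes of the
+	// body through an MD5 hash and, once the payload has been fully read,
+	// compares that digest against the trailing 16-byte checksum appended by
+	// the service, surfacing an InvalidChecksum error on mismatch.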
+ *bodyReader = newMD5ValidationReader(*bodyReader, contentLen-md5.Size) +} + +type md5ValidationReader struct { + rawReader io.ReadCloser + payload io.Reader + hash hash.Hash + + payloadLen int64 + read int64 +} + +func newMD5ValidationReader(reader io.ReadCloser, payloadLen int64) *md5ValidationReader { + h := md5.New() + return &md5ValidationReader{ + rawReader: reader, + payload: io.TeeReader(&io.LimitedReader{R: reader, N: payloadLen}, h), + hash: h, + payloadLen: payloadLen, + } +} + +func (v *md5ValidationReader) Read(p []byte) (n int, err error) { + n, err = v.payload.Read(p) + if err != nil && err != io.EOF { + return n, err + } + + v.read += int64(n) + + if err == io.EOF { + if v.read != v.payloadLen { + return n, io.ErrUnexpectedEOF + } + expectSum := make([]byte, md5.Size) + actualSum := make([]byte, md5.Size) + if _, sumReadErr := io.ReadFull(v.rawReader, expectSum); sumReadErr != nil { + return n, sumReadErr + } + actualSum = v.hash.Sum(actualSum[0:0]) + if !bytes.Equal(expectSum, actualSum) { + return n, awserr.New("InvalidChecksum", + fmt.Sprintf("expected MD5 checksum %s, got %s", + hex.EncodeToString(expectSum), + hex.EncodeToString(actualSum), + ), + nil) + } + } + + return n, err +} + +func (v *md5ValidationReader) Close() error { + return v.rawReader.Close() +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/bucket_location.go b/vendor/github.com/aws/aws-sdk-go/service/s3/bucket_location.go new file mode 100644 index 00000000..bc68a46a --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/bucket_location.go @@ -0,0 +1,106 @@ +package s3 + +import ( + "io/ioutil" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +var reBucketLocation = regexp.MustCompile(`>([^<>]+)<\/Location`) + +// NormalizeBucketLocation is a utility function which will update the +// passed in value to always be a region ID. Generally this would be used +// with GetBucketLocation API operation. +// +// Replaces empty string with "us-east-1", and "EU" with "eu-west-1". +// +// See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html +// for more information on the values that can be returned. +func NormalizeBucketLocation(loc string) string { + switch loc { + case "": + loc = "us-east-1" + case "EU": + loc = "eu-west-1" + } + + return loc +} + +// NormalizeBucketLocationHandler is a request handler which will update the +// GetBucketLocation's result LocationConstraint value to always be a region ID. +// +// Replaces empty string with "us-east-1", and "EU" with "eu-west-1". +// +// See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html +// for more information on the values that can be returned. +// +// req, result := svc.GetBucketLocationRequest(&s3.GetBucketLocationInput{ +// Bucket: aws.String(bucket), +// }) +// req.Handlers.Unmarshal.PushBackNamed(NormalizeBucketLocationHandler) +// err := req.Send() +var NormalizeBucketLocationHandler = request.NamedHandler{ + Name: "awssdk.s3.NormalizeBucketLocation", + Fn: func(req *request.Request) { + if req.Error != nil { + return + } + + out := req.Data.(*GetBucketLocationOutput) + loc := NormalizeBucketLocation(aws.StringValue(out.LocationConstraint)) + out.LocationConstraint = aws.String(loc) + }, +} + +// WithNormalizeBucketLocation is a request option which will update the +// GetBucketLocation's result LocationConstraint value to always be a region ID. 
+// +// Replaces empty string with "us-east-1", and "EU" with "eu-west-1". +// +// See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html +// for more information on the values that can be returned. +// +// result, err := svc.GetBucketLocationWithContext(ctx, +// &s3.GetBucketLocationInput{ +// Bucket: aws.String(bucket), +// }, +// s3.WithNormalizeBucketLocation, +// ) +func WithNormalizeBucketLocation(r *request.Request) { + r.Handlers.Unmarshal.PushBackNamed(NormalizeBucketLocationHandler) +} + +func buildGetBucketLocation(r *request.Request) { + if r.DataFilled() { + out := r.Data.(*GetBucketLocationOutput) + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = awserr.New("SerializationError", "failed reading response body", err) + return + } + + match := reBucketLocation.FindSubmatch(b) + if len(match) > 1 { + loc := string(match[1]) + out.LocationConstraint = aws.String(loc) + } + } +} + +func populateLocationConstraint(r *request.Request) { + if r.ParamsFilled() && aws.StringValue(r.Config.Region) != "us-east-1" { + in := r.Params.(*CreateBucketInput) + if in.CreateBucketConfiguration == nil { + r.Params = awsutil.CopyOf(r.Params) + in = r.Params.(*CreateBucketInput) + in.CreateBucketConfiguration = &CreateBucketConfiguration{ + LocationConstraint: r.Config.Region, + } + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go new file mode 100644 index 00000000..a55beab9 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go @@ -0,0 +1,70 @@ +package s3 + +import ( + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/request" +) + +func init() { + initClient = defaultInitClientFn + initRequest = defaultInitRequestFn +} + +func defaultInitClientFn(c *client.Client) { + // Support building custom endpoints based on config + c.Handlers.Build.PushFront(updateEndpointForS3Config) + + // Require SSL when using SSE keys + c.Handlers.Validate.PushBack(validateSSERequiresSSL) + c.Handlers.Build.PushBack(computeSSEKeys) + + // S3 uses custom error unmarshaling logic + c.Handlers.UnmarshalError.Clear() + c.Handlers.UnmarshalError.PushBack(unmarshalError) +} + +func defaultInitRequestFn(r *request.Request) { + // Add reuest handlers for specific platforms. + // e.g. 100-continue support for PUT requests using Go 1.6 + platformRequestHandlers(r) + + switch r.Operation.Name { + case opPutBucketCors, opPutBucketLifecycle, opPutBucketPolicy, + opPutBucketTagging, opDeleteObjects, opPutBucketLifecycleConfiguration, + opPutBucketReplication: + // These S3 operations require Content-MD5 to be set + r.Handlers.Build.PushBack(contentMD5) + case opGetBucketLocation: + // GetBucketLocation has custom parsing logic + r.Handlers.Unmarshal.PushFront(buildGetBucketLocation) + case opCreateBucket: + // Auto-populate LocationConstraint with current region + r.Handlers.Validate.PushFront(populateLocationConstraint) + case opCopyObject, opUploadPartCopy, opCompleteMultipartUpload: + r.Handlers.Unmarshal.PushFront(copyMultipartStatusOKUnmarhsalError) + case opPutObject, opUploadPart: + r.Handlers.Build.PushBack(computeBodyHashes) + // Disabled until #1837 root issue is resolved. + // case opGetObject: + // r.Handlers.Build.PushBack(askForTxEncodingAppendMD5) + // r.Handlers.Unmarshal.PushBack(useMD5ValidationReader) + } +} + +// bucketGetter is an accessor interface to grab the "Bucket" field from +// an S3 type. 
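+//
+// Generated input types such as UploadPartInput satisfy this interface through
+// their unexported getBucket methods, which lets request handlers read the
+// bucket name without reflection; for example (sketch, where r is assumed to
+// be a *request.Request):
+//
+//    if bg, ok := r.Params.(bucketGetter); ok {
+//        bucket := bg.getBucket()
+//        _ = bucket
+//    }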
+type bucketGetter interface { + getBucket() string +} + +// sseCustomerKeyGetter is an accessor interface to grab the "SSECustomerKey" +// field from an S3 type. +type sseCustomerKeyGetter interface { + getSSECustomerKey() string +} + +// copySourceSSECustomerKeyGetter is an accessor interface to grab the +// "CopySourceSSECustomerKey" field from an S3 type. +type copySourceSSECustomerKeyGetter interface { + getCopySourceSSECustomerKey() string +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go b/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go new file mode 100644 index 00000000..0def0225 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go @@ -0,0 +1,26 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package s3 provides the client and types for making API +// requests to Amazon Simple Storage Service. +// +// See https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01 for more information on this service. +// +// See s3 package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/s3/ +// +// Using the Client +// +// To contact Amazon Simple Storage Service with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Simple Storage Service client S3 for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#New +package s3 diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go b/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go new file mode 100644 index 00000000..39b912c2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go @@ -0,0 +1,109 @@ +// Upload Managers +// +// The s3manager package's Uploader provides concurrent upload of content to S3 +// by taking advantage of S3's Multipart APIs. The Uploader also supports both +// io.Reader for streaming uploads, and will also take advantage of io.ReadSeeker +// for optimizations if the Body satisfies that type. Once the Uploader instance +// is created you can call Upload concurrently from multiple goroutines safely. +// +// // The session the S3 Uploader will use +// sess := session.Must(session.NewSession()) +// +// // Create an uploader with the session and default options +// uploader := s3manager.NewUploader(sess) +// +// f, err := os.Open(filename) +// if err != nil { +// return fmt.Errorf("failed to open file %q, %v", filename, err) +// } +// +// // Upload the file to S3. +// result, err := uploader.Upload(&s3manager.UploadInput{ +// Bucket: aws.String(myBucket), +// Key: aws.String(myString), +// Body: f, +// }) +// if err != nil { +// return fmt.Errorf("failed to upload file, %v", err) +// } +// fmt.Printf("file uploaded to, %s\n", aws.StringValue(result.Location)) +// +// See the s3manager package's Uploader type documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#Uploader +// +// Download Manager +// +// The s3manager package's Downloader provides concurrently downloading of Objects +// from S3. The Downloader will write S3 Object content with an io.WriterAt. 
+// Once the Downloader instance is created you can call Download concurrently from +// multiple goroutines safely. +// +// // The session the S3 Downloader will use +// sess := session.Must(session.NewSession()) +// +// // Create a downloader with the session and default options +// downloader := s3manager.NewDownloader(sess) +// +// // Create a file to write the S3 Object contents to. +// f, err := os.Create(filename) +// if err != nil { +// return fmt.Errorf("failed to create file %q, %v", filename, err) +// } +// +// // Write the contents of S3 Object to the file +// n, err := downloader.Download(f, &s3.GetObjectInput{ +// Bucket: aws.String(myBucket), +// Key: aws.String(myString), +// }) +// if err != nil { +// return fmt.Errorf("failed to download file, %v", err) +// } +// fmt.Printf("file downloaded, %d bytes\n", n) +// +// See the s3manager package's Downloader type documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#Downloader +// +// Get Bucket Region +// +// GetBucketRegion will attempt to get the region for a bucket using a region +// hint to determine which AWS partition to perform the query on. Use this utility +// to determine the region a bucket is in. +// +// sess := session.Must(session.NewSession()) +// +// bucket := "my-bucket" +// region, err := s3manager.GetBucketRegion(ctx, sess, bucket, "us-west-2") +// if err != nil { +// if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "NotFound" { +// fmt.Fprintf(os.Stderr, "unable to find bucket %s's region not found\n", bucket) +// } +// return err +// } +// fmt.Printf("Bucket %s is in %s region\n", bucket, region) +// +// See the s3manager package's GetBucketRegion function documentation for more information +// https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#GetBucketRegion +// +// S3 Crypto Client +// +// The s3crypto package provides the tools to upload and download encrypted +// content from S3. The Encryption and Decryption clients can be used concurrently +// once the client is created. +// +// sess := session.Must(session.NewSession()) +// +// // Create the decryption client. +// svc := s3crypto.NewDecryptionClient(sess) +// +// // The object will be downloaded from S3 and decrypted locally. By metadata +// // about the object's encryption will instruct the decryption client how +// // decrypt the content of the object. By default KMS is used for keys. +// result, err := svc.GetObject(&s3.GetObjectInput { +// Bucket: aws.String(myBucket), +// Key: aws.String(myKey), +// }) +// +// See the s3crypto package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3crypto/ +// +package s3 diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/errors.go b/vendor/github.com/aws/aws-sdk-go/service/s3/errors.go new file mode 100644 index 00000000..931cb17b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/errors.go @@ -0,0 +1,48 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package s3 + +const ( + + // ErrCodeBucketAlreadyExists for service response error code + // "BucketAlreadyExists". + // + // The requested bucket name is not available. The bucket namespace is shared + // by all users of the system. Please select a different name and try again. + ErrCodeBucketAlreadyExists = "BucketAlreadyExists" + + // ErrCodeBucketAlreadyOwnedByYou for service response error code + // "BucketAlreadyOwnedByYou". 
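+	//
+	// Service errors surface through the awserr.Error interface, so callers
+	// can branch on these codes; for example (sketch, err is assumed to be an
+	// error returned by an S3 API call):
+	//
+	//    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == ErrCodeBucketAlreadyOwnedByYou {
+	//        // the bucket already exists and is owned by the caller
+	//    }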
+ ErrCodeBucketAlreadyOwnedByYou = "BucketAlreadyOwnedByYou" + + // ErrCodeNoSuchBucket for service response error code + // "NoSuchBucket". + // + // The specified bucket does not exist. + ErrCodeNoSuchBucket = "NoSuchBucket" + + // ErrCodeNoSuchKey for service response error code + // "NoSuchKey". + // + // The specified key does not exist. + ErrCodeNoSuchKey = "NoSuchKey" + + // ErrCodeNoSuchUpload for service response error code + // "NoSuchUpload". + // + // The specified multipart upload does not exist. + ErrCodeNoSuchUpload = "NoSuchUpload" + + // ErrCodeObjectAlreadyInActiveTierError for service response error code + // "ObjectAlreadyInActiveTierError". + // + // This operation is not allowed against this storage tier + ErrCodeObjectAlreadyInActiveTierError = "ObjectAlreadyInActiveTierError" + + // ErrCodeObjectNotInActiveTierError for service response error code + // "ObjectNotInActiveTierError". + // + // The source object of the COPY operation is not in the active tier and is + // only stored in Amazon Glacier. + ErrCodeObjectNotInActiveTierError = "ObjectNotInActiveTierError" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/host_style_bucket.go b/vendor/github.com/aws/aws-sdk-go/service/s3/host_style_bucket.go new file mode 100644 index 00000000..a7fbc2de --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/host_style_bucket.go @@ -0,0 +1,155 @@ +package s3 + +import ( + "fmt" + "net/url" + "regexp" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +// an operationBlacklist is a list of operation names that should a +// request handler should not be executed with. +type operationBlacklist []string + +// Continue will return true of the Request's operation name is not +// in the blacklist. False otherwise. +func (b operationBlacklist) Continue(r *request.Request) bool { + for i := 0; i < len(b); i++ { + if b[i] == r.Operation.Name { + return false + } + } + return true +} + +var accelerateOpBlacklist = operationBlacklist{ + opListBuckets, opCreateBucket, opDeleteBucket, +} + +// Request handler to automatically add the bucket name to the endpoint domain +// if possible. This style of bucket is valid for all bucket names which are +// DNS compatible and do not contain "." +func updateEndpointForS3Config(r *request.Request) { + forceHostStyle := aws.BoolValue(r.Config.S3ForcePathStyle) + accelerate := aws.BoolValue(r.Config.S3UseAccelerate) + + if accelerate && accelerateOpBlacklist.Continue(r) { + if forceHostStyle { + if r.Config.Logger != nil { + r.Config.Logger.Log("ERROR: aws.Config.S3UseAccelerate is not compatible with aws.Config.S3ForcePathStyle, ignoring S3ForcePathStyle.") + } + } + updateEndpointForAccelerate(r) + } else if !forceHostStyle && r.Operation.Name != opGetBucketLocation { + updateEndpointForHostStyle(r) + } +} + +func updateEndpointForHostStyle(r *request.Request) { + bucket, ok := bucketNameFromReqParams(r.Params) + if !ok { + // Ignore operation requests if the bucketname was not provided + // if this is an input validation error the validation handler + // will report it. 
+ return + } + + if !hostCompatibleBucketName(r.HTTPRequest.URL, bucket) { + // bucket name must be valid to put into the host + return + } + + moveBucketToHost(r.HTTPRequest.URL, bucket) +} + +var ( + accelElem = []byte("s3-accelerate.dualstack.") +) + +func updateEndpointForAccelerate(r *request.Request) { + bucket, ok := bucketNameFromReqParams(r.Params) + if !ok { + // Ignore operation requests if the bucketname was not provided + // if this is an input validation error the validation handler + // will report it. + return + } + + if !hostCompatibleBucketName(r.HTTPRequest.URL, bucket) { + r.Error = awserr.New("InvalidParameterException", + fmt.Sprintf("bucket name %s is not compatible with S3 Accelerate", bucket), + nil) + return + } + + parts := strings.Split(r.HTTPRequest.URL.Host, ".") + if len(parts) < 3 { + r.Error = awserr.New("InvalidParameterExecption", + fmt.Sprintf("unable to update endpoint host for S3 accelerate, hostname invalid, %s", + r.HTTPRequest.URL.Host), nil) + return + } + + if parts[0] == "s3" || strings.HasPrefix(parts[0], "s3-") { + parts[0] = "s3-accelerate" + } + for i := 1; i+1 < len(parts); i++ { + if parts[i] == aws.StringValue(r.Config.Region) { + parts = append(parts[:i], parts[i+1:]...) + break + } + } + + r.HTTPRequest.URL.Host = strings.Join(parts, ".") + + moveBucketToHost(r.HTTPRequest.URL, bucket) +} + +// Attempts to retrieve the bucket name from the request input parameters. +// If no bucket is found, or the field is empty "", false will be returned. +func bucketNameFromReqParams(params interface{}) (string, bool) { + if iface, ok := params.(bucketGetter); ok { + b := iface.getBucket() + return b, len(b) > 0 + } + + return "", false +} + +// hostCompatibleBucketName returns true if the request should +// put the bucket in the host. This is false if S3ForcePathStyle is +// explicitly set or if the bucket is not DNS compatible. +func hostCompatibleBucketName(u *url.URL, bucket string) bool { + // Bucket might be DNS compatible but dots in the hostname will fail + // certificate validation, so do not use host-style. + if u.Scheme == "https" && strings.Contains(bucket, ".") { + return false + } + + // if the bucket is DNS compatible + return dnsCompatibleBucketName(bucket) +} + +var reDomain = regexp.MustCompile(`^[a-z0-9][a-z0-9\.\-]{1,61}[a-z0-9]$`) +var reIPAddress = regexp.MustCompile(`^(\d+\.){3}\d+$`) + +// dnsCompatibleBucketName returns true if the bucket name is DNS compatible. +// Buckets created outside of the classic region MUST be DNS compatible. +func dnsCompatibleBucketName(bucket string) bool { + return reDomain.MatchString(bucket) && + !reIPAddress.MatchString(bucket) && + !strings.Contains(bucket, "..") +} + +// moveBucketToHost moves the bucket name from the URI path to URL host. +func moveBucketToHost(u *url.URL, bucket string) { + u.Host = bucket + "." 
+ u.Host + u.Path = strings.Replace(u.Path, "/{Bucket}", "", -1) + if u.Path == "" { + u.Path = "/" + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/platform_handlers.go b/vendor/github.com/aws/aws-sdk-go/service/s3/platform_handlers.go new file mode 100644 index 00000000..8e6f3307 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/platform_handlers.go @@ -0,0 +1,8 @@ +// +build !go1.6 + +package s3 + +import "github.com/aws/aws-sdk-go/aws/request" + +func platformRequestHandlers(r *request.Request) { +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/platform_handlers_go1.6.go b/vendor/github.com/aws/aws-sdk-go/service/s3/platform_handlers_go1.6.go new file mode 100644 index 00000000..14d05f7b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/platform_handlers_go1.6.go @@ -0,0 +1,28 @@ +// +build go1.6 + +package s3 + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +func platformRequestHandlers(r *request.Request) { + if r.Operation.HTTPMethod == "PUT" { + // 100-Continue should only be used on put requests. + r.Handlers.Sign.PushBack(add100Continue) + } +} + +func add100Continue(r *request.Request) { + if aws.BoolValue(r.Config.S3Disable100Continue) { + return + } + if r.HTTPRequest.ContentLength < 1024*1024*2 { + // Ignore requests smaller than 2MB. This helps prevent delaying + // requests unnecessarily. + return + } + + r.HTTPRequest.Header.Set("Expect", "100-Continue") +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go b/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go new file mode 100644 index 00000000..28c30d97 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go @@ -0,0 +1,399 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package s3iface provides an interface to enable mocking the Amazon Simple Storage Service service client +// for testing your code. +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. +package s3iface + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/service/s3" +) + +// S3API provides an interface to enable mocking the +// s3.S3 service client's API operation, +// paginators, and waiters. This make unit testing your code that calls out +// to the SDK's service client's calls easier. +// +// The best way to use this interface is so the SDK's service client's calls +// can be stubbed out for unit testing your code with the SDK without needing +// to inject custom request handlers into the SDK's request pipeline. +// +// // myFunc uses an SDK service client to make a request to +// // Amazon Simple Storage Service. +// func myFunc(svc s3iface.S3API) bool { +// // Make svc.AbortMultipartUpload request +// } +// +// func main() { +// sess := session.New() +// svc := s3.New(sess) +// +// myFunc(svc) +// } +// +// In your _test.go file: +// +// // Define a mock struct to be used in your unit tests of myFunc. 
+// type mockS3Client struct { +// s3iface.S3API +// } +// func (m *mockS3Client) AbortMultipartUpload(input *s3.AbortMultipartUploadInput) (*s3.AbortMultipartUploadOutput, error) { +// // mock response/functionality +// } +// +// func TestMyFunc(t *testing.T) { +// // Setup Test +// mockSvc := &mockS3Client{} +// +// myfunc(mockSvc) +// +// // Verify myFunc's functionality +// } +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. Its suggested to use the pattern above for testing, or using +// tooling to generate mocks to satisfy the interfaces. +type S3API interface { + AbortMultipartUpload(*s3.AbortMultipartUploadInput) (*s3.AbortMultipartUploadOutput, error) + AbortMultipartUploadWithContext(aws.Context, *s3.AbortMultipartUploadInput, ...request.Option) (*s3.AbortMultipartUploadOutput, error) + AbortMultipartUploadRequest(*s3.AbortMultipartUploadInput) (*request.Request, *s3.AbortMultipartUploadOutput) + + CompleteMultipartUpload(*s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) + CompleteMultipartUploadWithContext(aws.Context, *s3.CompleteMultipartUploadInput, ...request.Option) (*s3.CompleteMultipartUploadOutput, error) + CompleteMultipartUploadRequest(*s3.CompleteMultipartUploadInput) (*request.Request, *s3.CompleteMultipartUploadOutput) + + CopyObject(*s3.CopyObjectInput) (*s3.CopyObjectOutput, error) + CopyObjectWithContext(aws.Context, *s3.CopyObjectInput, ...request.Option) (*s3.CopyObjectOutput, error) + CopyObjectRequest(*s3.CopyObjectInput) (*request.Request, *s3.CopyObjectOutput) + + CreateBucket(*s3.CreateBucketInput) (*s3.CreateBucketOutput, error) + CreateBucketWithContext(aws.Context, *s3.CreateBucketInput, ...request.Option) (*s3.CreateBucketOutput, error) + CreateBucketRequest(*s3.CreateBucketInput) (*request.Request, *s3.CreateBucketOutput) + + CreateMultipartUpload(*s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) + CreateMultipartUploadWithContext(aws.Context, *s3.CreateMultipartUploadInput, ...request.Option) (*s3.CreateMultipartUploadOutput, error) + CreateMultipartUploadRequest(*s3.CreateMultipartUploadInput) (*request.Request, *s3.CreateMultipartUploadOutput) + + DeleteBucket(*s3.DeleteBucketInput) (*s3.DeleteBucketOutput, error) + DeleteBucketWithContext(aws.Context, *s3.DeleteBucketInput, ...request.Option) (*s3.DeleteBucketOutput, error) + DeleteBucketRequest(*s3.DeleteBucketInput) (*request.Request, *s3.DeleteBucketOutput) + + DeleteBucketAnalyticsConfiguration(*s3.DeleteBucketAnalyticsConfigurationInput) (*s3.DeleteBucketAnalyticsConfigurationOutput, error) + DeleteBucketAnalyticsConfigurationWithContext(aws.Context, *s3.DeleteBucketAnalyticsConfigurationInput, ...request.Option) (*s3.DeleteBucketAnalyticsConfigurationOutput, error) + DeleteBucketAnalyticsConfigurationRequest(*s3.DeleteBucketAnalyticsConfigurationInput) (*request.Request, *s3.DeleteBucketAnalyticsConfigurationOutput) + + DeleteBucketCors(*s3.DeleteBucketCorsInput) (*s3.DeleteBucketCorsOutput, error) + DeleteBucketCorsWithContext(aws.Context, *s3.DeleteBucketCorsInput, ...request.Option) (*s3.DeleteBucketCorsOutput, error) + DeleteBucketCorsRequest(*s3.DeleteBucketCorsInput) (*request.Request, *s3.DeleteBucketCorsOutput) + + DeleteBucketEncryption(*s3.DeleteBucketEncryptionInput) (*s3.DeleteBucketEncryptionOutput, error) + DeleteBucketEncryptionWithContext(aws.Context, *s3.DeleteBucketEncryptionInput, ...request.Option) 
(*s3.DeleteBucketEncryptionOutput, error) + DeleteBucketEncryptionRequest(*s3.DeleteBucketEncryptionInput) (*request.Request, *s3.DeleteBucketEncryptionOutput) + + DeleteBucketInventoryConfiguration(*s3.DeleteBucketInventoryConfigurationInput) (*s3.DeleteBucketInventoryConfigurationOutput, error) + DeleteBucketInventoryConfigurationWithContext(aws.Context, *s3.DeleteBucketInventoryConfigurationInput, ...request.Option) (*s3.DeleteBucketInventoryConfigurationOutput, error) + DeleteBucketInventoryConfigurationRequest(*s3.DeleteBucketInventoryConfigurationInput) (*request.Request, *s3.DeleteBucketInventoryConfigurationOutput) + + DeleteBucketLifecycle(*s3.DeleteBucketLifecycleInput) (*s3.DeleteBucketLifecycleOutput, error) + DeleteBucketLifecycleWithContext(aws.Context, *s3.DeleteBucketLifecycleInput, ...request.Option) (*s3.DeleteBucketLifecycleOutput, error) + DeleteBucketLifecycleRequest(*s3.DeleteBucketLifecycleInput) (*request.Request, *s3.DeleteBucketLifecycleOutput) + + DeleteBucketMetricsConfiguration(*s3.DeleteBucketMetricsConfigurationInput) (*s3.DeleteBucketMetricsConfigurationOutput, error) + DeleteBucketMetricsConfigurationWithContext(aws.Context, *s3.DeleteBucketMetricsConfigurationInput, ...request.Option) (*s3.DeleteBucketMetricsConfigurationOutput, error) + DeleteBucketMetricsConfigurationRequest(*s3.DeleteBucketMetricsConfigurationInput) (*request.Request, *s3.DeleteBucketMetricsConfigurationOutput) + + DeleteBucketPolicy(*s3.DeleteBucketPolicyInput) (*s3.DeleteBucketPolicyOutput, error) + DeleteBucketPolicyWithContext(aws.Context, *s3.DeleteBucketPolicyInput, ...request.Option) (*s3.DeleteBucketPolicyOutput, error) + DeleteBucketPolicyRequest(*s3.DeleteBucketPolicyInput) (*request.Request, *s3.DeleteBucketPolicyOutput) + + DeleteBucketReplication(*s3.DeleteBucketReplicationInput) (*s3.DeleteBucketReplicationOutput, error) + DeleteBucketReplicationWithContext(aws.Context, *s3.DeleteBucketReplicationInput, ...request.Option) (*s3.DeleteBucketReplicationOutput, error) + DeleteBucketReplicationRequest(*s3.DeleteBucketReplicationInput) (*request.Request, *s3.DeleteBucketReplicationOutput) + + DeleteBucketTagging(*s3.DeleteBucketTaggingInput) (*s3.DeleteBucketTaggingOutput, error) + DeleteBucketTaggingWithContext(aws.Context, *s3.DeleteBucketTaggingInput, ...request.Option) (*s3.DeleteBucketTaggingOutput, error) + DeleteBucketTaggingRequest(*s3.DeleteBucketTaggingInput) (*request.Request, *s3.DeleteBucketTaggingOutput) + + DeleteBucketWebsite(*s3.DeleteBucketWebsiteInput) (*s3.DeleteBucketWebsiteOutput, error) + DeleteBucketWebsiteWithContext(aws.Context, *s3.DeleteBucketWebsiteInput, ...request.Option) (*s3.DeleteBucketWebsiteOutput, error) + DeleteBucketWebsiteRequest(*s3.DeleteBucketWebsiteInput) (*request.Request, *s3.DeleteBucketWebsiteOutput) + + DeleteObject(*s3.DeleteObjectInput) (*s3.DeleteObjectOutput, error) + DeleteObjectWithContext(aws.Context, *s3.DeleteObjectInput, ...request.Option) (*s3.DeleteObjectOutput, error) + DeleteObjectRequest(*s3.DeleteObjectInput) (*request.Request, *s3.DeleteObjectOutput) + + DeleteObjectTagging(*s3.DeleteObjectTaggingInput) (*s3.DeleteObjectTaggingOutput, error) + DeleteObjectTaggingWithContext(aws.Context, *s3.DeleteObjectTaggingInput, ...request.Option) (*s3.DeleteObjectTaggingOutput, error) + DeleteObjectTaggingRequest(*s3.DeleteObjectTaggingInput) (*request.Request, *s3.DeleteObjectTaggingOutput) + + DeleteObjects(*s3.DeleteObjectsInput) (*s3.DeleteObjectsOutput, error) + DeleteObjectsWithContext(aws.Context, 
*s3.DeleteObjectsInput, ...request.Option) (*s3.DeleteObjectsOutput, error) + DeleteObjectsRequest(*s3.DeleteObjectsInput) (*request.Request, *s3.DeleteObjectsOutput) + + GetBucketAccelerateConfiguration(*s3.GetBucketAccelerateConfigurationInput) (*s3.GetBucketAccelerateConfigurationOutput, error) + GetBucketAccelerateConfigurationWithContext(aws.Context, *s3.GetBucketAccelerateConfigurationInput, ...request.Option) (*s3.GetBucketAccelerateConfigurationOutput, error) + GetBucketAccelerateConfigurationRequest(*s3.GetBucketAccelerateConfigurationInput) (*request.Request, *s3.GetBucketAccelerateConfigurationOutput) + + GetBucketAcl(*s3.GetBucketAclInput) (*s3.GetBucketAclOutput, error) + GetBucketAclWithContext(aws.Context, *s3.GetBucketAclInput, ...request.Option) (*s3.GetBucketAclOutput, error) + GetBucketAclRequest(*s3.GetBucketAclInput) (*request.Request, *s3.GetBucketAclOutput) + + GetBucketAnalyticsConfiguration(*s3.GetBucketAnalyticsConfigurationInput) (*s3.GetBucketAnalyticsConfigurationOutput, error) + GetBucketAnalyticsConfigurationWithContext(aws.Context, *s3.GetBucketAnalyticsConfigurationInput, ...request.Option) (*s3.GetBucketAnalyticsConfigurationOutput, error) + GetBucketAnalyticsConfigurationRequest(*s3.GetBucketAnalyticsConfigurationInput) (*request.Request, *s3.GetBucketAnalyticsConfigurationOutput) + + GetBucketCors(*s3.GetBucketCorsInput) (*s3.GetBucketCorsOutput, error) + GetBucketCorsWithContext(aws.Context, *s3.GetBucketCorsInput, ...request.Option) (*s3.GetBucketCorsOutput, error) + GetBucketCorsRequest(*s3.GetBucketCorsInput) (*request.Request, *s3.GetBucketCorsOutput) + + GetBucketEncryption(*s3.GetBucketEncryptionInput) (*s3.GetBucketEncryptionOutput, error) + GetBucketEncryptionWithContext(aws.Context, *s3.GetBucketEncryptionInput, ...request.Option) (*s3.GetBucketEncryptionOutput, error) + GetBucketEncryptionRequest(*s3.GetBucketEncryptionInput) (*request.Request, *s3.GetBucketEncryptionOutput) + + GetBucketInventoryConfiguration(*s3.GetBucketInventoryConfigurationInput) (*s3.GetBucketInventoryConfigurationOutput, error) + GetBucketInventoryConfigurationWithContext(aws.Context, *s3.GetBucketInventoryConfigurationInput, ...request.Option) (*s3.GetBucketInventoryConfigurationOutput, error) + GetBucketInventoryConfigurationRequest(*s3.GetBucketInventoryConfigurationInput) (*request.Request, *s3.GetBucketInventoryConfigurationOutput) + + GetBucketLifecycle(*s3.GetBucketLifecycleInput) (*s3.GetBucketLifecycleOutput, error) + GetBucketLifecycleWithContext(aws.Context, *s3.GetBucketLifecycleInput, ...request.Option) (*s3.GetBucketLifecycleOutput, error) + GetBucketLifecycleRequest(*s3.GetBucketLifecycleInput) (*request.Request, *s3.GetBucketLifecycleOutput) + + GetBucketLifecycleConfiguration(*s3.GetBucketLifecycleConfigurationInput) (*s3.GetBucketLifecycleConfigurationOutput, error) + GetBucketLifecycleConfigurationWithContext(aws.Context, *s3.GetBucketLifecycleConfigurationInput, ...request.Option) (*s3.GetBucketLifecycleConfigurationOutput, error) + GetBucketLifecycleConfigurationRequest(*s3.GetBucketLifecycleConfigurationInput) (*request.Request, *s3.GetBucketLifecycleConfigurationOutput) + + GetBucketLocation(*s3.GetBucketLocationInput) (*s3.GetBucketLocationOutput, error) + GetBucketLocationWithContext(aws.Context, *s3.GetBucketLocationInput, ...request.Option) (*s3.GetBucketLocationOutput, error) + GetBucketLocationRequest(*s3.GetBucketLocationInput) (*request.Request, *s3.GetBucketLocationOutput) + + GetBucketLogging(*s3.GetBucketLoggingInput) 
(*s3.GetBucketLoggingOutput, error) + GetBucketLoggingWithContext(aws.Context, *s3.GetBucketLoggingInput, ...request.Option) (*s3.GetBucketLoggingOutput, error) + GetBucketLoggingRequest(*s3.GetBucketLoggingInput) (*request.Request, *s3.GetBucketLoggingOutput) + + GetBucketMetricsConfiguration(*s3.GetBucketMetricsConfigurationInput) (*s3.GetBucketMetricsConfigurationOutput, error) + GetBucketMetricsConfigurationWithContext(aws.Context, *s3.GetBucketMetricsConfigurationInput, ...request.Option) (*s3.GetBucketMetricsConfigurationOutput, error) + GetBucketMetricsConfigurationRequest(*s3.GetBucketMetricsConfigurationInput) (*request.Request, *s3.GetBucketMetricsConfigurationOutput) + + GetBucketNotification(*s3.GetBucketNotificationConfigurationRequest) (*s3.NotificationConfigurationDeprecated, error) + GetBucketNotificationWithContext(aws.Context, *s3.GetBucketNotificationConfigurationRequest, ...request.Option) (*s3.NotificationConfigurationDeprecated, error) + GetBucketNotificationRequest(*s3.GetBucketNotificationConfigurationRequest) (*request.Request, *s3.NotificationConfigurationDeprecated) + + GetBucketNotificationConfiguration(*s3.GetBucketNotificationConfigurationRequest) (*s3.NotificationConfiguration, error) + GetBucketNotificationConfigurationWithContext(aws.Context, *s3.GetBucketNotificationConfigurationRequest, ...request.Option) (*s3.NotificationConfiguration, error) + GetBucketNotificationConfigurationRequest(*s3.GetBucketNotificationConfigurationRequest) (*request.Request, *s3.NotificationConfiguration) + + GetBucketPolicy(*s3.GetBucketPolicyInput) (*s3.GetBucketPolicyOutput, error) + GetBucketPolicyWithContext(aws.Context, *s3.GetBucketPolicyInput, ...request.Option) (*s3.GetBucketPolicyOutput, error) + GetBucketPolicyRequest(*s3.GetBucketPolicyInput) (*request.Request, *s3.GetBucketPolicyOutput) + + GetBucketReplication(*s3.GetBucketReplicationInput) (*s3.GetBucketReplicationOutput, error) + GetBucketReplicationWithContext(aws.Context, *s3.GetBucketReplicationInput, ...request.Option) (*s3.GetBucketReplicationOutput, error) + GetBucketReplicationRequest(*s3.GetBucketReplicationInput) (*request.Request, *s3.GetBucketReplicationOutput) + + GetBucketRequestPayment(*s3.GetBucketRequestPaymentInput) (*s3.GetBucketRequestPaymentOutput, error) + GetBucketRequestPaymentWithContext(aws.Context, *s3.GetBucketRequestPaymentInput, ...request.Option) (*s3.GetBucketRequestPaymentOutput, error) + GetBucketRequestPaymentRequest(*s3.GetBucketRequestPaymentInput) (*request.Request, *s3.GetBucketRequestPaymentOutput) + + GetBucketTagging(*s3.GetBucketTaggingInput) (*s3.GetBucketTaggingOutput, error) + GetBucketTaggingWithContext(aws.Context, *s3.GetBucketTaggingInput, ...request.Option) (*s3.GetBucketTaggingOutput, error) + GetBucketTaggingRequest(*s3.GetBucketTaggingInput) (*request.Request, *s3.GetBucketTaggingOutput) + + GetBucketVersioning(*s3.GetBucketVersioningInput) (*s3.GetBucketVersioningOutput, error) + GetBucketVersioningWithContext(aws.Context, *s3.GetBucketVersioningInput, ...request.Option) (*s3.GetBucketVersioningOutput, error) + GetBucketVersioningRequest(*s3.GetBucketVersioningInput) (*request.Request, *s3.GetBucketVersioningOutput) + + GetBucketWebsite(*s3.GetBucketWebsiteInput) (*s3.GetBucketWebsiteOutput, error) + GetBucketWebsiteWithContext(aws.Context, *s3.GetBucketWebsiteInput, ...request.Option) (*s3.GetBucketWebsiteOutput, error) + GetBucketWebsiteRequest(*s3.GetBucketWebsiteInput) (*request.Request, *s3.GetBucketWebsiteOutput) + + GetObject(*s3.GetObjectInput) 
(*s3.GetObjectOutput, error) + GetObjectWithContext(aws.Context, *s3.GetObjectInput, ...request.Option) (*s3.GetObjectOutput, error) + GetObjectRequest(*s3.GetObjectInput) (*request.Request, *s3.GetObjectOutput) + + GetObjectAcl(*s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) + GetObjectAclWithContext(aws.Context, *s3.GetObjectAclInput, ...request.Option) (*s3.GetObjectAclOutput, error) + GetObjectAclRequest(*s3.GetObjectAclInput) (*request.Request, *s3.GetObjectAclOutput) + + GetObjectTagging(*s3.GetObjectTaggingInput) (*s3.GetObjectTaggingOutput, error) + GetObjectTaggingWithContext(aws.Context, *s3.GetObjectTaggingInput, ...request.Option) (*s3.GetObjectTaggingOutput, error) + GetObjectTaggingRequest(*s3.GetObjectTaggingInput) (*request.Request, *s3.GetObjectTaggingOutput) + + GetObjectTorrent(*s3.GetObjectTorrentInput) (*s3.GetObjectTorrentOutput, error) + GetObjectTorrentWithContext(aws.Context, *s3.GetObjectTorrentInput, ...request.Option) (*s3.GetObjectTorrentOutput, error) + GetObjectTorrentRequest(*s3.GetObjectTorrentInput) (*request.Request, *s3.GetObjectTorrentOutput) + + HeadBucket(*s3.HeadBucketInput) (*s3.HeadBucketOutput, error) + HeadBucketWithContext(aws.Context, *s3.HeadBucketInput, ...request.Option) (*s3.HeadBucketOutput, error) + HeadBucketRequest(*s3.HeadBucketInput) (*request.Request, *s3.HeadBucketOutput) + + HeadObject(*s3.HeadObjectInput) (*s3.HeadObjectOutput, error) + HeadObjectWithContext(aws.Context, *s3.HeadObjectInput, ...request.Option) (*s3.HeadObjectOutput, error) + HeadObjectRequest(*s3.HeadObjectInput) (*request.Request, *s3.HeadObjectOutput) + + ListBucketAnalyticsConfigurations(*s3.ListBucketAnalyticsConfigurationsInput) (*s3.ListBucketAnalyticsConfigurationsOutput, error) + ListBucketAnalyticsConfigurationsWithContext(aws.Context, *s3.ListBucketAnalyticsConfigurationsInput, ...request.Option) (*s3.ListBucketAnalyticsConfigurationsOutput, error) + ListBucketAnalyticsConfigurationsRequest(*s3.ListBucketAnalyticsConfigurationsInput) (*request.Request, *s3.ListBucketAnalyticsConfigurationsOutput) + + ListBucketInventoryConfigurations(*s3.ListBucketInventoryConfigurationsInput) (*s3.ListBucketInventoryConfigurationsOutput, error) + ListBucketInventoryConfigurationsWithContext(aws.Context, *s3.ListBucketInventoryConfigurationsInput, ...request.Option) (*s3.ListBucketInventoryConfigurationsOutput, error) + ListBucketInventoryConfigurationsRequest(*s3.ListBucketInventoryConfigurationsInput) (*request.Request, *s3.ListBucketInventoryConfigurationsOutput) + + ListBucketMetricsConfigurations(*s3.ListBucketMetricsConfigurationsInput) (*s3.ListBucketMetricsConfigurationsOutput, error) + ListBucketMetricsConfigurationsWithContext(aws.Context, *s3.ListBucketMetricsConfigurationsInput, ...request.Option) (*s3.ListBucketMetricsConfigurationsOutput, error) + ListBucketMetricsConfigurationsRequest(*s3.ListBucketMetricsConfigurationsInput) (*request.Request, *s3.ListBucketMetricsConfigurationsOutput) + + ListBuckets(*s3.ListBucketsInput) (*s3.ListBucketsOutput, error) + ListBucketsWithContext(aws.Context, *s3.ListBucketsInput, ...request.Option) (*s3.ListBucketsOutput, error) + ListBucketsRequest(*s3.ListBucketsInput) (*request.Request, *s3.ListBucketsOutput) + + ListMultipartUploads(*s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) + ListMultipartUploadsWithContext(aws.Context, *s3.ListMultipartUploadsInput, ...request.Option) (*s3.ListMultipartUploadsOutput, error) + ListMultipartUploadsRequest(*s3.ListMultipartUploadsInput) 
(*request.Request, *s3.ListMultipartUploadsOutput) + + ListMultipartUploadsPages(*s3.ListMultipartUploadsInput, func(*s3.ListMultipartUploadsOutput, bool) bool) error + ListMultipartUploadsPagesWithContext(aws.Context, *s3.ListMultipartUploadsInput, func(*s3.ListMultipartUploadsOutput, bool) bool, ...request.Option) error + + ListObjectVersions(*s3.ListObjectVersionsInput) (*s3.ListObjectVersionsOutput, error) + ListObjectVersionsWithContext(aws.Context, *s3.ListObjectVersionsInput, ...request.Option) (*s3.ListObjectVersionsOutput, error) + ListObjectVersionsRequest(*s3.ListObjectVersionsInput) (*request.Request, *s3.ListObjectVersionsOutput) + + ListObjectVersionsPages(*s3.ListObjectVersionsInput, func(*s3.ListObjectVersionsOutput, bool) bool) error + ListObjectVersionsPagesWithContext(aws.Context, *s3.ListObjectVersionsInput, func(*s3.ListObjectVersionsOutput, bool) bool, ...request.Option) error + + ListObjects(*s3.ListObjectsInput) (*s3.ListObjectsOutput, error) + ListObjectsWithContext(aws.Context, *s3.ListObjectsInput, ...request.Option) (*s3.ListObjectsOutput, error) + ListObjectsRequest(*s3.ListObjectsInput) (*request.Request, *s3.ListObjectsOutput) + + ListObjectsPages(*s3.ListObjectsInput, func(*s3.ListObjectsOutput, bool) bool) error + ListObjectsPagesWithContext(aws.Context, *s3.ListObjectsInput, func(*s3.ListObjectsOutput, bool) bool, ...request.Option) error + + ListObjectsV2(*s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) + ListObjectsV2WithContext(aws.Context, *s3.ListObjectsV2Input, ...request.Option) (*s3.ListObjectsV2Output, error) + ListObjectsV2Request(*s3.ListObjectsV2Input) (*request.Request, *s3.ListObjectsV2Output) + + ListObjectsV2Pages(*s3.ListObjectsV2Input, func(*s3.ListObjectsV2Output, bool) bool) error + ListObjectsV2PagesWithContext(aws.Context, *s3.ListObjectsV2Input, func(*s3.ListObjectsV2Output, bool) bool, ...request.Option) error + + ListParts(*s3.ListPartsInput) (*s3.ListPartsOutput, error) + ListPartsWithContext(aws.Context, *s3.ListPartsInput, ...request.Option) (*s3.ListPartsOutput, error) + ListPartsRequest(*s3.ListPartsInput) (*request.Request, *s3.ListPartsOutput) + + ListPartsPages(*s3.ListPartsInput, func(*s3.ListPartsOutput, bool) bool) error + ListPartsPagesWithContext(aws.Context, *s3.ListPartsInput, func(*s3.ListPartsOutput, bool) bool, ...request.Option) error + + PutBucketAccelerateConfiguration(*s3.PutBucketAccelerateConfigurationInput) (*s3.PutBucketAccelerateConfigurationOutput, error) + PutBucketAccelerateConfigurationWithContext(aws.Context, *s3.PutBucketAccelerateConfigurationInput, ...request.Option) (*s3.PutBucketAccelerateConfigurationOutput, error) + PutBucketAccelerateConfigurationRequest(*s3.PutBucketAccelerateConfigurationInput) (*request.Request, *s3.PutBucketAccelerateConfigurationOutput) + + PutBucketAcl(*s3.PutBucketAclInput) (*s3.PutBucketAclOutput, error) + PutBucketAclWithContext(aws.Context, *s3.PutBucketAclInput, ...request.Option) (*s3.PutBucketAclOutput, error) + PutBucketAclRequest(*s3.PutBucketAclInput) (*request.Request, *s3.PutBucketAclOutput) + + PutBucketAnalyticsConfiguration(*s3.PutBucketAnalyticsConfigurationInput) (*s3.PutBucketAnalyticsConfigurationOutput, error) + PutBucketAnalyticsConfigurationWithContext(aws.Context, *s3.PutBucketAnalyticsConfigurationInput, ...request.Option) (*s3.PutBucketAnalyticsConfigurationOutput, error) + PutBucketAnalyticsConfigurationRequest(*s3.PutBucketAnalyticsConfigurationInput) (*request.Request, *s3.PutBucketAnalyticsConfigurationOutput) + + 
PutBucketCors(*s3.PutBucketCorsInput) (*s3.PutBucketCorsOutput, error) + PutBucketCorsWithContext(aws.Context, *s3.PutBucketCorsInput, ...request.Option) (*s3.PutBucketCorsOutput, error) + PutBucketCorsRequest(*s3.PutBucketCorsInput) (*request.Request, *s3.PutBucketCorsOutput) + + PutBucketEncryption(*s3.PutBucketEncryptionInput) (*s3.PutBucketEncryptionOutput, error) + PutBucketEncryptionWithContext(aws.Context, *s3.PutBucketEncryptionInput, ...request.Option) (*s3.PutBucketEncryptionOutput, error) + PutBucketEncryptionRequest(*s3.PutBucketEncryptionInput) (*request.Request, *s3.PutBucketEncryptionOutput) + + PutBucketInventoryConfiguration(*s3.PutBucketInventoryConfigurationInput) (*s3.PutBucketInventoryConfigurationOutput, error) + PutBucketInventoryConfigurationWithContext(aws.Context, *s3.PutBucketInventoryConfigurationInput, ...request.Option) (*s3.PutBucketInventoryConfigurationOutput, error) + PutBucketInventoryConfigurationRequest(*s3.PutBucketInventoryConfigurationInput) (*request.Request, *s3.PutBucketInventoryConfigurationOutput) + + PutBucketLifecycle(*s3.PutBucketLifecycleInput) (*s3.PutBucketLifecycleOutput, error) + PutBucketLifecycleWithContext(aws.Context, *s3.PutBucketLifecycleInput, ...request.Option) (*s3.PutBucketLifecycleOutput, error) + PutBucketLifecycleRequest(*s3.PutBucketLifecycleInput) (*request.Request, *s3.PutBucketLifecycleOutput) + + PutBucketLifecycleConfiguration(*s3.PutBucketLifecycleConfigurationInput) (*s3.PutBucketLifecycleConfigurationOutput, error) + PutBucketLifecycleConfigurationWithContext(aws.Context, *s3.PutBucketLifecycleConfigurationInput, ...request.Option) (*s3.PutBucketLifecycleConfigurationOutput, error) + PutBucketLifecycleConfigurationRequest(*s3.PutBucketLifecycleConfigurationInput) (*request.Request, *s3.PutBucketLifecycleConfigurationOutput) + + PutBucketLogging(*s3.PutBucketLoggingInput) (*s3.PutBucketLoggingOutput, error) + PutBucketLoggingWithContext(aws.Context, *s3.PutBucketLoggingInput, ...request.Option) (*s3.PutBucketLoggingOutput, error) + PutBucketLoggingRequest(*s3.PutBucketLoggingInput) (*request.Request, *s3.PutBucketLoggingOutput) + + PutBucketMetricsConfiguration(*s3.PutBucketMetricsConfigurationInput) (*s3.PutBucketMetricsConfigurationOutput, error) + PutBucketMetricsConfigurationWithContext(aws.Context, *s3.PutBucketMetricsConfigurationInput, ...request.Option) (*s3.PutBucketMetricsConfigurationOutput, error) + PutBucketMetricsConfigurationRequest(*s3.PutBucketMetricsConfigurationInput) (*request.Request, *s3.PutBucketMetricsConfigurationOutput) + + PutBucketNotification(*s3.PutBucketNotificationInput) (*s3.PutBucketNotificationOutput, error) + PutBucketNotificationWithContext(aws.Context, *s3.PutBucketNotificationInput, ...request.Option) (*s3.PutBucketNotificationOutput, error) + PutBucketNotificationRequest(*s3.PutBucketNotificationInput) (*request.Request, *s3.PutBucketNotificationOutput) + + PutBucketNotificationConfiguration(*s3.PutBucketNotificationConfigurationInput) (*s3.PutBucketNotificationConfigurationOutput, error) + PutBucketNotificationConfigurationWithContext(aws.Context, *s3.PutBucketNotificationConfigurationInput, ...request.Option) (*s3.PutBucketNotificationConfigurationOutput, error) + PutBucketNotificationConfigurationRequest(*s3.PutBucketNotificationConfigurationInput) (*request.Request, *s3.PutBucketNotificationConfigurationOutput) + + PutBucketPolicy(*s3.PutBucketPolicyInput) (*s3.PutBucketPolicyOutput, error) + PutBucketPolicyWithContext(aws.Context, *s3.PutBucketPolicyInput, 
...request.Option) (*s3.PutBucketPolicyOutput, error) + PutBucketPolicyRequest(*s3.PutBucketPolicyInput) (*request.Request, *s3.PutBucketPolicyOutput) + + PutBucketReplication(*s3.PutBucketReplicationInput) (*s3.PutBucketReplicationOutput, error) + PutBucketReplicationWithContext(aws.Context, *s3.PutBucketReplicationInput, ...request.Option) (*s3.PutBucketReplicationOutput, error) + PutBucketReplicationRequest(*s3.PutBucketReplicationInput) (*request.Request, *s3.PutBucketReplicationOutput) + + PutBucketRequestPayment(*s3.PutBucketRequestPaymentInput) (*s3.PutBucketRequestPaymentOutput, error) + PutBucketRequestPaymentWithContext(aws.Context, *s3.PutBucketRequestPaymentInput, ...request.Option) (*s3.PutBucketRequestPaymentOutput, error) + PutBucketRequestPaymentRequest(*s3.PutBucketRequestPaymentInput) (*request.Request, *s3.PutBucketRequestPaymentOutput) + + PutBucketTagging(*s3.PutBucketTaggingInput) (*s3.PutBucketTaggingOutput, error) + PutBucketTaggingWithContext(aws.Context, *s3.PutBucketTaggingInput, ...request.Option) (*s3.PutBucketTaggingOutput, error) + PutBucketTaggingRequest(*s3.PutBucketTaggingInput) (*request.Request, *s3.PutBucketTaggingOutput) + + PutBucketVersioning(*s3.PutBucketVersioningInput) (*s3.PutBucketVersioningOutput, error) + PutBucketVersioningWithContext(aws.Context, *s3.PutBucketVersioningInput, ...request.Option) (*s3.PutBucketVersioningOutput, error) + PutBucketVersioningRequest(*s3.PutBucketVersioningInput) (*request.Request, *s3.PutBucketVersioningOutput) + + PutBucketWebsite(*s3.PutBucketWebsiteInput) (*s3.PutBucketWebsiteOutput, error) + PutBucketWebsiteWithContext(aws.Context, *s3.PutBucketWebsiteInput, ...request.Option) (*s3.PutBucketWebsiteOutput, error) + PutBucketWebsiteRequest(*s3.PutBucketWebsiteInput) (*request.Request, *s3.PutBucketWebsiteOutput) + + PutObject(*s3.PutObjectInput) (*s3.PutObjectOutput, error) + PutObjectWithContext(aws.Context, *s3.PutObjectInput, ...request.Option) (*s3.PutObjectOutput, error) + PutObjectRequest(*s3.PutObjectInput) (*request.Request, *s3.PutObjectOutput) + + PutObjectAcl(*s3.PutObjectAclInput) (*s3.PutObjectAclOutput, error) + PutObjectAclWithContext(aws.Context, *s3.PutObjectAclInput, ...request.Option) (*s3.PutObjectAclOutput, error) + PutObjectAclRequest(*s3.PutObjectAclInput) (*request.Request, *s3.PutObjectAclOutput) + + PutObjectTagging(*s3.PutObjectTaggingInput) (*s3.PutObjectTaggingOutput, error) + PutObjectTaggingWithContext(aws.Context, *s3.PutObjectTaggingInput, ...request.Option) (*s3.PutObjectTaggingOutput, error) + PutObjectTaggingRequest(*s3.PutObjectTaggingInput) (*request.Request, *s3.PutObjectTaggingOutput) + + RestoreObject(*s3.RestoreObjectInput) (*s3.RestoreObjectOutput, error) + RestoreObjectWithContext(aws.Context, *s3.RestoreObjectInput, ...request.Option) (*s3.RestoreObjectOutput, error) + RestoreObjectRequest(*s3.RestoreObjectInput) (*request.Request, *s3.RestoreObjectOutput) + + UploadPart(*s3.UploadPartInput) (*s3.UploadPartOutput, error) + UploadPartWithContext(aws.Context, *s3.UploadPartInput, ...request.Option) (*s3.UploadPartOutput, error) + UploadPartRequest(*s3.UploadPartInput) (*request.Request, *s3.UploadPartOutput) + + UploadPartCopy(*s3.UploadPartCopyInput) (*s3.UploadPartCopyOutput, error) + UploadPartCopyWithContext(aws.Context, *s3.UploadPartCopyInput, ...request.Option) (*s3.UploadPartCopyOutput, error) + UploadPartCopyRequest(*s3.UploadPartCopyInput) (*request.Request, *s3.UploadPartCopyOutput) + + WaitUntilBucketExists(*s3.HeadBucketInput) error + 
WaitUntilBucketExistsWithContext(aws.Context, *s3.HeadBucketInput, ...request.WaiterOption) error + + WaitUntilBucketNotExists(*s3.HeadBucketInput) error + WaitUntilBucketNotExistsWithContext(aws.Context, *s3.HeadBucketInput, ...request.WaiterOption) error + + WaitUntilObjectExists(*s3.HeadObjectInput) error + WaitUntilObjectExistsWithContext(aws.Context, *s3.HeadObjectInput, ...request.WaiterOption) error + + WaitUntilObjectNotExists(*s3.HeadObjectInput) error + WaitUntilObjectNotExistsWithContext(aws.Context, *s3.HeadObjectInput, ...request.WaiterOption) error +} + +var _ S3API = (*s3.S3)(nil) diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/service.go b/vendor/github.com/aws/aws-sdk-go/service/s3/service.go new file mode 100644 index 00000000..614e477d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/service.go @@ -0,0 +1,93 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package s3 + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/restxml" +) + +// S3 provides the API operation methods for making requests to +// Amazon Simple Storage Service. See this package's package overview docs +// for details on the service. +// +// S3 methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type S3 struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "s3" // Service endpoint prefix API calls made to. + EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. +) + +// New creates a new instance of the S3 client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a S3 client from just a session. +// svc := s3.New(mySession) +// +// // Create a S3 client with additional configuration +// svc := s3.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *S3 { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *S3 { + svc := &S3{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2006-03-01", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(restxml.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(restxml.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(restxml.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(restxml.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a S3 operation and runs any +// custom request initialization. 
+func (c *S3) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/sse.go b/vendor/github.com/aws/aws-sdk-go/service/s3/sse.go new file mode 100644 index 00000000..8010c4fa --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/sse.go @@ -0,0 +1,54 @@ +package s3 + +import ( + "crypto/md5" + "encoding/base64" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +var errSSERequiresSSL = awserr.New("ConfigError", "cannot send SSE keys over HTTP.", nil) + +func validateSSERequiresSSL(r *request.Request) { + if r.HTTPRequest.URL.Scheme == "https" { + return + } + + if iface, ok := r.Params.(sseCustomerKeyGetter); ok { + if len(iface.getSSECustomerKey()) > 0 { + r.Error = errSSERequiresSSL + return + } + } + + if iface, ok := r.Params.(copySourceSSECustomerKeyGetter); ok { + if len(iface.getCopySourceSSECustomerKey()) > 0 { + r.Error = errSSERequiresSSL + return + } + } +} + +func computeSSEKeys(r *request.Request) { + headers := []string{ + "x-amz-server-side-encryption-customer-key", + "x-amz-copy-source-server-side-encryption-customer-key", + } + + for _, h := range headers { + md5h := h + "-md5" + if key := r.HTTPRequest.Header.Get(h); key != "" { + // Base64-encode the value + b64v := base64.StdEncoding.EncodeToString([]byte(key)) + r.HTTPRequest.Header.Set(h, b64v) + + // Add MD5 if it wasn't computed + if r.HTTPRequest.Header.Get(md5h) == "" { + sum := md5.Sum([]byte(key)) + b64sum := base64.StdEncoding.EncodeToString(sum[:]) + r.HTTPRequest.Header.Set(md5h, b64sum) + } + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go b/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go new file mode 100644 index 00000000..9f33efc6 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go @@ -0,0 +1,36 @@ +package s3 + +import ( + "bytes" + "io/ioutil" + "net/http" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/sdkio" +) + +func copyMultipartStatusOKUnmarhsalError(r *request.Request) { + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = awserr.New("SerializationError", "unable to read response body", err) + return + } + body := bytes.NewReader(b) + r.HTTPResponse.Body = ioutil.NopCloser(body) + defer body.Seek(0, sdkio.SeekStart) + + if body.Len() == 0 { + // If there is no body don't attempt to parse the body. 
+ return + } + + unmarshalError(r) + if err, ok := r.Error.(awserr.Error); ok && err != nil { + if err.Code() == "SerializationError" { + r.Error = nil + return + } + r.HTTPResponse.StatusCode = http.StatusServiceUnavailable + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go b/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go new file mode 100644 index 00000000..bcca8627 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go @@ -0,0 +1,103 @@ +package s3 + +import ( + "encoding/xml" + "fmt" + "io" + "io/ioutil" + "net/http" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +type xmlErrorResponse struct { + XMLName xml.Name `xml:"Error"` + Code string `xml:"Code"` + Message string `xml:"Message"` +} + +func unmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + defer io.Copy(ioutil.Discard, r.HTTPResponse.Body) + + hostID := r.HTTPResponse.Header.Get("X-Amz-Id-2") + + // Bucket exists in a different region, and request needs + // to be made to the correct region. + if r.HTTPResponse.StatusCode == http.StatusMovedPermanently { + r.Error = requestFailure{ + RequestFailure: awserr.NewRequestFailure( + awserr.New("BucketRegionError", + fmt.Sprintf("incorrect region, the bucket is not in '%s' region", + aws.StringValue(r.Config.Region)), + nil), + r.HTTPResponse.StatusCode, + r.RequestID, + ), + hostID: hostID, + } + return + } + + var errCode, errMsg string + + // Attempt to parse error from body if it is known + resp := &xmlErrorResponse{} + err := xml.NewDecoder(r.HTTPResponse.Body).Decode(resp) + if err != nil && err != io.EOF { + errCode = "SerializationError" + errMsg = "failed to decode S3 XML error response" + } else { + errCode = resp.Code + errMsg = resp.Message + err = nil + } + + // Fallback to status code converted to message if still no error code + if len(errCode) == 0 { + statusText := http.StatusText(r.HTTPResponse.StatusCode) + errCode = strings.Replace(statusText, " ", "", -1) + errMsg = statusText + } + + r.Error = requestFailure{ + RequestFailure: awserr.NewRequestFailure( + awserr.New(errCode, errMsg, err), + r.HTTPResponse.StatusCode, + r.RequestID, + ), + hostID: hostID, + } +} + +// A RequestFailure provides access to the S3 Request ID and Host ID values +// returned from API operation errors. Getting the error as a string will +// return the formated error with the same information as awserr.RequestFailure, +// while also adding the HostID value from the response. +type RequestFailure interface { + awserr.RequestFailure + + // Host ID is the S3 Host ID needed for debug, and contacting support + HostID() string +} + +type requestFailure struct { + awserr.RequestFailure + + hostID string +} + +func (r requestFailure) Error() string { + extra := fmt.Sprintf("status code: %d, request id: %s, host id: %s", + r.StatusCode(), r.RequestID(), r.hostID) + return awserr.SprintError(r.Code(), r.Message(), extra, r.OrigErr()) +} +func (r requestFailure) String() string { + return r.Error() +} +func (r requestFailure) HostID() string { + return r.hostID +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/s3/waiters.go new file mode 100644 index 00000000..2596c694 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/waiters.go @@ -0,0 +1,214 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package s3 + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilBucketExists uses the Amazon S3 API operation +// HeadBucket to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *S3) WaitUntilBucketExists(input *HeadBucketInput) error { + return c.WaitUntilBucketExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilBucketExistsWithContext is an extended version of WaitUntilBucketExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) WaitUntilBucketExistsWithContext(ctx aws.Context, input *HeadBucketInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilBucketExists", + MaxAttempts: 20, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 200, + }, + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 301, + }, + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 403, + }, + { + State: request.RetryWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 404, + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *HeadBucketInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.HeadBucketRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilBucketNotExists uses the Amazon S3 API operation +// HeadBucket to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *S3) WaitUntilBucketNotExists(input *HeadBucketInput) error { + return c.WaitUntilBucketNotExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilBucketNotExistsWithContext is an extended version of WaitUntilBucketNotExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) WaitUntilBucketNotExistsWithContext(ctx aws.Context, input *HeadBucketInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilBucketNotExists", + MaxAttempts: 20, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 404, + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *HeadBucketInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.HeadBucketRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilObjectExists uses the Amazon S3 API operation +// HeadObject to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *S3) WaitUntilObjectExists(input *HeadObjectInput) error { + return c.WaitUntilObjectExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilObjectExistsWithContext is an extended version of WaitUntilObjectExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) WaitUntilObjectExistsWithContext(ctx aws.Context, input *HeadObjectInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilObjectExists", + MaxAttempts: 20, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 200, + }, + { + State: request.RetryWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 404, + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *HeadObjectInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.HeadObjectRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilObjectNotExists uses the Amazon S3 API operation +// HeadObject to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *S3) WaitUntilObjectNotExists(input *HeadObjectInput) error { + return c.WaitUntilObjectNotExistsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilObjectNotExistsWithContext is an extended version of WaitUntilObjectNotExists. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *S3) WaitUntilObjectNotExistsWithContext(ctx aws.Context, input *HeadObjectInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilObjectNotExists", + MaxAttempts: 20, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.StatusWaiterMatch, + Expected: 404, + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *HeadObjectInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.HeadObjectRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go new file mode 100644 index 00000000..f47913f2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go @@ -0,0 +1,30242 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ssm + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opAddTagsToResource = "AddTagsToResource" + +// AddTagsToResourceRequest generates a "aws/request.Request" representing the +// client's request for the AddTagsToResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddTagsToResource for more information on using the AddTagsToResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddTagsToResourceRequest method. +// req, resp := client.AddTagsToResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/AddTagsToResource +func (c *SSM) AddTagsToResourceRequest(input *AddTagsToResourceInput) (req *request.Request, output *AddTagsToResourceOutput) { + op := &request.Operation{ + Name: opAddTagsToResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddTagsToResourceInput{} + } + + output = &AddTagsToResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddTagsToResource API operation for Amazon Simple Systems Manager (SSM). +// +// Adds or overwrites one or more tags for the specified resource. Tags are +// metadata that you can assign to your documents, managed instances, Maintenance +// Windows, Parameter Store parameters, and patch baselines. Tags enable you +// to categorize your resources in different ways, for example, by purpose, +// owner, or environment. Each tag consists of a key and an optional value, +// both of which you define. For example, you could define a set of tags for +// your account's managed instances that helps you track each instance's owner +// and stack level. For example: Key=Owner and Value=DbAdmin, SysAdmin, or Dev. +// Or Key=Stack and Value=Production, Pre-Production, or Test. 
+// +// Each resource can have a maximum of 50 tags. +// +// We recommend that you devise a set of tag keys that meets your needs for +// each resource type. Using a consistent set of tag keys makes it easier for +// you to manage your resources. You can search and filter the resources based +// on the tags you add. Tags don't have any semantic meaning to Amazon EC2 and +// are interpreted strictly as a string of characters. +// +// For more information about tags, see Tagging Your Amazon EC2 Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) +// in the Amazon EC2 User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation AddTagsToResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceType "InvalidResourceType" +// The resource type is not valid. For example, if you are attempting to tag +// an instance, the instance must be a registered, managed instance. +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeTooManyTagsError "TooManyTagsError" +// The Targets parameter includes too many tags. Remove one or more tags and +// try the command again. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/AddTagsToResource +func (c *SSM) AddTagsToResource(input *AddTagsToResourceInput) (*AddTagsToResourceOutput, error) { + req, out := c.AddTagsToResourceRequest(input) + return out, req.Send() +} + +// AddTagsToResourceWithContext is the same as AddTagsToResource with the addition of +// the ability to pass a context and additional request options. +// +// See AddTagsToResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) AddTagsToResourceWithContext(ctx aws.Context, input *AddTagsToResourceInput, opts ...request.Option) (*AddTagsToResourceOutput, error) { + req, out := c.AddTagsToResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCancelCommand = "CancelCommand" + +// CancelCommandRequest generates a "aws/request.Request" representing the +// client's request for the CancelCommand operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelCommand for more information on using the CancelCommand +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the CancelCommandRequest method. +// req, resp := client.CancelCommandRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CancelCommand +func (c *SSM) CancelCommandRequest(input *CancelCommandInput) (req *request.Request, output *CancelCommandOutput) { + op := &request.Operation{ + Name: opCancelCommand, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelCommandInput{} + } + + output = &CancelCommandOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelCommand API operation for Amazon Simple Systems Manager (SSM). +// +// Attempts to cancel the command specified by the Command ID. There is no guarantee +// that the command will be terminated and the underlying process stopped. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CancelCommand for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidCommandId "InvalidCommandId" +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeDuplicateInstanceId "DuplicateInstanceId" +// You cannot specify an instance ID in more than one association. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CancelCommand +func (c *SSM) CancelCommand(input *CancelCommandInput) (*CancelCommandOutput, error) { + req, out := c.CancelCommandRequest(input) + return out, req.Send() +} + +// CancelCommandWithContext is the same as CancelCommand with the addition of +// the ability to pass a context and additional request options. +// +// See CancelCommand for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CancelCommandWithContext(ctx aws.Context, input *CancelCommandInput, opts ...request.Option) (*CancelCommandOutput, error) { + req, out := c.CancelCommandRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateActivation = "CreateActivation" + +// CreateActivationRequest generates a "aws/request.Request" representing the +// client's request for the CreateActivation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateActivation for more information on using the CreateActivation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateActivationRequest method. +// req, resp := client.CreateActivationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateActivation +func (c *SSM) CreateActivationRequest(input *CreateActivationInput) (req *request.Request, output *CreateActivationOutput) { + op := &request.Operation{ + Name: opCreateActivation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateActivationInput{} + } + + output = &CreateActivationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateActivation API operation for Amazon Simple Systems Manager (SSM). +// +// Registers your on-premises server or virtual machine with Amazon EC2 so that +// you can manage these resources using Run Command. An on-premises server or +// virtual machine that has been registered with EC2 is called a managed instance. +// For more information about activations, see Setting Up Systems Manager in +// Hybrid Environments (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreateActivation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateActivation +func (c *SSM) CreateActivation(input *CreateActivationInput) (*CreateActivationOutput, error) { + req, out := c.CreateActivationRequest(input) + return out, req.Send() +} + +// CreateActivationWithContext is the same as CreateActivation with the addition of +// the ability to pass a context and additional request options. +// +// See CreateActivation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CreateActivationWithContext(ctx aws.Context, input *CreateActivationInput, opts ...request.Option) (*CreateActivationOutput, error) { + req, out := c.CreateActivationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateAssociation = "CreateAssociation" + +// CreateAssociationRequest generates a "aws/request.Request" representing the +// client's request for the CreateAssociation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateAssociation for more information on using the CreateAssociation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAssociationRequest method. +// req, resp := client.CreateAssociationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateAssociation +func (c *SSM) CreateAssociationRequest(input *CreateAssociationInput) (req *request.Request, output *CreateAssociationOutput) { + op := &request.Operation{ + Name: opCreateAssociation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateAssociationInput{} + } + + output = &CreateAssociationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateAssociation API operation for Amazon Simple Systems Manager (SSM). +// +// Associates the specified Systems Manager document with the specified instances +// or targets. +// +// When you associate a document with one or more instances using instance IDs +// or tags, the SSM Agent running on the instance processes the document and +// configures the instance as specified. +// +// If you associate a document with an instance that already has an associated +// document, the system throws the AssociationAlreadyExists exception. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreateAssociation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAssociationAlreadyExists "AssociationAlreadyExists" +// The specified association already exists. +// +// * ErrCodeAssociationLimitExceeded "AssociationLimitExceeded" +// You can have at most 2,000 active associations. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeUnsupportedPlatformType "UnsupportedPlatformType" +// The document does not support the platform type of the given instance ID(s). +// For example, you sent an document for a Windows instance to a Linux instance. 
+// +// * ErrCodeInvalidOutputLocation "InvalidOutputLocation" +// The output location is not valid or does not exist. +// +// * ErrCodeInvalidParameters "InvalidParameters" +// You must specify values for all required parameters in the Systems Manager +// document. You can only supply values to parameters defined in the Systems +// Manager document. +// +// * ErrCodeInvalidTarget "InvalidTarget" +// The target is not valid or does not exist. It might not be configured for +// EC2 Systems Manager or you might not have permission to perform the operation. +// +// * ErrCodeInvalidSchedule "InvalidSchedule" +// The schedule is invalid. Verify your cron or rate expression and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateAssociation +func (c *SSM) CreateAssociation(input *CreateAssociationInput) (*CreateAssociationOutput, error) { + req, out := c.CreateAssociationRequest(input) + return out, req.Send() +} + +// CreateAssociationWithContext is the same as CreateAssociation with the addition of +// the ability to pass a context and additional request options. +// +// See CreateAssociation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CreateAssociationWithContext(ctx aws.Context, input *CreateAssociationInput, opts ...request.Option) (*CreateAssociationOutput, error) { + req, out := c.CreateAssociationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateAssociationBatch = "CreateAssociationBatch" + +// CreateAssociationBatchRequest generates a "aws/request.Request" representing the +// client's request for the CreateAssociationBatch operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateAssociationBatch for more information on using the CreateAssociationBatch +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAssociationBatchRequest method. +// req, resp := client.CreateAssociationBatchRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateAssociationBatch +func (c *SSM) CreateAssociationBatchRequest(input *CreateAssociationBatchInput) (req *request.Request, output *CreateAssociationBatchOutput) { + op := &request.Operation{ + Name: opCreateAssociationBatch, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateAssociationBatchInput{} + } + + output = &CreateAssociationBatchOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateAssociationBatch API operation for Amazon Simple Systems Manager (SSM). +// +// Associates the specified Systems Manager document with the specified instances +// or targets. 
+// +// When you associate a document with one or more instances using instance IDs +// or tags, the SSM Agent running on the instance processes the document and +// configures the instance as specified. +// +// If you associate a document with an instance that already has an associated +// document, the system throws the AssociationAlreadyExists exception. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreateAssociationBatch for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidParameters "InvalidParameters" +// You must specify values for all required parameters in the Systems Manager +// document. You can only supply values to parameters defined in the Systems +// Manager document. +// +// * ErrCodeDuplicateInstanceId "DuplicateInstanceId" +// You cannot specify an instance ID in more than one association. +// +// * ErrCodeAssociationLimitExceeded "AssociationLimitExceeded" +// You can have at most 2,000 active associations. +// +// * ErrCodeUnsupportedPlatformType "UnsupportedPlatformType" +// The document does not support the platform type of the given instance ID(s). +// For example, you sent an document for a Windows instance to a Linux instance. +// +// * ErrCodeInvalidOutputLocation "InvalidOutputLocation" +// The output location is not valid or does not exist. +// +// * ErrCodeInvalidTarget "InvalidTarget" +// The target is not valid or does not exist. It might not be configured for +// EC2 Systems Manager or you might not have permission to perform the operation. +// +// * ErrCodeInvalidSchedule "InvalidSchedule" +// The schedule is invalid. Verify your cron or rate expression and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateAssociationBatch +func (c *SSM) CreateAssociationBatch(input *CreateAssociationBatchInput) (*CreateAssociationBatchOutput, error) { + req, out := c.CreateAssociationBatchRequest(input) + return out, req.Send() +} + +// CreateAssociationBatchWithContext is the same as CreateAssociationBatch with the addition of +// the ability to pass a context and additional request options. +// +// See CreateAssociationBatch for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CreateAssociationBatchWithContext(ctx aws.Context, input *CreateAssociationBatchInput, opts ...request.Option) (*CreateAssociationBatchOutput, error) { + req, out := c.CreateAssociationBatchRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateDocument = "CreateDocument" + +// CreateDocumentRequest generates a "aws/request.Request" representing the +// client's request for the CreateDocument operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateDocument for more information on using the CreateDocument +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateDocumentRequest method. +// req, resp := client.CreateDocumentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateDocument +func (c *SSM) CreateDocumentRequest(input *CreateDocumentInput) (req *request.Request, output *CreateDocumentOutput) { + op := &request.Operation{ + Name: opCreateDocument, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateDocumentInput{} + } + + output = &CreateDocumentOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateDocument API operation for Amazon Simple Systems Manager (SSM). +// +// Creates a Systems Manager document. +// +// After you create a document, you can use CreateAssociation to associate it +// with one or more running instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreateDocument for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDocumentAlreadyExists "DocumentAlreadyExists" +// The specified document already exists. +// +// * ErrCodeMaxDocumentSizeExceeded "MaxDocumentSizeExceeded" +// The size limit of a document is 64 KB. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocumentContent "InvalidDocumentContent" +// The content for the document is not valid. +// +// * ErrCodeDocumentLimitExceeded "DocumentLimitExceeded" +// You can have at most 200 active Systems Manager documents. +// +// * ErrCodeInvalidDocumentSchemaVersion "InvalidDocumentSchemaVersion" +// The version of the document schema is not supported. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateDocument +func (c *SSM) CreateDocument(input *CreateDocumentInput) (*CreateDocumentOutput, error) { + req, out := c.CreateDocumentRequest(input) + return out, req.Send() +} + +// CreateDocumentWithContext is the same as CreateDocument with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDocument for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CreateDocumentWithContext(ctx aws.Context, input *CreateDocumentInput, opts ...request.Option) (*CreateDocumentOutput, error) { + req, out := c.CreateDocumentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateMaintenanceWindow = "CreateMaintenanceWindow" + +// CreateMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the CreateMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateMaintenanceWindow for more information on using the CreateMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateMaintenanceWindowRequest method. +// req, resp := client.CreateMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateMaintenanceWindow +func (c *SSM) CreateMaintenanceWindowRequest(input *CreateMaintenanceWindowInput) (req *request.Request, output *CreateMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opCreateMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateMaintenanceWindowInput{} + } + + output = &CreateMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Creates a new Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreateMaintenanceWindow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" +// Error returned when an idempotent operation is retried and the parameters +// don't match the original call to the API with the same idempotency token. +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Error returned when the caller has exceeded the default resource limits. +// For example, too many Maintenance Windows or Patch baselines have been created. 
+// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateMaintenanceWindow +func (c *SSM) CreateMaintenanceWindow(input *CreateMaintenanceWindowInput) (*CreateMaintenanceWindowOutput, error) { + req, out := c.CreateMaintenanceWindowRequest(input) + return out, req.Send() +} + +// CreateMaintenanceWindowWithContext is the same as CreateMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See CreateMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CreateMaintenanceWindowWithContext(ctx aws.Context, input *CreateMaintenanceWindowInput, opts ...request.Option) (*CreateMaintenanceWindowOutput, error) { + req, out := c.CreateMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreatePatchBaseline = "CreatePatchBaseline" + +// CreatePatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the CreatePatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreatePatchBaseline for more information on using the CreatePatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreatePatchBaselineRequest method. +// req, resp := client.CreatePatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreatePatchBaseline +func (c *SSM) CreatePatchBaselineRequest(input *CreatePatchBaselineInput) (req *request.Request, output *CreatePatchBaselineOutput) { + op := &request.Operation{ + Name: opCreatePatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreatePatchBaselineInput{} + } + + output = &CreatePatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreatePatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Creates a patch baseline. +// +// For information about valid key and value pairs in PatchFilters for each +// supported operating system type, see PatchFilter (http://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreatePatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" +// Error returned when an idempotent operation is retried and the parameters +// don't match the original call to the API with the same idempotency token. +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Error returned when the caller has exceeded the default resource limits. +// For example, too many Maintenance Windows or Patch baselines have been created. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreatePatchBaseline +func (c *SSM) CreatePatchBaseline(input *CreatePatchBaselineInput) (*CreatePatchBaselineOutput, error) { + req, out := c.CreatePatchBaselineRequest(input) + return out, req.Send() +} + +// CreatePatchBaselineWithContext is the same as CreatePatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See CreatePatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CreatePatchBaselineWithContext(ctx aws.Context, input *CreatePatchBaselineInput, opts ...request.Option) (*CreatePatchBaselineOutput, error) { + req, out := c.CreatePatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateResourceDataSync = "CreateResourceDataSync" + +// CreateResourceDataSyncRequest generates a "aws/request.Request" representing the +// client's request for the CreateResourceDataSync operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateResourceDataSync for more information on using the CreateResourceDataSync +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateResourceDataSyncRequest method. 
+// req, resp := client.CreateResourceDataSyncRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateResourceDataSync +func (c *SSM) CreateResourceDataSyncRequest(input *CreateResourceDataSyncInput) (req *request.Request, output *CreateResourceDataSyncOutput) { + op := &request.Operation{ + Name: opCreateResourceDataSync, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateResourceDataSyncInput{} + } + + output = &CreateResourceDataSyncOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateResourceDataSync API operation for Amazon Simple Systems Manager (SSM). +// +// Creates a resource data sync configuration to a single bucket in Amazon S3. +// This is an asynchronous operation that returns immediately. After a successful +// initial sync is completed, the system continuously syncs data to the Amazon +// S3 bucket. To check the status of the sync, use the ListResourceDataSync. +// +// By default, data is not encrypted in Amazon S3. We strongly recommend that +// you enable encryption in Amazon S3 to ensure secure data storage. We also +// recommend that you secure access to the Amazon S3 bucket by creating a restrictive +// bucket policy. To view an example of a restrictive Amazon S3 bucket policy +// for Resource Data Sync, see Configuring Resource Data Sync for Inventory +// (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-configuring.html#sysman-inventory-datasync). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CreateResourceDataSync for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeResourceDataSyncCountExceededException "ResourceDataSyncCountExceededException" +// You have exceeded the allowed maximum sync configurations. +// +// * ErrCodeResourceDataSyncAlreadyExistsException "ResourceDataSyncAlreadyExistsException" +// A sync configuration with the same name already exists. +// +// * ErrCodeResourceDataSyncInvalidConfigurationException "ResourceDataSyncInvalidConfigurationException" +// The specified sync configuration is invalid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CreateResourceDataSync +func (c *SSM) CreateResourceDataSync(input *CreateResourceDataSyncInput) (*CreateResourceDataSyncOutput, error) { + req, out := c.CreateResourceDataSyncRequest(input) + return out, req.Send() +} + +// CreateResourceDataSyncWithContext is the same as CreateResourceDataSync with the addition of +// the ability to pass a context and additional request options. +// +// See CreateResourceDataSync for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) CreateResourceDataSyncWithContext(ctx aws.Context, input *CreateResourceDataSyncInput, opts ...request.Option) (*CreateResourceDataSyncOutput, error) { + req, out := c.CreateResourceDataSyncRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteActivation = "DeleteActivation" + +// DeleteActivationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteActivation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteActivation for more information on using the DeleteActivation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteActivationRequest method. +// req, resp := client.DeleteActivationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteActivation +func (c *SSM) DeleteActivationRequest(input *DeleteActivationInput) (req *request.Request, output *DeleteActivationOutput) { + op := &request.Operation{ + Name: opDeleteActivation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteActivationInput{} + } + + output = &DeleteActivationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteActivation API operation for Amazon Simple Systems Manager (SSM). +// +// Deletes an activation. You are not required to delete an activation. If you +// delete an activation, you can no longer use it to register additional managed +// instances. Deleting an activation does not de-register managed instances. +// You must manually de-register managed instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteActivation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidActivationId "InvalidActivationId" +// The activation ID is not valid. Verify the you entered the correct ActivationId +// or ActivationCode and try again. +// +// * ErrCodeInvalidActivation "InvalidActivation" +// The activation is not valid. The activation might have been deleted, or the +// ActivationId and the ActivationCode do not match. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteActivation +func (c *SSM) DeleteActivation(input *DeleteActivationInput) (*DeleteActivationOutput, error) { + req, out := c.DeleteActivationRequest(input) + return out, req.Send() +} + +// DeleteActivationWithContext is the same as DeleteActivation with the addition of +// the ability to pass a context and additional request options. 
+// +// See DeleteActivation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteActivationWithContext(ctx aws.Context, input *DeleteActivationInput, opts ...request.Option) (*DeleteActivationOutput, error) { + req, out := c.DeleteActivationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteAssociation = "DeleteAssociation" + +// DeleteAssociationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAssociation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAssociation for more information on using the DeleteAssociation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAssociationRequest method. +// req, resp := client.DeleteAssociationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteAssociation +func (c *SSM) DeleteAssociationRequest(input *DeleteAssociationInput) (req *request.Request, output *DeleteAssociationOutput) { + op := &request.Operation{ + Name: opDeleteAssociation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteAssociationInput{} + } + + output = &DeleteAssociationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteAssociation API operation for Amazon Simple Systems Manager (SSM). +// +// Disassociates the specified Systems Manager document from the specified instance. +// +// When you disassociate a document from an instance, it does not change the +// configuration of the instance. To change the configuration state of an instance +// after you disassociate a document, you must create a new document with the +// desired configuration and associate it with the instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteAssociation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. 
On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteAssociation +func (c *SSM) DeleteAssociation(input *DeleteAssociationInput) (*DeleteAssociationOutput, error) { + req, out := c.DeleteAssociationRequest(input) + return out, req.Send() +} + +// DeleteAssociationWithContext is the same as DeleteAssociation with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAssociation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteAssociationWithContext(ctx aws.Context, input *DeleteAssociationInput, opts ...request.Option) (*DeleteAssociationOutput, error) { + req, out := c.DeleteAssociationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteDocument = "DeleteDocument" + +// DeleteDocumentRequest generates a "aws/request.Request" representing the +// client's request for the DeleteDocument operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteDocument for more information on using the DeleteDocument +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteDocumentRequest method. +// req, resp := client.DeleteDocumentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteDocument +func (c *SSM) DeleteDocumentRequest(input *DeleteDocumentInput) (req *request.Request, output *DeleteDocumentOutput) { + op := &request.Operation{ + Name: opDeleteDocument, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteDocumentInput{} + } + + output = &DeleteDocumentOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteDocument API operation for Amazon Simple Systems Manager (SSM). +// +// Deletes the Systems Manager document and all instance associations to the +// document. +// +// Before you delete the document, we recommend that you use DeleteAssociation +// to disassociate all instances that are associated with the document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteDocument for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentOperation "InvalidDocumentOperation" +// You attempted to delete a document while it is still shared. You must stop +// sharing the document before you can delete it. +// +// * ErrCodeAssociatedInstances "AssociatedInstances" +// You must disassociate a document from all instances before you can delete +// it. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteDocument +func (c *SSM) DeleteDocument(input *DeleteDocumentInput) (*DeleteDocumentOutput, error) { + req, out := c.DeleteDocumentRequest(input) + return out, req.Send() +} + +// DeleteDocumentWithContext is the same as DeleteDocument with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteDocument for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteDocumentWithContext(ctx aws.Context, input *DeleteDocumentInput, opts ...request.Option) (*DeleteDocumentOutput, error) { + req, out := c.DeleteDocumentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteInventory = "DeleteInventory" + +// DeleteInventoryRequest generates a "aws/request.Request" representing the +// client's request for the DeleteInventory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteInventory for more information on using the DeleteInventory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteInventoryRequest method. +// req, resp := client.DeleteInventoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteInventory +func (c *SSM) DeleteInventoryRequest(input *DeleteInventoryInput) (req *request.Request, output *DeleteInventoryOutput) { + op := &request.Operation{ + Name: opDeleteInventory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteInventoryInput{} + } + + output = &DeleteInventoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteInventory API operation for Amazon Simple Systems Manager (SSM). +// +// Delete a custom inventory type, or the data associated with a custom Inventory +// type. Deleting a custom inventory type is also referred to as deleting a +// custom inventory schema. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteInventory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidTypeNameException "InvalidTypeNameException" +// The parameter type name is not valid. +// +// * ErrCodeInvalidOptionException "InvalidOptionException" +// The delete inventory option specified is not valid. Verify the option and +// try again. +// +// * ErrCodeInvalidDeleteInventoryParametersException "InvalidDeleteInventoryParametersException" +// One or more of the parameters specified for the delete operation is not valid. +// Verify all parameters and try again. +// +// * ErrCodeInvalidInventoryRequestException "InvalidInventoryRequestException" +// The request is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteInventory +func (c *SSM) DeleteInventory(input *DeleteInventoryInput) (*DeleteInventoryOutput, error) { + req, out := c.DeleteInventoryRequest(input) + return out, req.Send() +} + +// DeleteInventoryWithContext is the same as DeleteInventory with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteInventory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteInventoryWithContext(ctx aws.Context, input *DeleteInventoryInput, opts ...request.Option) (*DeleteInventoryOutput, error) { + req, out := c.DeleteInventoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteMaintenanceWindow = "DeleteMaintenanceWindow" + +// DeleteMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the DeleteMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteMaintenanceWindow for more information on using the DeleteMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteMaintenanceWindowRequest method. 
+// req, resp := client.DeleteMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteMaintenanceWindow +func (c *SSM) DeleteMaintenanceWindowRequest(input *DeleteMaintenanceWindowInput) (req *request.Request, output *DeleteMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opDeleteMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteMaintenanceWindowInput{} + } + + output = &DeleteMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Deletes a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteMaintenanceWindow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteMaintenanceWindow +func (c *SSM) DeleteMaintenanceWindow(input *DeleteMaintenanceWindowInput) (*DeleteMaintenanceWindowOutput, error) { + req, out := c.DeleteMaintenanceWindowRequest(input) + return out, req.Send() +} + +// DeleteMaintenanceWindowWithContext is the same as DeleteMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteMaintenanceWindowWithContext(ctx aws.Context, input *DeleteMaintenanceWindowInput, opts ...request.Option) (*DeleteMaintenanceWindowOutput, error) { + req, out := c.DeleteMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteParameter = "DeleteParameter" + +// DeleteParameterRequest generates a "aws/request.Request" representing the +// client's request for the DeleteParameter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteParameter for more information on using the DeleteParameter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteParameterRequest method. 
+// req, resp := client.DeleteParameterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteParameter +func (c *SSM) DeleteParameterRequest(input *DeleteParameterInput) (req *request.Request, output *DeleteParameterOutput) { + op := &request.Operation{ + Name: opDeleteParameter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteParameterInput{} + } + + output = &DeleteParameterOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteParameter API operation for Amazon Simple Systems Manager (SSM). +// +// Delete a parameter from the system. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteParameter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeParameterNotFound "ParameterNotFound" +// The parameter could not be found. Verify the name and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteParameter +func (c *SSM) DeleteParameter(input *DeleteParameterInput) (*DeleteParameterOutput, error) { + req, out := c.DeleteParameterRequest(input) + return out, req.Send() +} + +// DeleteParameterWithContext is the same as DeleteParameter with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteParameter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteParameterWithContext(ctx aws.Context, input *DeleteParameterInput, opts ...request.Option) (*DeleteParameterOutput, error) { + req, out := c.DeleteParameterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteParameters = "DeleteParameters" + +// DeleteParametersRequest generates a "aws/request.Request" representing the +// client's request for the DeleteParameters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteParameters for more information on using the DeleteParameters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteParametersRequest method. 
+// req, resp := client.DeleteParametersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteParameters +func (c *SSM) DeleteParametersRequest(input *DeleteParametersInput) (req *request.Request, output *DeleteParametersOutput) { + op := &request.Operation{ + Name: opDeleteParameters, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteParametersInput{} + } + + output = &DeleteParametersOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteParameters API operation for Amazon Simple Systems Manager (SSM). +// +// Delete a list of parameters. This API is used to delete parameters by using +// the Amazon EC2 console. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteParameters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteParameters +func (c *SSM) DeleteParameters(input *DeleteParametersInput) (*DeleteParametersOutput, error) { + req, out := c.DeleteParametersRequest(input) + return out, req.Send() +} + +// DeleteParametersWithContext is the same as DeleteParameters with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteParameters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteParametersWithContext(ctx aws.Context, input *DeleteParametersInput, opts ...request.Option) (*DeleteParametersOutput, error) { + req, out := c.DeleteParametersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeletePatchBaseline = "DeletePatchBaseline" + +// DeletePatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the DeletePatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeletePatchBaseline for more information on using the DeletePatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeletePatchBaselineRequest method. 
+// req, resp := client.DeletePatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeletePatchBaseline +func (c *SSM) DeletePatchBaselineRequest(input *DeletePatchBaselineInput) (req *request.Request, output *DeletePatchBaselineOutput) { + op := &request.Operation{ + Name: opDeletePatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeletePatchBaselineInput{} + } + + output = &DeletePatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeletePatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Deletes a patch baseline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeletePatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// Error returned if an attempt is made to delete a patch baseline that is registered +// for a patch group. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeletePatchBaseline +func (c *SSM) DeletePatchBaseline(input *DeletePatchBaselineInput) (*DeletePatchBaselineOutput, error) { + req, out := c.DeletePatchBaselineRequest(input) + return out, req.Send() +} + +// DeletePatchBaselineWithContext is the same as DeletePatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeletePatchBaselineWithContext(ctx aws.Context, input *DeletePatchBaselineInput, opts ...request.Option) (*DeletePatchBaselineOutput, error) { + req, out := c.DeletePatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteResourceDataSync = "DeleteResourceDataSync" + +// DeleteResourceDataSyncRequest generates a "aws/request.Request" representing the +// client's request for the DeleteResourceDataSync operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteResourceDataSync for more information on using the DeleteResourceDataSync +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteResourceDataSyncRequest method. 
+// req, resp := client.DeleteResourceDataSyncRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteResourceDataSync +func (c *SSM) DeleteResourceDataSyncRequest(input *DeleteResourceDataSyncInput) (req *request.Request, output *DeleteResourceDataSyncOutput) { + op := &request.Operation{ + Name: opDeleteResourceDataSync, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteResourceDataSyncInput{} + } + + output = &DeleteResourceDataSyncOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteResourceDataSync API operation for Amazon Simple Systems Manager (SSM). +// +// Deletes a Resource Data Sync configuration. After the configuration is deleted, +// changes to inventory data on managed instances are no longer synced with +// the target Amazon S3 bucket. Deleting a sync configuration does not delete +// data in the target Amazon S3 bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteResourceDataSync for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeResourceDataSyncNotFoundException "ResourceDataSyncNotFoundException" +// The specified sync name was not found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteResourceDataSync +func (c *SSM) DeleteResourceDataSync(input *DeleteResourceDataSyncInput) (*DeleteResourceDataSyncOutput, error) { + req, out := c.DeleteResourceDataSyncRequest(input) + return out, req.Send() +} + +// DeleteResourceDataSyncWithContext is the same as DeleteResourceDataSync with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteResourceDataSync for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteResourceDataSyncWithContext(ctx aws.Context, input *DeleteResourceDataSyncInput, opts ...request.Option) (*DeleteResourceDataSyncOutput, error) { + req, out := c.DeleteResourceDataSyncRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeregisterManagedInstance = "DeregisterManagedInstance" + +// DeregisterManagedInstanceRequest generates a "aws/request.Request" representing the +// client's request for the DeregisterManagedInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeregisterManagedInstance for more information on using the DeregisterManagedInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
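+//
+// Illustrative sketch only: one way to inject a custom header before sending,
+// assuming "client" is an initialized *SSM client, "params" is a populated
+// *DeregisterManagedInstanceInput, and "fmt" is imported; the header name and
+// value are placeholders.
+//
+//    req, out := client.DeregisterManagedInstanceRequest(params)
+//    req.HTTPRequest.Header.Set("X-Example-Trace-Id", "abc123")
+//    if err := req.Send(); err == nil {
+//        fmt.Println(out)
+//    }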
+// +// +// // Example sending a request using the DeregisterManagedInstanceRequest method. +// req, resp := client.DeregisterManagedInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterManagedInstance +func (c *SSM) DeregisterManagedInstanceRequest(input *DeregisterManagedInstanceInput) (req *request.Request, output *DeregisterManagedInstanceOutput) { + op := &request.Operation{ + Name: opDeregisterManagedInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeregisterManagedInstanceInput{} + } + + output = &DeregisterManagedInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeregisterManagedInstance API operation for Amazon Simple Systems Manager (SSM). +// +// Removes the server or virtual machine from the list of registered servers. +// You can reregister the instance again at any time. If you don't plan to use +// Run Command on the server, we suggest uninstalling the SSM Agent first. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeregisterManagedInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterManagedInstance +func (c *SSM) DeregisterManagedInstance(input *DeregisterManagedInstanceInput) (*DeregisterManagedInstanceOutput, error) { + req, out := c.DeregisterManagedInstanceRequest(input) + return out, req.Send() +} + +// DeregisterManagedInstanceWithContext is the same as DeregisterManagedInstance with the addition of +// the ability to pass a context and additional request options. +// +// See DeregisterManagedInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeregisterManagedInstanceWithContext(ctx aws.Context, input *DeregisterManagedInstanceInput, opts ...request.Option) (*DeregisterManagedInstanceOutput, error) { + req, out := c.DeregisterManagedInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeregisterPatchBaselineForPatchGroup = "DeregisterPatchBaselineForPatchGroup" + +// DeregisterPatchBaselineForPatchGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeregisterPatchBaselineForPatchGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeregisterPatchBaselineForPatchGroup for more information on using the DeregisterPatchBaselineForPatchGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeregisterPatchBaselineForPatchGroupRequest method. +// req, resp := client.DeregisterPatchBaselineForPatchGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterPatchBaselineForPatchGroup +func (c *SSM) DeregisterPatchBaselineForPatchGroupRequest(input *DeregisterPatchBaselineForPatchGroupInput) (req *request.Request, output *DeregisterPatchBaselineForPatchGroupOutput) { + op := &request.Operation{ + Name: opDeregisterPatchBaselineForPatchGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeregisterPatchBaselineForPatchGroupInput{} + } + + output = &DeregisterPatchBaselineForPatchGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeregisterPatchBaselineForPatchGroup API operation for Amazon Simple Systems Manager (SSM). +// +// Removes a patch group from a patch baseline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeregisterPatchBaselineForPatchGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterPatchBaselineForPatchGroup +func (c *SSM) DeregisterPatchBaselineForPatchGroup(input *DeregisterPatchBaselineForPatchGroupInput) (*DeregisterPatchBaselineForPatchGroupOutput, error) { + req, out := c.DeregisterPatchBaselineForPatchGroupRequest(input) + return out, req.Send() +} + +// DeregisterPatchBaselineForPatchGroupWithContext is the same as DeregisterPatchBaselineForPatchGroup with the addition of +// the ability to pass a context and additional request options. +// +// See DeregisterPatchBaselineForPatchGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
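+//
+// Illustrative sketch only: bounding the call with a timeout, assuming "client" is
+// an initialized *SSM client, "params" is a populated
+// *DeregisterPatchBaselineForPatchGroupInput, and the standard "context" and "time"
+// packages are imported.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := client.DeregisterPatchBaselineForPatchGroupWithContext(ctx, params)
+//    if err != nil {
+//        fmt.Println(err)
+//    } else {
+//        fmt.Println(out)
+//    }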
+func (c *SSM) DeregisterPatchBaselineForPatchGroupWithContext(ctx aws.Context, input *DeregisterPatchBaselineForPatchGroupInput, opts ...request.Option) (*DeregisterPatchBaselineForPatchGroupOutput, error) { + req, out := c.DeregisterPatchBaselineForPatchGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeregisterTargetFromMaintenanceWindow = "DeregisterTargetFromMaintenanceWindow" + +// DeregisterTargetFromMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the DeregisterTargetFromMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeregisterTargetFromMaintenanceWindow for more information on using the DeregisterTargetFromMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeregisterTargetFromMaintenanceWindowRequest method. +// req, resp := client.DeregisterTargetFromMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterTargetFromMaintenanceWindow +func (c *SSM) DeregisterTargetFromMaintenanceWindowRequest(input *DeregisterTargetFromMaintenanceWindowInput) (req *request.Request, output *DeregisterTargetFromMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opDeregisterTargetFromMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeregisterTargetFromMaintenanceWindowInput{} + } + + output = &DeregisterTargetFromMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeregisterTargetFromMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Removes a target from a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeregisterTargetFromMaintenanceWindow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeTargetInUseException "TargetInUseException" +// You specified the Safe option for the DeregisterTargetFromMaintenanceWindow +// operation, but the target is still referenced in a task. 
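+//
+// Illustrative sketch only: distinguishing the error codes listed above with a
+// runtime type assertion, assuming "client" and "params" are defined by the caller
+// and the "awserr" and "ssm" packages are imported.
+//
+//    _, err := client.DeregisterTargetFromMaintenanceWindow(params)
+//    if aerr, ok := err.(awserr.Error); ok {
+//        switch aerr.Code() {
+//        case ssm.ErrCodeTargetInUseException:
+//            // The target is still referenced by a task; deregister the task first.
+//        default:
+//            fmt.Println(aerr.Code(), aerr.Message())
+//        }
+//    }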
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterTargetFromMaintenanceWindow +func (c *SSM) DeregisterTargetFromMaintenanceWindow(input *DeregisterTargetFromMaintenanceWindowInput) (*DeregisterTargetFromMaintenanceWindowOutput, error) { + req, out := c.DeregisterTargetFromMaintenanceWindowRequest(input) + return out, req.Send() +} + +// DeregisterTargetFromMaintenanceWindowWithContext is the same as DeregisterTargetFromMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See DeregisterTargetFromMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeregisterTargetFromMaintenanceWindowWithContext(ctx aws.Context, input *DeregisterTargetFromMaintenanceWindowInput, opts ...request.Option) (*DeregisterTargetFromMaintenanceWindowOutput, error) { + req, out := c.DeregisterTargetFromMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeregisterTaskFromMaintenanceWindow = "DeregisterTaskFromMaintenanceWindow" + +// DeregisterTaskFromMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the DeregisterTaskFromMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeregisterTaskFromMaintenanceWindow for more information on using the DeregisterTaskFromMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeregisterTaskFromMaintenanceWindowRequest method. +// req, resp := client.DeregisterTaskFromMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterTaskFromMaintenanceWindow +func (c *SSM) DeregisterTaskFromMaintenanceWindowRequest(input *DeregisterTaskFromMaintenanceWindowInput) (req *request.Request, output *DeregisterTaskFromMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opDeregisterTaskFromMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeregisterTaskFromMaintenanceWindowInput{} + } + + output = &DeregisterTaskFromMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeregisterTaskFromMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Removes a task from a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeregisterTaskFromMaintenanceWindow for usage and error information. 
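+//
+// Illustrative sketch only: removing a registered task by window ID and task ID,
+// assuming "client" is an initialized *SSM client and the "aws" and "ssm" packages
+// are imported; both IDs are placeholders.
+//
+//    out, err := client.DeregisterTaskFromMaintenanceWindow(&ssm.DeregisterTaskFromMaintenanceWindowInput{
+//        WindowId:     aws.String("mw-0123456789abcdef0"),
+//        WindowTaskId: aws.String("example-task-id"),
+//    })
+//    if err == nil {
+//        fmt.Println(out)
+//    }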
+// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeregisterTaskFromMaintenanceWindow +func (c *SSM) DeregisterTaskFromMaintenanceWindow(input *DeregisterTaskFromMaintenanceWindowInput) (*DeregisterTaskFromMaintenanceWindowOutput, error) { + req, out := c.DeregisterTaskFromMaintenanceWindowRequest(input) + return out, req.Send() +} + +// DeregisterTaskFromMaintenanceWindowWithContext is the same as DeregisterTaskFromMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See DeregisterTaskFromMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeregisterTaskFromMaintenanceWindowWithContext(ctx aws.Context, input *DeregisterTaskFromMaintenanceWindowInput, opts ...request.Option) (*DeregisterTaskFromMaintenanceWindowOutput, error) { + req, out := c.DeregisterTaskFromMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeActivations = "DescribeActivations" + +// DescribeActivationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeActivations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeActivations for more information on using the DescribeActivations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeActivationsRequest method. +// req, resp := client.DescribeActivationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeActivations +func (c *SSM) DescribeActivationsRequest(input *DescribeActivationsInput) (req *request.Request, output *DescribeActivationsOutput) { + op := &request.Operation{ + Name: opDescribeActivations, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeActivationsInput{} + } + + output = &DescribeActivationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeActivations API operation for Amazon Simple Systems Manager (SSM). 
+// +// Details about the activation, including: the date and time the activation +// was created, the expiration date, the IAM role assigned to the instances +// in the activation, and the number of instances activated by this registration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeActivations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. Verify the you entered the correct name and +// try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeActivations +func (c *SSM) DescribeActivations(input *DescribeActivationsInput) (*DescribeActivationsOutput, error) { + req, out := c.DescribeActivationsRequest(input) + return out, req.Send() +} + +// DescribeActivationsWithContext is the same as DescribeActivations with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeActivations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeActivationsWithContext(ctx aws.Context, input *DescribeActivationsInput, opts ...request.Option) (*DescribeActivationsOutput, error) { + req, out := c.DescribeActivationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeActivationsPages iterates over the pages of a DescribeActivations operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeActivations method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeActivations operation. +// pageNum := 0 +// err := client.DescribeActivationsPages(params, +// func(page *DescribeActivationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) DescribeActivationsPages(input *DescribeActivationsInput, fn func(*DescribeActivationsOutput, bool) bool) error { + return c.DescribeActivationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeActivationsPagesWithContext same as DescribeActivationsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
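+//
+// Illustrative sketch only: walking every page under a cancellable context, assuming
+// "client" is an initialized *SSM client and the "aws", "ssm", "context", and "fmt"
+// packages are imported.
+//
+//    ctx, cancel := context.WithCancel(context.Background())
+//    defer cancel()
+//    err := client.DescribeActivationsPagesWithContext(ctx, &ssm.DescribeActivationsInput{},
+//        func(page *ssm.DescribeActivationsOutput, lastPage bool) bool {
+//            for _, activation := range page.ActivationList {
+//                fmt.Println(aws.StringValue(activation.ActivationId))
+//            }
+//            return true // keep iterating until the final page
+//        })
+//    if err != nil {
+//        fmt.Println(err)
+//    }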
+func (c *SSM) DescribeActivationsPagesWithContext(ctx aws.Context, input *DescribeActivationsInput, fn func(*DescribeActivationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeActivationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeActivationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeActivationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeAssociation = "DescribeAssociation" + +// DescribeAssociationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAssociation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAssociation for more information on using the DescribeAssociation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAssociationRequest method. +// req, resp := client.DescribeAssociationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAssociation +func (c *SSM) DescribeAssociationRequest(input *DescribeAssociationInput) (req *request.Request, output *DescribeAssociationOutput) { + op := &request.Operation{ + Name: opDescribeAssociation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAssociationInput{} + } + + output = &DescribeAssociationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAssociation API operation for Amazon Simple Systems Manager (SSM). +// +// Describes the association for the specified target or instance. If you created +// the association by using the Targets parameter, then you must retrieve the +// association by using the association ID. If you created the association by +// specifying an instance ID and a Systems Manager document, then you retrieve +// the association by specifying the document name and the instance ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeAssociation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// * ErrCodeInvalidAssociationVersion "InvalidAssociationVersion" +// The version you specified is not valid. Use ListAssociationVersions to view +// all versions of an association according to the association ID. Or, use the +// $LATEST parameter to view the latest version of the association. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
+// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAssociation +func (c *SSM) DescribeAssociation(input *DescribeAssociationInput) (*DescribeAssociationOutput, error) { + req, out := c.DescribeAssociationRequest(input) + return out, req.Send() +} + +// DescribeAssociationWithContext is the same as DescribeAssociation with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAssociation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeAssociationWithContext(ctx aws.Context, input *DescribeAssociationInput, opts ...request.Option) (*DescribeAssociationOutput, error) { + req, out := c.DescribeAssociationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeAutomationExecutions = "DescribeAutomationExecutions" + +// DescribeAutomationExecutionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAutomationExecutions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAutomationExecutions for more information on using the DescribeAutomationExecutions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAutomationExecutionsRequest method. +// req, resp := client.DescribeAutomationExecutionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAutomationExecutions +func (c *SSM) DescribeAutomationExecutionsRequest(input *DescribeAutomationExecutionsInput) (req *request.Request, output *DescribeAutomationExecutionsOutput) { + op := &request.Operation{ + Name: opDescribeAutomationExecutions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAutomationExecutionsInput{} + } + + output = &DescribeAutomationExecutionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAutomationExecutions API operation for Amazon Simple Systems Manager (SSM). 
+// +// Provides details about all active and terminated Automation executions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeAutomationExecutions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// * ErrCodeInvalidFilterValue "InvalidFilterValue" +// The filter value is not valid. Verify the value and try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAutomationExecutions +func (c *SSM) DescribeAutomationExecutions(input *DescribeAutomationExecutionsInput) (*DescribeAutomationExecutionsOutput, error) { + req, out := c.DescribeAutomationExecutionsRequest(input) + return out, req.Send() +} + +// DescribeAutomationExecutionsWithContext is the same as DescribeAutomationExecutions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAutomationExecutions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeAutomationExecutionsWithContext(ctx aws.Context, input *DescribeAutomationExecutionsInput, opts ...request.Option) (*DescribeAutomationExecutionsOutput, error) { + req, out := c.DescribeAutomationExecutionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeAutomationStepExecutions = "DescribeAutomationStepExecutions" + +// DescribeAutomationStepExecutionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAutomationStepExecutions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAutomationStepExecutions for more information on using the DescribeAutomationStepExecutions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAutomationStepExecutionsRequest method. 
+// req, resp := client.DescribeAutomationStepExecutionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAutomationStepExecutions +func (c *SSM) DescribeAutomationStepExecutionsRequest(input *DescribeAutomationStepExecutionsInput) (req *request.Request, output *DescribeAutomationStepExecutionsOutput) { + op := &request.Operation{ + Name: opDescribeAutomationStepExecutions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAutomationStepExecutionsInput{} + } + + output = &DescribeAutomationStepExecutionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAutomationStepExecutions API operation for Amazon Simple Systems Manager (SSM). +// +// Information about all active and terminated step executions in an Automation +// workflow. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeAutomationStepExecutions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAutomationExecutionNotFoundException "AutomationExecutionNotFoundException" +// There is no automation execution information for the requested automation +// execution ID. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// * ErrCodeInvalidFilterValue "InvalidFilterValue" +// The filter value is not valid. Verify the value and try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAutomationStepExecutions +func (c *SSM) DescribeAutomationStepExecutions(input *DescribeAutomationStepExecutionsInput) (*DescribeAutomationStepExecutionsOutput, error) { + req, out := c.DescribeAutomationStepExecutionsRequest(input) + return out, req.Send() +} + +// DescribeAutomationStepExecutionsWithContext is the same as DescribeAutomationStepExecutions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAutomationStepExecutions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeAutomationStepExecutionsWithContext(ctx aws.Context, input *DescribeAutomationStepExecutionsInput, opts ...request.Option) (*DescribeAutomationStepExecutionsOutput, error) { + req, out := c.DescribeAutomationStepExecutionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeAvailablePatches = "DescribeAvailablePatches" + +// DescribeAvailablePatchesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAvailablePatches operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAvailablePatches for more information on using the DescribeAvailablePatches +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAvailablePatchesRequest method. +// req, resp := client.DescribeAvailablePatchesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAvailablePatches +func (c *SSM) DescribeAvailablePatchesRequest(input *DescribeAvailablePatchesInput) (req *request.Request, output *DescribeAvailablePatchesOutput) { + op := &request.Operation{ + Name: opDescribeAvailablePatches, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAvailablePatchesInput{} + } + + output = &DescribeAvailablePatchesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAvailablePatches API operation for Amazon Simple Systems Manager (SSM). +// +// Lists all patches that could possibly be included in a patch baseline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeAvailablePatches for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAvailablePatches +func (c *SSM) DescribeAvailablePatches(input *DescribeAvailablePatchesInput) (*DescribeAvailablePatchesOutput, error) { + req, out := c.DescribeAvailablePatchesRequest(input) + return out, req.Send() +} + +// DescribeAvailablePatchesWithContext is the same as DescribeAvailablePatches with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAvailablePatches for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeAvailablePatchesWithContext(ctx aws.Context, input *DescribeAvailablePatchesInput, opts ...request.Option) (*DescribeAvailablePatchesOutput, error) { + req, out := c.DescribeAvailablePatchesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeDocument = "DescribeDocument" + +// DescribeDocumentRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDocument operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
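+//
+// Illustrative sketch only: many callers can use the one-shot DescribeDocument
+// wrapper instead of building the request by hand, assuming "client" is an
+// initialized *SSM client; the document name is a placeholder.
+//
+//    out, err := client.DescribeDocument(&ssm.DescribeDocumentInput{
+//        Name: aws.String("AWS-RunShellScript"),
+//    })
+//    if err == nil && out.Document != nil {
+//        fmt.Println(aws.StringValue(out.Document.Name), aws.StringValue(out.Document.DocumentVersion))
+//    }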
+// +// See DescribeDocument for more information on using the DescribeDocument +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeDocumentRequest method. +// req, resp := client.DescribeDocumentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeDocument +func (c *SSM) DescribeDocumentRequest(input *DescribeDocumentInput) (req *request.Request, output *DescribeDocumentOutput) { + op := &request.Operation{ + Name: opDescribeDocument, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeDocumentInput{} + } + + output = &DescribeDocumentOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDocument API operation for Amazon Simple Systems Manager (SSM). +// +// Describes the specified Systems Manager document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeDocument for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeDocument +func (c *SSM) DescribeDocument(input *DescribeDocumentInput) (*DescribeDocumentOutput, error) { + req, out := c.DescribeDocumentRequest(input) + return out, req.Send() +} + +// DescribeDocumentWithContext is the same as DescribeDocument with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDocument for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeDocumentWithContext(ctx aws.Context, input *DescribeDocumentInput, opts ...request.Option) (*DescribeDocumentOutput, error) { + req, out := c.DescribeDocumentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeDocumentPermission = "DescribeDocumentPermission" + +// DescribeDocumentPermissionRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDocumentPermission operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeDocumentPermission for more information on using the DescribeDocumentPermission +// API call, and error handling. 
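+//
+// Illustrative sketch only: listing the AWS account IDs a document is shared with
+// via the one-shot wrapper, assuming "client" is an initialized *SSM client; the
+// document name is a placeholder.
+//
+//    out, err := client.DescribeDocumentPermission(&ssm.DescribeDocumentPermissionInput{
+//        Name:           aws.String("Example-SharedDocument"),
+//        PermissionType: aws.String(ssm.DocumentPermissionTypeShare),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValueSlice(out.AccountIds))
+//    }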
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeDocumentPermissionRequest method. +// req, resp := client.DescribeDocumentPermissionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeDocumentPermission +func (c *SSM) DescribeDocumentPermissionRequest(input *DescribeDocumentPermissionInput) (req *request.Request, output *DescribeDocumentPermissionOutput) { + op := &request.Operation{ + Name: opDescribeDocumentPermission, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeDocumentPermissionInput{} + } + + output = &DescribeDocumentPermissionOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDocumentPermission API operation for Amazon Simple Systems Manager (SSM). +// +// Describes the permissions for a Systems Manager document. If you created +// the document, you are the owner. If a document is shared, it can either be +// shared privately (by specifying a user's AWS account ID) or publicly (All). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeDocumentPermission for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidPermissionType "InvalidPermissionType" +// The permission type is not supported. Share is the only supported permission +// type. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeDocumentPermission +func (c *SSM) DescribeDocumentPermission(input *DescribeDocumentPermissionInput) (*DescribeDocumentPermissionOutput, error) { + req, out := c.DescribeDocumentPermissionRequest(input) + return out, req.Send() +} + +// DescribeDocumentPermissionWithContext is the same as DescribeDocumentPermission with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDocumentPermission for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeDocumentPermissionWithContext(ctx aws.Context, input *DescribeDocumentPermissionInput, opts ...request.Option) (*DescribeDocumentPermissionOutput, error) { + req, out := c.DescribeDocumentPermissionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeEffectiveInstanceAssociations = "DescribeEffectiveInstanceAssociations" + +// DescribeEffectiveInstanceAssociationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeEffectiveInstanceAssociations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeEffectiveInstanceAssociations for more information on using the DescribeEffectiveInstanceAssociations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeEffectiveInstanceAssociationsRequest method. +// req, resp := client.DescribeEffectiveInstanceAssociationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeEffectiveInstanceAssociations +func (c *SSM) DescribeEffectiveInstanceAssociationsRequest(input *DescribeEffectiveInstanceAssociationsInput) (req *request.Request, output *DescribeEffectiveInstanceAssociationsOutput) { + op := &request.Operation{ + Name: opDescribeEffectiveInstanceAssociations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeEffectiveInstanceAssociationsInput{} + } + + output = &DescribeEffectiveInstanceAssociationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeEffectiveInstanceAssociations API operation for Amazon Simple Systems Manager (SSM). +// +// All associations for the instance(s). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeEffectiveInstanceAssociations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeEffectiveInstanceAssociations +func (c *SSM) DescribeEffectiveInstanceAssociations(input *DescribeEffectiveInstanceAssociationsInput) (*DescribeEffectiveInstanceAssociationsOutput, error) { + req, out := c.DescribeEffectiveInstanceAssociationsRequest(input) + return out, req.Send() +} + +// DescribeEffectiveInstanceAssociationsWithContext is the same as DescribeEffectiveInstanceAssociations with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeEffectiveInstanceAssociations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeEffectiveInstanceAssociationsWithContext(ctx aws.Context, input *DescribeEffectiveInstanceAssociationsInput, opts ...request.Option) (*DescribeEffectiveInstanceAssociationsOutput, error) { + req, out := c.DescribeEffectiveInstanceAssociationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeEffectivePatchesForPatchBaseline = "DescribeEffectivePatchesForPatchBaseline" + +// DescribeEffectivePatchesForPatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the DescribeEffectivePatchesForPatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeEffectivePatchesForPatchBaseline for more information on using the DescribeEffectivePatchesForPatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeEffectivePatchesForPatchBaselineRequest method. +// req, resp := client.DescribeEffectivePatchesForPatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeEffectivePatchesForPatchBaseline +func (c *SSM) DescribeEffectivePatchesForPatchBaselineRequest(input *DescribeEffectivePatchesForPatchBaselineInput) (req *request.Request, output *DescribeEffectivePatchesForPatchBaselineOutput) { + op := &request.Operation{ + Name: opDescribeEffectivePatchesForPatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeEffectivePatchesForPatchBaselineInput{} + } + + output = &DescribeEffectivePatchesForPatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeEffectivePatchesForPatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the current effective patches (the patch and the approval state) +// for the specified patch baseline. Note that this API applies only to Windows +// patch baselines. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeEffectivePatchesForPatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). 
+// +// * ErrCodeUnsupportedOperatingSystem "UnsupportedOperatingSystem" +// The operating systems you specified is not supported, or the operation is +// not supported for the operating system. Valid operating systems include: +// Windows, AmazonLinux, RedhatEnterpriseLinux, and Ubuntu. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeEffectivePatchesForPatchBaseline +func (c *SSM) DescribeEffectivePatchesForPatchBaseline(input *DescribeEffectivePatchesForPatchBaselineInput) (*DescribeEffectivePatchesForPatchBaselineOutput, error) { + req, out := c.DescribeEffectivePatchesForPatchBaselineRequest(input) + return out, req.Send() +} + +// DescribeEffectivePatchesForPatchBaselineWithContext is the same as DescribeEffectivePatchesForPatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeEffectivePatchesForPatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeEffectivePatchesForPatchBaselineWithContext(ctx aws.Context, input *DescribeEffectivePatchesForPatchBaselineInput, opts ...request.Option) (*DescribeEffectivePatchesForPatchBaselineOutput, error) { + req, out := c.DescribeEffectivePatchesForPatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstanceAssociationsStatus = "DescribeInstanceAssociationsStatus" + +// DescribeInstanceAssociationsStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInstanceAssociationsStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInstanceAssociationsStatus for more information on using the DescribeInstanceAssociationsStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInstanceAssociationsStatusRequest method. +// req, resp := client.DescribeInstanceAssociationsStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstanceAssociationsStatus +func (c *SSM) DescribeInstanceAssociationsStatusRequest(input *DescribeInstanceAssociationsStatusInput) (req *request.Request, output *DescribeInstanceAssociationsStatusOutput) { + op := &request.Operation{ + Name: opDescribeInstanceAssociationsStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstanceAssociationsStatusInput{} + } + + output = &DescribeInstanceAssociationsStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInstanceAssociationsStatus API operation for Amazon Simple Systems Manager (SSM). 
+// +// The status of the associations for the instance(s). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInstanceAssociationsStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstanceAssociationsStatus +func (c *SSM) DescribeInstanceAssociationsStatus(input *DescribeInstanceAssociationsStatusInput) (*DescribeInstanceAssociationsStatusOutput, error) { + req, out := c.DescribeInstanceAssociationsStatusRequest(input) + return out, req.Send() +} + +// DescribeInstanceAssociationsStatusWithContext is the same as DescribeInstanceAssociationsStatus with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceAssociationsStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInstanceAssociationsStatusWithContext(ctx aws.Context, input *DescribeInstanceAssociationsStatusInput, opts ...request.Option) (*DescribeInstanceAssociationsStatusOutput, error) { + req, out := c.DescribeInstanceAssociationsStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstanceInformation = "DescribeInstanceInformation" + +// DescribeInstanceInformationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInstanceInformation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInstanceInformation for more information on using the DescribeInstanceInformation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInstanceInformationRequest method. 
+// req, resp := client.DescribeInstanceInformationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstanceInformation +func (c *SSM) DescribeInstanceInformationRequest(input *DescribeInstanceInformationInput) (req *request.Request, output *DescribeInstanceInformationOutput) { + op := &request.Operation{ + Name: opDescribeInstanceInformation, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeInstanceInformationInput{} + } + + output = &DescribeInstanceInformationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInstanceInformation API operation for Amazon Simple Systems Manager (SSM). +// +// Describes one or more of your instances. You can use this to get information +// about instances like the operating system platform, the SSM Agent version +// (Linux), status etc. If you specify one or more instance IDs, it returns +// information for those instances. If you do not specify instance IDs, it returns +// information for all your instances. If you specify an instance ID that is +// not valid or an instance that you do not own, you receive an error. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInstanceInformation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInvalidInstanceInformationFilterValue "InvalidInstanceInformationFilterValue" +// The specified filter value is not valid. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstanceInformation +func (c *SSM) DescribeInstanceInformation(input *DescribeInstanceInformationInput) (*DescribeInstanceInformationOutput, error) { + req, out := c.DescribeInstanceInformationRequest(input) + return out, req.Send() +} + +// DescribeInstanceInformationWithContext is the same as DescribeInstanceInformation with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceInformation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInstanceInformationWithContext(ctx aws.Context, input *DescribeInstanceInformationInput, opts ...request.Option) (*DescribeInstanceInformationOutput, error) { + req, out := c.DescribeInstanceInformationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeInstanceInformationPages iterates over the pages of a DescribeInstanceInformation operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeInstanceInformation method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeInstanceInformation operation. +// pageNum := 0 +// err := client.DescribeInstanceInformationPages(params, +// func(page *DescribeInstanceInformationOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) DescribeInstanceInformationPages(input *DescribeInstanceInformationInput, fn func(*DescribeInstanceInformationOutput, bool) bool) error { + return c.DescribeInstanceInformationPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeInstanceInformationPagesWithContext same as DescribeInstanceInformationPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInstanceInformationPagesWithContext(ctx aws.Context, input *DescribeInstanceInformationInput, fn func(*DescribeInstanceInformationOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeInstanceInformationInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeInstanceInformationRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeInstanceInformationOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeInstancePatchStates = "DescribeInstancePatchStates" + +// DescribeInstancePatchStatesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInstancePatchStates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInstancePatchStates for more information on using the DescribeInstancePatchStates +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInstancePatchStatesRequest method. 
+// req, resp := client.DescribeInstancePatchStatesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstancePatchStates +func (c *SSM) DescribeInstancePatchStatesRequest(input *DescribeInstancePatchStatesInput) (req *request.Request, output *DescribeInstancePatchStatesOutput) { + op := &request.Operation{ + Name: opDescribeInstancePatchStates, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstancePatchStatesInput{} + } + + output = &DescribeInstancePatchStatesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInstancePatchStates API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the high-level patch state of one or more instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInstancePatchStates for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstancePatchStates +func (c *SSM) DescribeInstancePatchStates(input *DescribeInstancePatchStatesInput) (*DescribeInstancePatchStatesOutput, error) { + req, out := c.DescribeInstancePatchStatesRequest(input) + return out, req.Send() +} + +// DescribeInstancePatchStatesWithContext is the same as DescribeInstancePatchStates with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstancePatchStates for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInstancePatchStatesWithContext(ctx aws.Context, input *DescribeInstancePatchStatesInput, opts ...request.Option) (*DescribeInstancePatchStatesOutput, error) { + req, out := c.DescribeInstancePatchStatesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstancePatchStatesForPatchGroup = "DescribeInstancePatchStatesForPatchGroup" + +// DescribeInstancePatchStatesForPatchGroupRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInstancePatchStatesForPatchGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInstancePatchStatesForPatchGroup for more information on using the DescribeInstancePatchStatesForPatchGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DescribeInstancePatchStatesForPatchGroupRequest method. +// req, resp := client.DescribeInstancePatchStatesForPatchGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstancePatchStatesForPatchGroup +func (c *SSM) DescribeInstancePatchStatesForPatchGroupRequest(input *DescribeInstancePatchStatesForPatchGroupInput) (req *request.Request, output *DescribeInstancePatchStatesForPatchGroupOutput) { + op := &request.Operation{ + Name: opDescribeInstancePatchStatesForPatchGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstancePatchStatesForPatchGroupInput{} + } + + output = &DescribeInstancePatchStatesForPatchGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInstancePatchStatesForPatchGroup API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the high-level patch state for the instances in the specified patch +// group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInstancePatchStatesForPatchGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. Verify the you entered the correct name and +// try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstancePatchStatesForPatchGroup +func (c *SSM) DescribeInstancePatchStatesForPatchGroup(input *DescribeInstancePatchStatesForPatchGroupInput) (*DescribeInstancePatchStatesForPatchGroupOutput, error) { + req, out := c.DescribeInstancePatchStatesForPatchGroupRequest(input) + return out, req.Send() +} + +// DescribeInstancePatchStatesForPatchGroupWithContext is the same as DescribeInstancePatchStatesForPatchGroup with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstancePatchStatesForPatchGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInstancePatchStatesForPatchGroupWithContext(ctx aws.Context, input *DescribeInstancePatchStatesForPatchGroupInput, opts ...request.Option) (*DescribeInstancePatchStatesForPatchGroupOutput, error) { + req, out := c.DescribeInstancePatchStatesForPatchGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstancePatches = "DescribeInstancePatches" + +// DescribeInstancePatchesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInstancePatches operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInstancePatches for more information on using the DescribeInstancePatches +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInstancePatchesRequest method. +// req, resp := client.DescribeInstancePatchesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstancePatches +func (c *SSM) DescribeInstancePatchesRequest(input *DescribeInstancePatchesInput) (req *request.Request, output *DescribeInstancePatchesOutput) { + op := &request.Operation{ + Name: opDescribeInstancePatches, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstancePatchesInput{} + } + + output = &DescribeInstancePatchesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInstancePatches API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves information about the patches on the specified instance and their +// state relative to the patch baseline being used for the instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInstancePatches for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. Verify the you entered the correct name and +// try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInstancePatches +func (c *SSM) DescribeInstancePatches(input *DescribeInstancePatchesInput) (*DescribeInstancePatchesOutput, error) { + req, out := c.DescribeInstancePatchesRequest(input) + return out, req.Send() +} + +// DescribeInstancePatchesWithContext is the same as DescribeInstancePatches with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstancePatches for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInstancePatchesWithContext(ctx aws.Context, input *DescribeInstancePatchesInput, opts ...request.Option) (*DescribeInstancePatchesOutput, error) { + req, out := c.DescribeInstancePatchesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInventoryDeletions = "DescribeInventoryDeletions" + +// DescribeInventoryDeletionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInventoryDeletions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInventoryDeletions for more information on using the DescribeInventoryDeletions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInventoryDeletionsRequest method. +// req, resp := client.DescribeInventoryDeletionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInventoryDeletions +func (c *SSM) DescribeInventoryDeletionsRequest(input *DescribeInventoryDeletionsInput) (req *request.Request, output *DescribeInventoryDeletionsOutput) { + op := &request.Operation{ + Name: opDescribeInventoryDeletions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInventoryDeletionsInput{} + } + + output = &DescribeInventoryDeletionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInventoryDeletions API operation for Amazon Simple Systems Manager (SSM). +// +// Describes a specific delete inventory operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInventoryDeletions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDeletionIdException "InvalidDeletionIdException" +// The ID specified for the delete operation does not exist or is not valide. +// Verify the ID and try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInventoryDeletions +func (c *SSM) DescribeInventoryDeletions(input *DescribeInventoryDeletionsInput) (*DescribeInventoryDeletionsOutput, error) { + req, out := c.DescribeInventoryDeletionsRequest(input) + return out, req.Send() +} + +// DescribeInventoryDeletionsWithContext is the same as DescribeInventoryDeletions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInventoryDeletions for details on how to use this API operation. 
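+//
+//    // Illustrative sketch (not generated SDK output): bounding the call with a
+//    // timeout. Assumes "client" is an *SSM value and "params" is a
+//    // *DescribeInventoryDeletionsInput, as in the examples above, and that the
+//    // "context", "time" and "fmt" packages are imported.
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//
+//    resp, err := client.DescribeInventoryDeletionsWithContext(ctx, params)
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }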
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInventoryDeletionsWithContext(ctx aws.Context, input *DescribeInventoryDeletionsInput, opts ...request.Option) (*DescribeInventoryDeletionsOutput, error) { + req, out := c.DescribeInventoryDeletionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeMaintenanceWindowExecutionTaskInvocations = "DescribeMaintenanceWindowExecutionTaskInvocations" + +// DescribeMaintenanceWindowExecutionTaskInvocationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowExecutionTaskInvocations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowExecutionTaskInvocations for more information on using the DescribeMaintenanceWindowExecutionTaskInvocations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowExecutionTaskInvocationsRequest method. +// req, resp := client.DescribeMaintenanceWindowExecutionTaskInvocationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowExecutionTaskInvocations +func (c *SSM) DescribeMaintenanceWindowExecutionTaskInvocationsRequest(input *DescribeMaintenanceWindowExecutionTaskInvocationsInput) (req *request.Request, output *DescribeMaintenanceWindowExecutionTaskInvocationsOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowExecutionTaskInvocations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowExecutionTaskInvocationsInput{} + } + + output = &DescribeMaintenanceWindowExecutionTaskInvocationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowExecutionTaskInvocations API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the individual task executions (one per target) for a particular +// task executed as part of a Maintenance Window execution. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowExecutionTaskInvocations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. 
+// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowExecutionTaskInvocations +func (c *SSM) DescribeMaintenanceWindowExecutionTaskInvocations(input *DescribeMaintenanceWindowExecutionTaskInvocationsInput) (*DescribeMaintenanceWindowExecutionTaskInvocationsOutput, error) { + req, out := c.DescribeMaintenanceWindowExecutionTaskInvocationsRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowExecutionTaskInvocationsWithContext is the same as DescribeMaintenanceWindowExecutionTaskInvocations with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowExecutionTaskInvocations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowExecutionTaskInvocationsWithContext(ctx aws.Context, input *DescribeMaintenanceWindowExecutionTaskInvocationsInput, opts ...request.Option) (*DescribeMaintenanceWindowExecutionTaskInvocationsOutput, error) { + req, out := c.DescribeMaintenanceWindowExecutionTaskInvocationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeMaintenanceWindowExecutionTasks = "DescribeMaintenanceWindowExecutionTasks" + +// DescribeMaintenanceWindowExecutionTasksRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowExecutionTasks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowExecutionTasks for more information on using the DescribeMaintenanceWindowExecutionTasks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowExecutionTasksRequest method. 
+// req, resp := client.DescribeMaintenanceWindowExecutionTasksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowExecutionTasks +func (c *SSM) DescribeMaintenanceWindowExecutionTasksRequest(input *DescribeMaintenanceWindowExecutionTasksInput) (req *request.Request, output *DescribeMaintenanceWindowExecutionTasksOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowExecutionTasks, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowExecutionTasksInput{} + } + + output = &DescribeMaintenanceWindowExecutionTasksOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowExecutionTasks API operation for Amazon Simple Systems Manager (SSM). +// +// For a given Maintenance Window execution, lists the tasks that were executed. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowExecutionTasks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowExecutionTasks +func (c *SSM) DescribeMaintenanceWindowExecutionTasks(input *DescribeMaintenanceWindowExecutionTasksInput) (*DescribeMaintenanceWindowExecutionTasksOutput, error) { + req, out := c.DescribeMaintenanceWindowExecutionTasksRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowExecutionTasksWithContext is the same as DescribeMaintenanceWindowExecutionTasks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowExecutionTasks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowExecutionTasksWithContext(ctx aws.Context, input *DescribeMaintenanceWindowExecutionTasksInput, opts ...request.Option) (*DescribeMaintenanceWindowExecutionTasksOutput, error) { + req, out := c.DescribeMaintenanceWindowExecutionTasksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeMaintenanceWindowExecutions = "DescribeMaintenanceWindowExecutions" + +// DescribeMaintenanceWindowExecutionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowExecutions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowExecutions for more information on using the DescribeMaintenanceWindowExecutions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowExecutionsRequest method. +// req, resp := client.DescribeMaintenanceWindowExecutionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowExecutions +func (c *SSM) DescribeMaintenanceWindowExecutionsRequest(input *DescribeMaintenanceWindowExecutionsInput) (req *request.Request, output *DescribeMaintenanceWindowExecutionsOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowExecutions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowExecutionsInput{} + } + + output = &DescribeMaintenanceWindowExecutionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowExecutions API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the executions of a Maintenance Window. This includes information about +// when the Maintenance Window was scheduled to be active, and information about +// tasks registered and run with the Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowExecutions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowExecutions +func (c *SSM) DescribeMaintenanceWindowExecutions(input *DescribeMaintenanceWindowExecutionsInput) (*DescribeMaintenanceWindowExecutionsOutput, error) { + req, out := c.DescribeMaintenanceWindowExecutionsRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowExecutionsWithContext is the same as DescribeMaintenanceWindowExecutions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowExecutions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowExecutionsWithContext(ctx aws.Context, input *DescribeMaintenanceWindowExecutionsInput, opts ...request.Option) (*DescribeMaintenanceWindowExecutionsOutput, error) { + req, out := c.DescribeMaintenanceWindowExecutionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeMaintenanceWindowTargets = "DescribeMaintenanceWindowTargets" + +// DescribeMaintenanceWindowTargetsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowTargets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowTargets for more information on using the DescribeMaintenanceWindowTargets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowTargetsRequest method. +// req, resp := client.DescribeMaintenanceWindowTargetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowTargets +func (c *SSM) DescribeMaintenanceWindowTargetsRequest(input *DescribeMaintenanceWindowTargetsInput) (req *request.Request, output *DescribeMaintenanceWindowTargetsOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowTargets, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowTargetsInput{} + } + + output = &DescribeMaintenanceWindowTargetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowTargets API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the targets registered with the Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowTargets for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowTargets +func (c *SSM) DescribeMaintenanceWindowTargets(input *DescribeMaintenanceWindowTargetsInput) (*DescribeMaintenanceWindowTargetsOutput, error) { + req, out := c.DescribeMaintenanceWindowTargetsRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowTargetsWithContext is the same as DescribeMaintenanceWindowTargets with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowTargets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowTargetsWithContext(ctx aws.Context, input *DescribeMaintenanceWindowTargetsInput, opts ...request.Option) (*DescribeMaintenanceWindowTargetsOutput, error) { + req, out := c.DescribeMaintenanceWindowTargetsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeMaintenanceWindowTasks = "DescribeMaintenanceWindowTasks" + +// DescribeMaintenanceWindowTasksRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowTasks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowTasks for more information on using the DescribeMaintenanceWindowTasks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowTasksRequest method. +// req, resp := client.DescribeMaintenanceWindowTasksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowTasks +func (c *SSM) DescribeMaintenanceWindowTasksRequest(input *DescribeMaintenanceWindowTasksInput) (req *request.Request, output *DescribeMaintenanceWindowTasksOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowTasks, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowTasksInput{} + } + + output = &DescribeMaintenanceWindowTasksOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowTasks API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the tasks in a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowTasks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowTasks +func (c *SSM) DescribeMaintenanceWindowTasks(input *DescribeMaintenanceWindowTasksInput) (*DescribeMaintenanceWindowTasksOutput, error) { + req, out := c.DescribeMaintenanceWindowTasksRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowTasksWithContext is the same as DescribeMaintenanceWindowTasks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowTasks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowTasksWithContext(ctx aws.Context, input *DescribeMaintenanceWindowTasksInput, opts ...request.Option) (*DescribeMaintenanceWindowTasksOutput, error) { + req, out := c.DescribeMaintenanceWindowTasksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeMaintenanceWindows = "DescribeMaintenanceWindows" + +// DescribeMaintenanceWindowsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindows operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindows for more information on using the DescribeMaintenanceWindows +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowsRequest method. +// req, resp := client.DescribeMaintenanceWindowsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindows +func (c *SSM) DescribeMaintenanceWindowsRequest(input *DescribeMaintenanceWindowsInput) (req *request.Request, output *DescribeMaintenanceWindowsOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindows, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowsInput{} + } + + output = &DescribeMaintenanceWindowsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindows API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the Maintenance Windows in an AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindows for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindows +func (c *SSM) DescribeMaintenanceWindows(input *DescribeMaintenanceWindowsInput) (*DescribeMaintenanceWindowsOutput, error) { + req, out := c.DescribeMaintenanceWindowsRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowsWithContext is the same as DescribeMaintenanceWindows with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindows for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowsWithContext(ctx aws.Context, input *DescribeMaintenanceWindowsInput, opts ...request.Option) (*DescribeMaintenanceWindowsOutput, error) { + req, out := c.DescribeMaintenanceWindowsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeParameters = "DescribeParameters" + +// DescribeParametersRequest generates a "aws/request.Request" representing the +// client's request for the DescribeParameters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeParameters for more information on using the DescribeParameters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeParametersRequest method. +// req, resp := client.DescribeParametersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeParameters +func (c *SSM) DescribeParametersRequest(input *DescribeParametersInput) (req *request.Request, output *DescribeParametersOutput) { + op := &request.Operation{ + Name: opDescribeParameters, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeParametersInput{} + } + + output = &DescribeParametersOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeParameters API operation for Amazon Simple Systems Manager (SSM). +// +// Get information about a parameter. +// +// Request results are returned on a best-effort basis. If you specify MaxResults +// in the request, the response includes information up to the limit specified. +// The number of items returned, however, can be between zero and the value +// of MaxResults. If the service reaches an internal limit while processing +// the results, it stops the operation and returns the matching values up to +// that point and a NextToken. You can specify the NextToken in a subsequent +// call to get the next set of results. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeParameters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// * ErrCodeInvalidFilterOption "InvalidFilterOption" +// The specified filter option is not valid. Valid options are Equals and BeginsWith. +// For Path filter, valid options are Recursive and OneLevel. +// +// * ErrCodeInvalidFilterValue "InvalidFilterValue" +// The filter value is not valid. Verify the value and try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeParameters +func (c *SSM) DescribeParameters(input *DescribeParametersInput) (*DescribeParametersOutput, error) { + req, out := c.DescribeParametersRequest(input) + return out, req.Send() +} + +// DescribeParametersWithContext is the same as DescribeParameters with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeParameters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeParametersWithContext(ctx aws.Context, input *DescribeParametersInput, opts ...request.Option) (*DescribeParametersOutput, error) { + req, out := c.DescribeParametersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeParametersPages iterates over the pages of a DescribeParameters operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeParameters method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeParameters operation. +// pageNum := 0 +// err := client.DescribeParametersPages(params, +// func(page *DescribeParametersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) DescribeParametersPages(input *DescribeParametersInput, fn func(*DescribeParametersOutput, bool) bool) error { + return c.DescribeParametersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeParametersPagesWithContext same as DescribeParametersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) DescribeParametersPagesWithContext(ctx aws.Context, input *DescribeParametersInput, fn func(*DescribeParametersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeParametersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeParametersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeParametersOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribePatchBaselines = "DescribePatchBaselines" + +// DescribePatchBaselinesRequest generates a "aws/request.Request" representing the +// client's request for the DescribePatchBaselines operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribePatchBaselines for more information on using the DescribePatchBaselines +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribePatchBaselinesRequest method. +// req, resp := client.DescribePatchBaselinesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribePatchBaselines +func (c *SSM) DescribePatchBaselinesRequest(input *DescribePatchBaselinesInput) (req *request.Request, output *DescribePatchBaselinesOutput) { + op := &request.Operation{ + Name: opDescribePatchBaselines, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribePatchBaselinesInput{} + } + + output = &DescribePatchBaselinesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribePatchBaselines API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the patch baselines in your AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribePatchBaselines for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribePatchBaselines +func (c *SSM) DescribePatchBaselines(input *DescribePatchBaselinesInput) (*DescribePatchBaselinesOutput, error) { + req, out := c.DescribePatchBaselinesRequest(input) + return out, req.Send() +} + +// DescribePatchBaselinesWithContext is the same as DescribePatchBaselines with the addition of +// the ability to pass a context and additional request options. +// +// See DescribePatchBaselines for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribePatchBaselinesWithContext(ctx aws.Context, input *DescribePatchBaselinesInput, opts ...request.Option) (*DescribePatchBaselinesOutput, error) { + req, out := c.DescribePatchBaselinesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribePatchGroupState = "DescribePatchGroupState" + +// DescribePatchGroupStateRequest generates a "aws/request.Request" representing the +// client's request for the DescribePatchGroupState operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribePatchGroupState for more information on using the DescribePatchGroupState +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribePatchGroupStateRequest method. +// req, resp := client.DescribePatchGroupStateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribePatchGroupState +func (c *SSM) DescribePatchGroupStateRequest(input *DescribePatchGroupStateInput) (req *request.Request, output *DescribePatchGroupStateOutput) { + op := &request.Operation{ + Name: opDescribePatchGroupState, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribePatchGroupStateInput{} + } + + output = &DescribePatchGroupStateOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribePatchGroupState API operation for Amazon Simple Systems Manager (SSM). +// +// Returns high-level aggregated patch compliance state for a patch group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribePatchGroupState for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribePatchGroupState +func (c *SSM) DescribePatchGroupState(input *DescribePatchGroupStateInput) (*DescribePatchGroupStateOutput, error) { + req, out := c.DescribePatchGroupStateRequest(input) + return out, req.Send() +} + +// DescribePatchGroupStateWithContext is the same as DescribePatchGroupState with the addition of +// the ability to pass a context and additional request options. +// +// See DescribePatchGroupState for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) DescribePatchGroupStateWithContext(ctx aws.Context, input *DescribePatchGroupStateInput, opts ...request.Option) (*DescribePatchGroupStateOutput, error) { + req, out := c.DescribePatchGroupStateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribePatchGroups = "DescribePatchGroups" + +// DescribePatchGroupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribePatchGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribePatchGroups for more information on using the DescribePatchGroups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribePatchGroupsRequest method. +// req, resp := client.DescribePatchGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribePatchGroups +func (c *SSM) DescribePatchGroupsRequest(input *DescribePatchGroupsInput) (req *request.Request, output *DescribePatchGroupsOutput) { + op := &request.Operation{ + Name: opDescribePatchGroups, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribePatchGroupsInput{} + } + + output = &DescribePatchGroupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribePatchGroups API operation for Amazon Simple Systems Manager (SSM). +// +// Lists all patch groups that have been registered with patch baselines. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribePatchGroups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribePatchGroups +func (c *SSM) DescribePatchGroups(input *DescribePatchGroupsInput) (*DescribePatchGroupsOutput, error) { + req, out := c.DescribePatchGroupsRequest(input) + return out, req.Send() +} + +// DescribePatchGroupsWithContext is the same as DescribePatchGroups with the addition of +// the ability to pass a context and additional request options. +// +// See DescribePatchGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribePatchGroupsWithContext(ctx aws.Context, input *DescribePatchGroupsInput, opts ...request.Option) (*DescribePatchGroupsOutput, error) { + req, out := c.DescribePatchGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetAutomationExecution = "GetAutomationExecution" + +// GetAutomationExecutionRequest generates a "aws/request.Request" representing the +// client's request for the GetAutomationExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAutomationExecution for more information on using the GetAutomationExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAutomationExecutionRequest method. +// req, resp := client.GetAutomationExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetAutomationExecution +func (c *SSM) GetAutomationExecutionRequest(input *GetAutomationExecutionInput) (req *request.Request, output *GetAutomationExecutionOutput) { + op := &request.Operation{ + Name: opGetAutomationExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetAutomationExecutionInput{} + } + + output = &GetAutomationExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAutomationExecution API operation for Amazon Simple Systems Manager (SSM). +// +// Get detailed information about a particular Automation execution. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetAutomationExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAutomationExecutionNotFoundException "AutomationExecutionNotFoundException" +// There is no automation execution information for the requested automation +// execution ID. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetAutomationExecution +func (c *SSM) GetAutomationExecution(input *GetAutomationExecutionInput) (*GetAutomationExecutionOutput, error) { + req, out := c.GetAutomationExecutionRequest(input) + return out, req.Send() +} + +// GetAutomationExecutionWithContext is the same as GetAutomationExecution with the addition of +// the ability to pass a context and additional request options. +// +// See GetAutomationExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetAutomationExecutionWithContext(ctx aws.Context, input *GetAutomationExecutionInput, opts ...request.Option) (*GetAutomationExecutionOutput, error) { + req, out := c.GetAutomationExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetCommandInvocation = "GetCommandInvocation" + +// GetCommandInvocationRequest generates a "aws/request.Request" representing the +// client's request for the GetCommandInvocation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCommandInvocation for more information on using the GetCommandInvocation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCommandInvocationRequest method. +// req, resp := client.GetCommandInvocationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetCommandInvocation +func (c *SSM) GetCommandInvocationRequest(input *GetCommandInvocationInput) (req *request.Request, output *GetCommandInvocationOutput) { + op := &request.Operation{ + Name: opGetCommandInvocation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetCommandInvocationInput{} + } + + output = &GetCommandInvocationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCommandInvocation API operation for Amazon Simple Systems Manager (SSM). +// +// Returns detailed information about command execution for an invocation or +// plugin. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetCommandInvocation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidCommandId "InvalidCommandId" +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in a valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidPluginName "InvalidPluginName" +// The plugin name is not valid. +// +// * ErrCodeInvocationDoesNotExist "InvocationDoesNotExist" +// The command ID and instance ID you specified did not match any invocations. +// Verify the command ID and the instance ID and try again.
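The error codes listed above can be inspected with a runtime type assertion to awserr.Error, for example to treat a not-yet-reported invocation as retryable. A minimal sketch with hypothetical command and instance IDs:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.GetCommandInvocation(&ssm.GetCommandInvocationInput{
		CommandId:  aws.String("11111111-2222-3333-4444-555555555555"), // hypothetical IDs
		InstanceId: aws.String("i-0123456789abcdef0"),
	})
	if err != nil {
		// Inspect the service error code, as the doc comment above describes.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == ssm.ErrCodeInvocationDoesNotExist {
			log.Println("invocation not reported yet; retry later")
			return
		}
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.Status))
	fmt.Println(aws.StringValue(out.StandardOutputContent))
}
```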
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetCommandInvocation +func (c *SSM) GetCommandInvocation(input *GetCommandInvocationInput) (*GetCommandInvocationOutput, error) { + req, out := c.GetCommandInvocationRequest(input) + return out, req.Send() +} + +// GetCommandInvocationWithContext is the same as GetCommandInvocation with the addition of +// the ability to pass a context and additional request options. +// +// See GetCommandInvocation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetCommandInvocationWithContext(ctx aws.Context, input *GetCommandInvocationInput, opts ...request.Option) (*GetCommandInvocationOutput, error) { + req, out := c.GetCommandInvocationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetDefaultPatchBaseline = "GetDefaultPatchBaseline" + +// GetDefaultPatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the GetDefaultPatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetDefaultPatchBaseline for more information on using the GetDefaultPatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDefaultPatchBaselineRequest method. +// req, resp := client.GetDefaultPatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDefaultPatchBaseline +func (c *SSM) GetDefaultPatchBaselineRequest(input *GetDefaultPatchBaselineInput) (req *request.Request, output *GetDefaultPatchBaselineOutput) { + op := &request.Operation{ + Name: opGetDefaultPatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDefaultPatchBaselineInput{} + } + + output = &GetDefaultPatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDefaultPatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the default patch baseline. Note that Systems Manager supports +// creating multiple default patch baselines. For example, you can create a +// default patch baseline for each operating system. +// +// If you do not specify an operating system value, the default patch baseline +// for Windows is returned. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetDefaultPatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
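Per the doc comment above, omitting OperatingSystem returns the default Windows patch baseline. A minimal sketch, assuming the BaselineId output field:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	// With no OperatingSystem set, the Windows default baseline is returned
	// (see the doc comment above).
	out, err := svc.GetDefaultPatchBaseline(&ssm.GetDefaultPatchBaselineInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.BaselineId))
}
```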
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDefaultPatchBaseline +func (c *SSM) GetDefaultPatchBaseline(input *GetDefaultPatchBaselineInput) (*GetDefaultPatchBaselineOutput, error) { + req, out := c.GetDefaultPatchBaselineRequest(input) + return out, req.Send() +} + +// GetDefaultPatchBaselineWithContext is the same as GetDefaultPatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See GetDefaultPatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetDefaultPatchBaselineWithContext(ctx aws.Context, input *GetDefaultPatchBaselineInput, opts ...request.Option) (*GetDefaultPatchBaselineOutput, error) { + req, out := c.GetDefaultPatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetDeployablePatchSnapshotForInstance = "GetDeployablePatchSnapshotForInstance" + +// GetDeployablePatchSnapshotForInstanceRequest generates a "aws/request.Request" representing the +// client's request for the GetDeployablePatchSnapshotForInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetDeployablePatchSnapshotForInstance for more information on using the GetDeployablePatchSnapshotForInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDeployablePatchSnapshotForInstanceRequest method. +// req, resp := client.GetDeployablePatchSnapshotForInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDeployablePatchSnapshotForInstance +func (c *SSM) GetDeployablePatchSnapshotForInstanceRequest(input *GetDeployablePatchSnapshotForInstanceInput) (req *request.Request, output *GetDeployablePatchSnapshotForInstanceOutput) { + op := &request.Operation{ + Name: opGetDeployablePatchSnapshotForInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDeployablePatchSnapshotForInstanceInput{} + } + + output = &GetDeployablePatchSnapshotForInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDeployablePatchSnapshotForInstance API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the current snapshot for the patch baseline the instance uses. +// This API is primarily used by the AWS-RunPatchBaseline Systems Manager document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetDeployablePatchSnapshotForInstance for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeUnsupportedOperatingSystem "UnsupportedOperatingSystem" +// The operating systems you specified is not supported, or the operation is +// not supported for the operating system. Valid operating systems include: +// Windows, AmazonLinux, RedhatEnterpriseLinux, and Ubuntu. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDeployablePatchSnapshotForInstance +func (c *SSM) GetDeployablePatchSnapshotForInstance(input *GetDeployablePatchSnapshotForInstanceInput) (*GetDeployablePatchSnapshotForInstanceOutput, error) { + req, out := c.GetDeployablePatchSnapshotForInstanceRequest(input) + return out, req.Send() +} + +// GetDeployablePatchSnapshotForInstanceWithContext is the same as GetDeployablePatchSnapshotForInstance with the addition of +// the ability to pass a context and additional request options. +// +// See GetDeployablePatchSnapshotForInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetDeployablePatchSnapshotForInstanceWithContext(ctx aws.Context, input *GetDeployablePatchSnapshotForInstanceInput, opts ...request.Option) (*GetDeployablePatchSnapshotForInstanceOutput, error) { + req, out := c.GetDeployablePatchSnapshotForInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetDocument = "GetDocument" + +// GetDocumentRequest generates a "aws/request.Request" representing the +// client's request for the GetDocument operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetDocument for more information on using the GetDocument +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDocumentRequest method. +// req, resp := client.GetDocumentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDocument +func (c *SSM) GetDocumentRequest(input *GetDocumentInput) (req *request.Request, output *GetDocumentOutput) { + op := &request.Operation{ + Name: opGetDocument, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDocumentInput{} + } + + output = &GetDocumentOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDocument API operation for Amazon Simple Systems Manager (SSM). +// +// Gets the contents of the specified Systems Manager document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetDocument for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDocument +func (c *SSM) GetDocument(input *GetDocumentInput) (*GetDocumentOutput, error) { + req, out := c.GetDocumentRequest(input) + return out, req.Send() +} + +// GetDocumentWithContext is the same as GetDocument with the addition of +// the ability to pass a context and additional request options. +// +// See GetDocument for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetDocumentWithContext(ctx aws.Context, input *GetDocumentInput, opts ...request.Option) (*GetDocumentOutput, error) { + req, out := c.GetDocumentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetInventory = "GetInventory" + +// GetInventoryRequest generates a "aws/request.Request" representing the +// client's request for the GetInventory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetInventory for more information on using the GetInventory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetInventoryRequest method. +// req, resp := client.GetInventoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetInventory +func (c *SSM) GetInventoryRequest(input *GetInventoryInput) (req *request.Request, output *GetInventoryOutput) { + op := &request.Operation{ + Name: opGetInventory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetInventoryInput{} + } + + output = &GetInventoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetInventory API operation for Amazon Simple Systems Manager (SSM). +// +// Query inventory information. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetInventory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. 
Verify that you entered the correct name and +// try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInvalidTypeNameException "InvalidTypeNameException" +// The parameter type name is not valid. +// +// * ErrCodeInvalidResultAttributeException "InvalidResultAttributeException" +// The specified inventory item result attribute is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetInventory +func (c *SSM) GetInventory(input *GetInventoryInput) (*GetInventoryOutput, error) { + req, out := c.GetInventoryRequest(input) + return out, req.Send() +} + +// GetInventoryWithContext is the same as GetInventory with the addition of +// the ability to pass a context and additional request options. +// +// See GetInventory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetInventoryWithContext(ctx aws.Context, input *GetInventoryInput, opts ...request.Option) (*GetInventoryOutput, error) { + req, out := c.GetInventoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetInventorySchema = "GetInventorySchema" + +// GetInventorySchemaRequest generates a "aws/request.Request" representing the +// client's request for the GetInventorySchema operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetInventorySchema for more information on using the GetInventorySchema +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetInventorySchemaRequest method. +// req, resp := client.GetInventorySchemaRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetInventorySchema +func (c *SSM) GetInventorySchemaRequest(input *GetInventorySchemaInput) (req *request.Request, output *GetInventorySchemaOutput) { + op := &request.Operation{ + Name: opGetInventorySchema, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetInventorySchemaInput{} + } + + output = &GetInventorySchemaOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetInventorySchema API operation for Amazon Simple Systems Manager (SSM). +// +// Return a list of inventory type names for the account, or return a list of +// attribute names for a specific Inventory item type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetInventorySchema for usage and error information.
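Calling GetInventorySchema with an empty input should list the inventory type names known to the account. A tentative sketch; the Schemas and TypeName field names are assumptions about the output shape.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	// With an empty input this returns the inventory type names for the account.
	out, err := svc.GetInventorySchema(&ssm.GetInventorySchemaInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range out.Schemas {
		fmt.Println(aws.StringValue(s.TypeName))
	}
}
```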
+// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidTypeNameException "InvalidTypeNameException" +// The parameter type name is not valid. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetInventorySchema +func (c *SSM) GetInventorySchema(input *GetInventorySchemaInput) (*GetInventorySchemaOutput, error) { + req, out := c.GetInventorySchemaRequest(input) + return out, req.Send() +} + +// GetInventorySchemaWithContext is the same as GetInventorySchema with the addition of +// the ability to pass a context and additional request options. +// +// See GetInventorySchema for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetInventorySchemaWithContext(ctx aws.Context, input *GetInventorySchemaInput, opts ...request.Option) (*GetInventorySchemaOutput, error) { + req, out := c.GetInventorySchemaRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetMaintenanceWindow = "GetMaintenanceWindow" + +// GetMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the GetMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetMaintenanceWindow for more information on using the GetMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetMaintenanceWindowRequest method. +// req, resp := client.GetMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindow +func (c *SSM) GetMaintenanceWindowRequest(input *GetMaintenanceWindowInput) (req *request.Request, output *GetMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opGetMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMaintenanceWindowInput{} + } + + output = &GetMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetMaintenanceWindow for usage and error information. 
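A DoesNotExistException from GetMaintenanceWindow can be distinguished from other failures through awserr.Error, as the generated comments suggest. A sketch with a hypothetical window ID; the Name and Schedule output fields are assumptions.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.GetMaintenanceWindow(&ssm.GetMaintenanceWindowInput{
		WindowId: aws.String("mw-0123456789abcdef0"), // hypothetical window ID
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == ssm.ErrCodeDoesNotExistException {
			log.Fatalf("maintenance window not found: %s", aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Printf("%s runs on schedule %s\n", aws.StringValue(out.Name), aws.StringValue(out.Schedule))
}
```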
+// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindow +func (c *SSM) GetMaintenanceWindow(input *GetMaintenanceWindowInput) (*GetMaintenanceWindowOutput, error) { + req, out := c.GetMaintenanceWindowRequest(input) + return out, req.Send() +} + +// GetMaintenanceWindowWithContext is the same as GetMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See GetMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetMaintenanceWindowWithContext(ctx aws.Context, input *GetMaintenanceWindowInput, opts ...request.Option) (*GetMaintenanceWindowOutput, error) { + req, out := c.GetMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetMaintenanceWindowExecution = "GetMaintenanceWindowExecution" + +// GetMaintenanceWindowExecutionRequest generates a "aws/request.Request" representing the +// client's request for the GetMaintenanceWindowExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetMaintenanceWindowExecution for more information on using the GetMaintenanceWindowExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetMaintenanceWindowExecutionRequest method. +// req, resp := client.GetMaintenanceWindowExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowExecution +func (c *SSM) GetMaintenanceWindowExecutionRequest(input *GetMaintenanceWindowExecutionInput) (req *request.Request, output *GetMaintenanceWindowExecutionOutput) { + op := &request.Operation{ + Name: opGetMaintenanceWindowExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMaintenanceWindowExecutionInput{} + } + + output = &GetMaintenanceWindowExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMaintenanceWindowExecution API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves details about a specific task executed as part of a Maintenance +// Window execution. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetMaintenanceWindowExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowExecution +func (c *SSM) GetMaintenanceWindowExecution(input *GetMaintenanceWindowExecutionInput) (*GetMaintenanceWindowExecutionOutput, error) { + req, out := c.GetMaintenanceWindowExecutionRequest(input) + return out, req.Send() +} + +// GetMaintenanceWindowExecutionWithContext is the same as GetMaintenanceWindowExecution with the addition of +// the ability to pass a context and additional request options. +// +// See GetMaintenanceWindowExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetMaintenanceWindowExecutionWithContext(ctx aws.Context, input *GetMaintenanceWindowExecutionInput, opts ...request.Option) (*GetMaintenanceWindowExecutionOutput, error) { + req, out := c.GetMaintenanceWindowExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetMaintenanceWindowExecutionTask = "GetMaintenanceWindowExecutionTask" + +// GetMaintenanceWindowExecutionTaskRequest generates a "aws/request.Request" representing the +// client's request for the GetMaintenanceWindowExecutionTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetMaintenanceWindowExecutionTask for more information on using the GetMaintenanceWindowExecutionTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetMaintenanceWindowExecutionTaskRequest method. 
+// req, resp := client.GetMaintenanceWindowExecutionTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowExecutionTask +func (c *SSM) GetMaintenanceWindowExecutionTaskRequest(input *GetMaintenanceWindowExecutionTaskInput) (req *request.Request, output *GetMaintenanceWindowExecutionTaskOutput) { + op := &request.Operation{ + Name: opGetMaintenanceWindowExecutionTask, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMaintenanceWindowExecutionTaskInput{} + } + + output = &GetMaintenanceWindowExecutionTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMaintenanceWindowExecutionTask API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the details about a specific task executed as part of a Maintenance +// Window execution. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetMaintenanceWindowExecutionTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowExecutionTask +func (c *SSM) GetMaintenanceWindowExecutionTask(input *GetMaintenanceWindowExecutionTaskInput) (*GetMaintenanceWindowExecutionTaskOutput, error) { + req, out := c.GetMaintenanceWindowExecutionTaskRequest(input) + return out, req.Send() +} + +// GetMaintenanceWindowExecutionTaskWithContext is the same as GetMaintenanceWindowExecutionTask with the addition of +// the ability to pass a context and additional request options. +// +// See GetMaintenanceWindowExecutionTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetMaintenanceWindowExecutionTaskWithContext(ctx aws.Context, input *GetMaintenanceWindowExecutionTaskInput, opts ...request.Option) (*GetMaintenanceWindowExecutionTaskOutput, error) { + req, out := c.GetMaintenanceWindowExecutionTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetMaintenanceWindowExecutionTaskInvocation = "GetMaintenanceWindowExecutionTaskInvocation" + +// GetMaintenanceWindowExecutionTaskInvocationRequest generates a "aws/request.Request" representing the +// client's request for the GetMaintenanceWindowExecutionTaskInvocation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See GetMaintenanceWindowExecutionTaskInvocation for more information on using the GetMaintenanceWindowExecutionTaskInvocation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetMaintenanceWindowExecutionTaskInvocationRequest method. +// req, resp := client.GetMaintenanceWindowExecutionTaskInvocationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowExecutionTaskInvocation +func (c *SSM) GetMaintenanceWindowExecutionTaskInvocationRequest(input *GetMaintenanceWindowExecutionTaskInvocationInput) (req *request.Request, output *GetMaintenanceWindowExecutionTaskInvocationOutput) { + op := &request.Operation{ + Name: opGetMaintenanceWindowExecutionTaskInvocation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMaintenanceWindowExecutionTaskInvocationInput{} + } + + output = &GetMaintenanceWindowExecutionTaskInvocationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMaintenanceWindowExecutionTaskInvocation API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves a task invocation. A task invocation is a specific task executing +// on a specific target. Maintenance Windows report status for all invocations. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetMaintenanceWindowExecutionTaskInvocation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowExecutionTaskInvocation +func (c *SSM) GetMaintenanceWindowExecutionTaskInvocation(input *GetMaintenanceWindowExecutionTaskInvocationInput) (*GetMaintenanceWindowExecutionTaskInvocationOutput, error) { + req, out := c.GetMaintenanceWindowExecutionTaskInvocationRequest(input) + return out, req.Send() +} + +// GetMaintenanceWindowExecutionTaskInvocationWithContext is the same as GetMaintenanceWindowExecutionTaskInvocation with the addition of +// the ability to pass a context and additional request options. +// +// See GetMaintenanceWindowExecutionTaskInvocation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) GetMaintenanceWindowExecutionTaskInvocationWithContext(ctx aws.Context, input *GetMaintenanceWindowExecutionTaskInvocationInput, opts ...request.Option) (*GetMaintenanceWindowExecutionTaskInvocationOutput, error) { + req, out := c.GetMaintenanceWindowExecutionTaskInvocationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetMaintenanceWindowTask = "GetMaintenanceWindowTask" + +// GetMaintenanceWindowTaskRequest generates a "aws/request.Request" representing the +// client's request for the GetMaintenanceWindowTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetMaintenanceWindowTask for more information on using the GetMaintenanceWindowTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetMaintenanceWindowTaskRequest method. +// req, resp := client.GetMaintenanceWindowTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowTask +func (c *SSM) GetMaintenanceWindowTaskRequest(input *GetMaintenanceWindowTaskInput) (req *request.Request, output *GetMaintenanceWindowTaskOutput) { + op := &request.Operation{ + Name: opGetMaintenanceWindowTask, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMaintenanceWindowTaskInput{} + } + + output = &GetMaintenanceWindowTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMaintenanceWindowTask API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the tasks in a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetMaintenanceWindowTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetMaintenanceWindowTask +func (c *SSM) GetMaintenanceWindowTask(input *GetMaintenanceWindowTaskInput) (*GetMaintenanceWindowTaskOutput, error) { + req, out := c.GetMaintenanceWindowTaskRequest(input) + return out, req.Send() +} + +// GetMaintenanceWindowTaskWithContext is the same as GetMaintenanceWindowTask with the addition of +// the ability to pass a context and additional request options. +// +// See GetMaintenanceWindowTask for details on how to use this API operation. 
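+//
+// As an illustrative sketch only (not part of the generated reference
+// documentation), a cancellable call could look like the following; the
+// WindowId and WindowTaskId input field names are assumed here:
+//
+//    // Look up one Maintenance Window task, giving up after 30 seconds.
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := client.GetMaintenanceWindowTaskWithContext(ctx, &ssm.GetMaintenanceWindowTaskInput{
+//        WindowId:     aws.String("mw-0c50858d01EXAMPLE"),
+//        WindowTaskId: aws.String("4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"),
+//    })
+//    if err != nil {
+//        fmt.Println(err) // inspect with awserr.Error type assertions
+//        return
+//    }
+//    fmt.Println(out)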
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetMaintenanceWindowTaskWithContext(ctx aws.Context, input *GetMaintenanceWindowTaskInput, opts ...request.Option) (*GetMaintenanceWindowTaskOutput, error) { + req, out := c.GetMaintenanceWindowTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetParameter = "GetParameter" + +// GetParameterRequest generates a "aws/request.Request" representing the +// client's request for the GetParameter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetParameter for more information on using the GetParameter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetParameterRequest method. +// req, resp := client.GetParameterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParameter +func (c *SSM) GetParameterRequest(input *GetParameterInput) (req *request.Request, output *GetParameterOutput) { + op := &request.Operation{ + Name: opGetParameter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetParameterInput{} + } + + output = &GetParameterOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetParameter API operation for Amazon Simple Systems Manager (SSM). +// +// Get information about a parameter by using the parameter name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetParameter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidKeyId "InvalidKeyId" +// The query key ID is not valid. +// +// * ErrCodeParameterNotFound "ParameterNotFound" +// The parameter could not be found. Verify the name and try again. +// +// * ErrCodeParameterVersionNotFound "ParameterVersionNotFound" +// The specified parameter version was not found. Verify the parameter name +// and version, and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParameter +func (c *SSM) GetParameter(input *GetParameterInput) (*GetParameterOutput, error) { + req, out := c.GetParameterRequest(input) + return out, req.Send() +} + +// GetParameterWithContext is the same as GetParameter with the addition of +// the ability to pass a context and additional request options. +// +// See GetParameter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetParameterWithContext(ctx aws.Context, input *GetParameterInput, opts ...request.Option) (*GetParameterOutput, error) { + req, out := c.GetParameterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetParameterHistory = "GetParameterHistory" + +// GetParameterHistoryRequest generates a "aws/request.Request" representing the +// client's request for the GetParameterHistory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetParameterHistory for more information on using the GetParameterHistory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetParameterHistoryRequest method. +// req, resp := client.GetParameterHistoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParameterHistory +func (c *SSM) GetParameterHistoryRequest(input *GetParameterHistoryInput) (req *request.Request, output *GetParameterHistoryOutput) { + op := &request.Operation{ + Name: opGetParameterHistory, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetParameterHistoryInput{} + } + + output = &GetParameterHistoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetParameterHistory API operation for Amazon Simple Systems Manager (SSM). +// +// Query a list of all parameters used by the AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetParameterHistory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeParameterNotFound "ParameterNotFound" +// The parameter could not be found. Verify the name and try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInvalidKeyId "InvalidKeyId" +// The query key ID is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParameterHistory +func (c *SSM) GetParameterHistory(input *GetParameterHistoryInput) (*GetParameterHistoryOutput, error) { + req, out := c.GetParameterHistoryRequest(input) + return out, req.Send() +} + +// GetParameterHistoryWithContext is the same as GetParameterHistory with the addition of +// the ability to pass a context and additional request options. +// +// See GetParameterHistory for details on how to use this API operation. 
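+//
+// As an illustrative sketch only (not part of the generated reference
+// documentation), retrieving a parameter's change history under a
+// caller-controlled context might look like the following; the Name input
+// field is assumed here:
+//
+//    // Fetch the history for one parameter, stopping early if ctx is cancelled.
+//    ctx, cancel := context.WithCancel(context.Background())
+//    defer cancel()
+//    history, err := client.GetParameterHistoryWithContext(ctx, &ssm.GetParameterHistoryInput{
+//        Name: aws.String("/my-app/db-password"),
+//    })
+//    if err != nil {
+//        fmt.Println(err) // e.g. ErrCodeParameterNotFound
+//        return
+//    }
+//    fmt.Println(history)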
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetParameterHistoryWithContext(ctx aws.Context, input *GetParameterHistoryInput, opts ...request.Option) (*GetParameterHistoryOutput, error) { + req, out := c.GetParameterHistoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetParameterHistoryPages iterates over the pages of a GetParameterHistory operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetParameterHistory method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetParameterHistory operation. +// pageNum := 0 +// err := client.GetParameterHistoryPages(params, +// func(page *GetParameterHistoryOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) GetParameterHistoryPages(input *GetParameterHistoryInput, fn func(*GetParameterHistoryOutput, bool) bool) error { + return c.GetParameterHistoryPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetParameterHistoryPagesWithContext same as GetParameterHistoryPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetParameterHistoryPagesWithContext(ctx aws.Context, input *GetParameterHistoryInput, fn func(*GetParameterHistoryOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetParameterHistoryInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetParameterHistoryRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetParameterHistoryOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetParameters = "GetParameters" + +// GetParametersRequest generates a "aws/request.Request" representing the +// client's request for the GetParameters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetParameters for more information on using the GetParameters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetParametersRequest method. 
+// req, resp := client.GetParametersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParameters +func (c *SSM) GetParametersRequest(input *GetParametersInput) (req *request.Request, output *GetParametersOutput) { + op := &request.Operation{ + Name: opGetParameters, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetParametersInput{} + } + + output = &GetParametersOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetParameters API operation for Amazon Simple Systems Manager (SSM). +// +// Get details of a parameter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetParameters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidKeyId "InvalidKeyId" +// The query key ID is not valid. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParameters +func (c *SSM) GetParameters(input *GetParametersInput) (*GetParametersOutput, error) { + req, out := c.GetParametersRequest(input) + return out, req.Send() +} + +// GetParametersWithContext is the same as GetParameters with the addition of +// the ability to pass a context and additional request options. +// +// See GetParameters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetParametersWithContext(ctx aws.Context, input *GetParametersInput, opts ...request.Option) (*GetParametersOutput, error) { + req, out := c.GetParametersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetParametersByPath = "GetParametersByPath" + +// GetParametersByPathRequest generates a "aws/request.Request" representing the +// client's request for the GetParametersByPath operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetParametersByPath for more information on using the GetParametersByPath +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetParametersByPathRequest method. 
+// req, resp := client.GetParametersByPathRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParametersByPath +func (c *SSM) GetParametersByPathRequest(input *GetParametersByPathInput) (req *request.Request, output *GetParametersByPathOutput) { + op := &request.Operation{ + Name: opGetParametersByPath, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetParametersByPathInput{} + } + + output = &GetParametersByPathOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetParametersByPath API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieve parameters in a specific hierarchy. For more information, see Working +// with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html). +// +// Request results are returned on a best-effort basis. If you specify MaxResults +// in the request, the response includes information up to the limit specified. +// The number of items returned, however, can be between zero and the value +// of MaxResults. If the service reaches an internal limit while processing +// the results, it stops the operation and returns the matching values up to +// that point and a NextToken. You can specify the NextToken in a subsequent +// call to get the next set of results. +// +// This API action doesn't support filtering by tags. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetParametersByPath for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// * ErrCodeInvalidFilterOption "InvalidFilterOption" +// The specified filter option is not valid. Valid options are Equals and BeginsWith. +// For Path filter, valid options are Recursive and OneLevel. +// +// * ErrCodeInvalidFilterValue "InvalidFilterValue" +// The filter value is not valid. Verify the value and try again. +// +// * ErrCodeInvalidKeyId "InvalidKeyId" +// The query key ID is not valid. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetParametersByPath +func (c *SSM) GetParametersByPath(input *GetParametersByPathInput) (*GetParametersByPathOutput, error) { + req, out := c.GetParametersByPathRequest(input) + return out, req.Send() +} + +// GetParametersByPathWithContext is the same as GetParametersByPath with the addition of +// the ability to pass a context and additional request options. +// +// See GetParametersByPath for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) GetParametersByPathWithContext(ctx aws.Context, input *GetParametersByPathInput, opts ...request.Option) (*GetParametersByPathOutput, error) { + req, out := c.GetParametersByPathRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetParametersByPathPages iterates over the pages of a GetParametersByPath operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetParametersByPath method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetParametersByPath operation. +// pageNum := 0 +// err := client.GetParametersByPathPages(params, +// func(page *GetParametersByPathOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) GetParametersByPathPages(input *GetParametersByPathInput, fn func(*GetParametersByPathOutput, bool) bool) error { + return c.GetParametersByPathPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetParametersByPathPagesWithContext same as GetParametersByPathPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetParametersByPathPagesWithContext(ctx aws.Context, input *GetParametersByPathInput, fn func(*GetParametersByPathOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetParametersByPathInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetParametersByPathRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetParametersByPathOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetPatchBaseline = "GetPatchBaseline" + +// GetPatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the GetPatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPatchBaseline for more information on using the GetPatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPatchBaselineRequest method. 
+// req, resp := client.GetPatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetPatchBaseline +func (c *SSM) GetPatchBaselineRequest(input *GetPatchBaselineInput) (req *request.Request, output *GetPatchBaselineOutput) { + op := &request.Operation{ + Name: opGetPatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPatchBaselineInput{} + } + + output = &GetPatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves information about a patch baseline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetPatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetPatchBaseline +func (c *SSM) GetPatchBaseline(input *GetPatchBaselineInput) (*GetPatchBaselineOutput, error) { + req, out := c.GetPatchBaselineRequest(input) + return out, req.Send() +} + +// GetPatchBaselineWithContext is the same as GetPatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See GetPatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetPatchBaselineWithContext(ctx aws.Context, input *GetPatchBaselineInput, opts ...request.Option) (*GetPatchBaselineOutput, error) { + req, out := c.GetPatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetPatchBaselineForPatchGroup = "GetPatchBaselineForPatchGroup" + +// GetPatchBaselineForPatchGroupRequest generates a "aws/request.Request" representing the +// client's request for the GetPatchBaselineForPatchGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPatchBaselineForPatchGroup for more information on using the GetPatchBaselineForPatchGroup +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPatchBaselineForPatchGroupRequest method. +// req, resp := client.GetPatchBaselineForPatchGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetPatchBaselineForPatchGroup +func (c *SSM) GetPatchBaselineForPatchGroupRequest(input *GetPatchBaselineForPatchGroupInput) (req *request.Request, output *GetPatchBaselineForPatchGroupOutput) { + op := &request.Operation{ + Name: opGetPatchBaselineForPatchGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPatchBaselineForPatchGroupInput{} + } + + output = &GetPatchBaselineForPatchGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPatchBaselineForPatchGroup API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the patch baseline that should be used for the specified patch +// group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetPatchBaselineForPatchGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetPatchBaselineForPatchGroup +func (c *SSM) GetPatchBaselineForPatchGroup(input *GetPatchBaselineForPatchGroupInput) (*GetPatchBaselineForPatchGroupOutput, error) { + req, out := c.GetPatchBaselineForPatchGroupRequest(input) + return out, req.Send() +} + +// GetPatchBaselineForPatchGroupWithContext is the same as GetPatchBaselineForPatchGroup with the addition of +// the ability to pass a context and additional request options. +// +// See GetPatchBaselineForPatchGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetPatchBaselineForPatchGroupWithContext(ctx aws.Context, input *GetPatchBaselineForPatchGroupInput, opts ...request.Option) (*GetPatchBaselineForPatchGroupOutput, error) { + req, out := c.GetPatchBaselineForPatchGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListAssociationVersions = "ListAssociationVersions" + +// ListAssociationVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListAssociationVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAssociationVersions for more information on using the ListAssociationVersions +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAssociationVersionsRequest method. +// req, resp := client.ListAssociationVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListAssociationVersions +func (c *SSM) ListAssociationVersionsRequest(input *ListAssociationVersionsInput) (req *request.Request, output *ListAssociationVersionsOutput) { + op := &request.Operation{ + Name: opListAssociationVersions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListAssociationVersionsInput{} + } + + output = &ListAssociationVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAssociationVersions API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves all versions of an association for a specific association ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListAssociationVersions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListAssociationVersions +func (c *SSM) ListAssociationVersions(input *ListAssociationVersionsInput) (*ListAssociationVersionsOutput, error) { + req, out := c.ListAssociationVersionsRequest(input) + return out, req.Send() +} + +// ListAssociationVersionsWithContext is the same as ListAssociationVersions with the addition of +// the ability to pass a context and additional request options. +// +// See ListAssociationVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListAssociationVersionsWithContext(ctx aws.Context, input *ListAssociationVersionsInput, opts ...request.Option) (*ListAssociationVersionsOutput, error) { + req, out := c.ListAssociationVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListAssociations = "ListAssociations" + +// ListAssociationsRequest generates a "aws/request.Request" representing the +// client's request for the ListAssociations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAssociations for more information on using the ListAssociations +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAssociationsRequest method. +// req, resp := client.ListAssociationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListAssociations +func (c *SSM) ListAssociationsRequest(input *ListAssociationsInput) (req *request.Request, output *ListAssociationsOutput) { + op := &request.Operation{ + Name: opListAssociations, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListAssociationsInput{} + } + + output = &ListAssociationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAssociations API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the associations for the specified Systems Manager document or instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListAssociations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListAssociations +func (c *SSM) ListAssociations(input *ListAssociationsInput) (*ListAssociationsOutput, error) { + req, out := c.ListAssociationsRequest(input) + return out, req.Send() +} + +// ListAssociationsWithContext is the same as ListAssociations with the addition of +// the ability to pass a context and additional request options. +// +// See ListAssociations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListAssociationsWithContext(ctx aws.Context, input *ListAssociationsInput, opts ...request.Option) (*ListAssociationsOutput, error) { + req, out := c.ListAssociationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListAssociationsPages iterates over the pages of a ListAssociations operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListAssociations method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListAssociations operation. 
+// pageNum := 0 +// err := client.ListAssociationsPages(params, +// func(page *ListAssociationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) ListAssociationsPages(input *ListAssociationsInput, fn func(*ListAssociationsOutput, bool) bool) error { + return c.ListAssociationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListAssociationsPagesWithContext same as ListAssociationsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListAssociationsPagesWithContext(ctx aws.Context, input *ListAssociationsInput, fn func(*ListAssociationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListAssociationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListAssociationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListAssociationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListCommandInvocations = "ListCommandInvocations" + +// ListCommandInvocationsRequest generates a "aws/request.Request" representing the +// client's request for the ListCommandInvocations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListCommandInvocations for more information on using the ListCommandInvocations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListCommandInvocationsRequest method. +// req, resp := client.ListCommandInvocationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListCommandInvocations +func (c *SSM) ListCommandInvocationsRequest(input *ListCommandInvocationsInput) (req *request.Request, output *ListCommandInvocationsOutput) { + op := &request.Operation{ + Name: opListCommandInvocations, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListCommandInvocationsInput{} + } + + output = &ListCommandInvocationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListCommandInvocations API operation for Amazon Simple Systems Manager (SSM). +// +// An invocation is copy of a command sent to a specific instance. A command +// can apply to one or more instances. A command invocation applies to one instance. +// For example, if a user executes SendCommand against three instances, then +// a command invocation is created for each requested instance ID. 
ListCommandInvocations
+// provides status about command execution.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s
+// API operation ListCommandInvocations for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeInternalServerError "InternalServerError"
+// An error occurred on the server side.
+//
+// * ErrCodeInvalidCommandId "InvalidCommandId"
+//
+// * ErrCodeInvalidInstanceId "InvalidInstanceId"
+// The following problems can cause this exception:
+//
+// You do not have permission to access the instance.
+//
+// The SSM Agent is not running. On managed instances and Linux instances, verify
+// that the SSM Agent is running. On EC2 Windows instances, verify that the
+// EC2Config service is running.
+//
+// The SSM Agent or EC2Config service is not registered to the SSM endpoint.
+// Try reinstalling the SSM Agent or EC2Config service.
+//
+// The instance is not in a valid state. Valid states are: Running, Pending, Stopped,
+// Stopping. Invalid states are: Shutting-down and Terminated.
+//
+// * ErrCodeInvalidFilterKey "InvalidFilterKey"
+// The specified key is not valid.
+//
+// * ErrCodeInvalidNextToken "InvalidNextToken"
+// The specified token is not valid.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListCommandInvocations
+func (c *SSM) ListCommandInvocations(input *ListCommandInvocationsInput) (*ListCommandInvocationsOutput, error) {
+ req, out := c.ListCommandInvocationsRequest(input)
+ return out, req.Send()
+}
+
+// ListCommandInvocationsWithContext is the same as ListCommandInvocations with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ListCommandInvocations for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *SSM) ListCommandInvocationsWithContext(ctx aws.Context, input *ListCommandInvocationsInput, opts ...request.Option) (*ListCommandInvocationsOutput, error) {
+ req, out := c.ListCommandInvocationsRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+// ListCommandInvocationsPages iterates over the pages of a ListCommandInvocations operation,
+// calling the "fn" function with the response data for each page. To stop
+// iterating, return false from the fn function.
+//
+// See ListCommandInvocations method for more information on how to use this operation.
+//
+// Note: This operation can generate multiple requests to a service.
+//
+// // Example iterating over at most 3 pages of a ListCommandInvocations operation.
+// pageNum := 0 +// err := client.ListCommandInvocationsPages(params, +// func(page *ListCommandInvocationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) ListCommandInvocationsPages(input *ListCommandInvocationsInput, fn func(*ListCommandInvocationsOutput, bool) bool) error { + return c.ListCommandInvocationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListCommandInvocationsPagesWithContext same as ListCommandInvocationsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListCommandInvocationsPagesWithContext(ctx aws.Context, input *ListCommandInvocationsInput, fn func(*ListCommandInvocationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListCommandInvocationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListCommandInvocationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListCommandInvocationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListCommands = "ListCommands" + +// ListCommandsRequest generates a "aws/request.Request" representing the +// client's request for the ListCommands operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListCommands for more information on using the ListCommands +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListCommandsRequest method. +// req, resp := client.ListCommandsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListCommands +func (c *SSM) ListCommandsRequest(input *ListCommandsInput) (req *request.Request, output *ListCommandsOutput) { + op := &request.Operation{ + Name: opListCommands, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListCommandsInput{} + } + + output = &ListCommandsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListCommands API operation for Amazon Simple Systems Manager (SSM). +// +// Lists the commands requested by users of the AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListCommands for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidCommandId "InvalidCommandId" +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListCommands +func (c *SSM) ListCommands(input *ListCommandsInput) (*ListCommandsOutput, error) { + req, out := c.ListCommandsRequest(input) + return out, req.Send() +} + +// ListCommandsWithContext is the same as ListCommands with the addition of +// the ability to pass a context and additional request options. +// +// See ListCommands for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListCommandsWithContext(ctx aws.Context, input *ListCommandsInput, opts ...request.Option) (*ListCommandsOutput, error) { + req, out := c.ListCommandsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListCommandsPages iterates over the pages of a ListCommands operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListCommands method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListCommands operation. +// pageNum := 0 +// err := client.ListCommandsPages(params, +// func(page *ListCommandsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) ListCommandsPages(input *ListCommandsInput, fn func(*ListCommandsOutput, bool) bool) error { + return c.ListCommandsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListCommandsPagesWithContext same as ListCommandsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) ListCommandsPagesWithContext(ctx aws.Context, input *ListCommandsInput, fn func(*ListCommandsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListCommandsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListCommandsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListCommandsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListComplianceItems = "ListComplianceItems" + +// ListComplianceItemsRequest generates a "aws/request.Request" representing the +// client's request for the ListComplianceItems operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListComplianceItems for more information on using the ListComplianceItems +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListComplianceItemsRequest method. +// req, resp := client.ListComplianceItemsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListComplianceItems +func (c *SSM) ListComplianceItemsRequest(input *ListComplianceItemsInput) (req *request.Request, output *ListComplianceItemsOutput) { + op := &request.Operation{ + Name: opListComplianceItems, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListComplianceItemsInput{} + } + + output = &ListComplianceItemsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListComplianceItems API operation for Amazon Simple Systems Manager (SSM). +// +// For a specified resource ID, this API action returns a list of compliance +// statuses for different resource types. Currently, you can only specify one +// resource ID per call. List results depend on the criteria specified in the +// filter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListComplianceItems for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceType "InvalidResourceType" +// The resource type is not valid. For example, if you are attempting to tag +// an instance, the instance must be a registered, managed instance. +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. Verify the you entered the correct name and +// try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListComplianceItems +func (c *SSM) ListComplianceItems(input *ListComplianceItemsInput) (*ListComplianceItemsOutput, error) { + req, out := c.ListComplianceItemsRequest(input) + return out, req.Send() +} + +// ListComplianceItemsWithContext is the same as ListComplianceItems with the addition of +// the ability to pass a context and additional request options. +// +// See ListComplianceItems for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListComplianceItemsWithContext(ctx aws.Context, input *ListComplianceItemsInput, opts ...request.Option) (*ListComplianceItemsOutput, error) { + req, out := c.ListComplianceItemsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListComplianceSummaries = "ListComplianceSummaries" + +// ListComplianceSummariesRequest generates a "aws/request.Request" representing the +// client's request for the ListComplianceSummaries operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListComplianceSummaries for more information on using the ListComplianceSummaries +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListComplianceSummariesRequest method. +// req, resp := client.ListComplianceSummariesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListComplianceSummaries +func (c *SSM) ListComplianceSummariesRequest(input *ListComplianceSummariesInput) (req *request.Request, output *ListComplianceSummariesOutput) { + op := &request.Operation{ + Name: opListComplianceSummaries, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListComplianceSummariesInput{} + } + + output = &ListComplianceSummariesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListComplianceSummaries API operation for Amazon Simple Systems Manager (SSM). +// +// Returns a summary count of compliant and non-compliant resources for a compliance +// type. For example, this call can return State Manager associations, patches, +// or custom compliance types according to the filter criteria that you specify. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListComplianceSummaries for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. Verify the you entered the correct name and +// try again. 
+// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListComplianceSummaries +func (c *SSM) ListComplianceSummaries(input *ListComplianceSummariesInput) (*ListComplianceSummariesOutput, error) { + req, out := c.ListComplianceSummariesRequest(input) + return out, req.Send() +} + +// ListComplianceSummariesWithContext is the same as ListComplianceSummaries with the addition of +// the ability to pass a context and additional request options. +// +// See ListComplianceSummaries for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListComplianceSummariesWithContext(ctx aws.Context, input *ListComplianceSummariesInput, opts ...request.Option) (*ListComplianceSummariesOutput, error) { + req, out := c.ListComplianceSummariesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListDocumentVersions = "ListDocumentVersions" + +// ListDocumentVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListDocumentVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListDocumentVersions for more information on using the ListDocumentVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListDocumentVersionsRequest method. +// req, resp := client.ListDocumentVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListDocumentVersions +func (c *SSM) ListDocumentVersionsRequest(input *ListDocumentVersionsInput) (req *request.Request, output *ListDocumentVersionsOutput) { + op := &request.Operation{ + Name: opListDocumentVersions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListDocumentVersionsInput{} + } + + output = &ListDocumentVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListDocumentVersions API operation for Amazon Simple Systems Manager (SSM). +// +// List all versions for a document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListDocumentVersions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. 
+// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListDocumentVersions +func (c *SSM) ListDocumentVersions(input *ListDocumentVersionsInput) (*ListDocumentVersionsOutput, error) { + req, out := c.ListDocumentVersionsRequest(input) + return out, req.Send() +} + +// ListDocumentVersionsWithContext is the same as ListDocumentVersions with the addition of +// the ability to pass a context and additional request options. +// +// See ListDocumentVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListDocumentVersionsWithContext(ctx aws.Context, input *ListDocumentVersionsInput, opts ...request.Option) (*ListDocumentVersionsOutput, error) { + req, out := c.ListDocumentVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListDocuments = "ListDocuments" + +// ListDocumentsRequest generates a "aws/request.Request" representing the +// client's request for the ListDocuments operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListDocuments for more information on using the ListDocuments +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListDocumentsRequest method. +// req, resp := client.ListDocumentsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListDocuments +func (c *SSM) ListDocumentsRequest(input *ListDocumentsInput) (req *request.Request, output *ListDocumentsOutput) { + op := &request.Operation{ + Name: opListDocuments, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListDocumentsInput{} + } + + output = &ListDocumentsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListDocuments API operation for Amazon Simple Systems Manager (SSM). +// +// Describes one or more of your Systems Manager documents. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListDocuments for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListDocuments +func (c *SSM) ListDocuments(input *ListDocumentsInput) (*ListDocumentsOutput, error) { + req, out := c.ListDocumentsRequest(input) + return out, req.Send() +} + +// ListDocumentsWithContext is the same as ListDocuments with the addition of +// the ability to pass a context and additional request options. +// +// See ListDocuments for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListDocumentsWithContext(ctx aws.Context, input *ListDocumentsInput, opts ...request.Option) (*ListDocumentsOutput, error) { + req, out := c.ListDocumentsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListDocumentsPages iterates over the pages of a ListDocuments operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDocuments method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDocuments operation. +// pageNum := 0 +// err := client.ListDocumentsPages(params, +// func(page *ListDocumentsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SSM) ListDocumentsPages(input *ListDocumentsInput, fn func(*ListDocumentsOutput, bool) bool) error { + return c.ListDocumentsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListDocumentsPagesWithContext same as ListDocumentsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListDocumentsPagesWithContext(ctx aws.Context, input *ListDocumentsInput, fn func(*ListDocumentsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListDocumentsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListDocumentsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListDocumentsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListInventoryEntries = "ListInventoryEntries" + +// ListInventoryEntriesRequest generates a "aws/request.Request" representing the +// client's request for the ListInventoryEntries operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListInventoryEntries for more information on using the ListInventoryEntries +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
Such as custom headers, or retry logic.
+//
+//
+//    // Example sending a request using the ListInventoryEntriesRequest method.
+//    req, resp := client.ListInventoryEntriesRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListInventoryEntries
+func (c *SSM) ListInventoryEntriesRequest(input *ListInventoryEntriesInput) (req *request.Request, output *ListInventoryEntriesOutput) {
+	op := &request.Operation{
+		Name:       opListInventoryEntries,
+		HTTPMethod: "POST",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &ListInventoryEntriesInput{}
+	}
+
+	output = &ListInventoryEntriesOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// ListInventoryEntries API operation for Amazon Simple Systems Manager (SSM).
+//
+// A list of inventory items returned by the request.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s
+// API operation ListInventoryEntries for usage and error information.
+//
+// Returned Error Codes:
+//   * ErrCodeInternalServerError "InternalServerError"
+//   An error occurred on the server side.
+//
+//   * ErrCodeInvalidInstanceId "InvalidInstanceId"
+//   The following problems can cause this exception:
+//
+//   You do not have permission to access the instance.
+//
+//   The SSM Agent is not running. On managed instances and Linux instances, verify
+//   that the SSM Agent is running. On EC2 Windows instances, verify that the
+//   EC2Config service is running.
+//
+//   The SSM Agent or EC2Config service is not registered to the SSM endpoint.
+//   Try reinstalling the SSM Agent or EC2Config service.
+//
+//   The instance is not in a valid state. Valid states are: Running, Pending, Stopped,
+//   Stopping. Invalid states are: Shutting-down and Terminated.
+//
+//   * ErrCodeInvalidTypeNameException "InvalidTypeNameException"
+//   The parameter type name is not valid.
+//
+//   * ErrCodeInvalidFilter "InvalidFilter"
+//   The filter name is not valid. Verify that you entered the correct name and
+//   try again.
+//
+//   * ErrCodeInvalidNextToken "InvalidNextToken"
+//   The specified token is not valid.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListInventoryEntries
+func (c *SSM) ListInventoryEntries(input *ListInventoryEntriesInput) (*ListInventoryEntriesOutput, error) {
+	req, out := c.ListInventoryEntriesRequest(input)
+	return out, req.Send()
+}
+
+// ListInventoryEntriesWithContext is the same as ListInventoryEntries with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ListInventoryEntries for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *SSM) ListInventoryEntriesWithContext(ctx aws.Context, input *ListInventoryEntriesInput, opts ...request.Option) (*ListInventoryEntriesOutput, error) {
+	req, out := c.ListInventoryEntriesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +const opListResourceComplianceSummaries = "ListResourceComplianceSummaries" + +// ListResourceComplianceSummariesRequest generates a "aws/request.Request" representing the +// client's request for the ListResourceComplianceSummaries operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListResourceComplianceSummaries for more information on using the ListResourceComplianceSummaries +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListResourceComplianceSummariesRequest method. +// req, resp := client.ListResourceComplianceSummariesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListResourceComplianceSummaries +func (c *SSM) ListResourceComplianceSummariesRequest(input *ListResourceComplianceSummariesInput) (req *request.Request, output *ListResourceComplianceSummariesOutput) { + op := &request.Operation{ + Name: opListResourceComplianceSummaries, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListResourceComplianceSummariesInput{} + } + + output = &ListResourceComplianceSummariesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListResourceComplianceSummaries API operation for Amazon Simple Systems Manager (SSM). +// +// Returns a resource-level summary count. The summary includes information +// about compliant and non-compliant statuses and detailed compliance-item severity +// counts, according to the filter criteria you specify. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListResourceComplianceSummaries for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidFilter "InvalidFilter" +// The filter name is not valid. Verify the you entered the correct name and +// try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListResourceComplianceSummaries +func (c *SSM) ListResourceComplianceSummaries(input *ListResourceComplianceSummariesInput) (*ListResourceComplianceSummariesOutput, error) { + req, out := c.ListResourceComplianceSummariesRequest(input) + return out, req.Send() +} + +// ListResourceComplianceSummariesWithContext is the same as ListResourceComplianceSummaries with the addition of +// the ability to pass a context and additional request options. +// +// See ListResourceComplianceSummaries for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListResourceComplianceSummariesWithContext(ctx aws.Context, input *ListResourceComplianceSummariesInput, opts ...request.Option) (*ListResourceComplianceSummariesOutput, error) { + req, out := c.ListResourceComplianceSummariesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListResourceDataSync = "ListResourceDataSync" + +// ListResourceDataSyncRequest generates a "aws/request.Request" representing the +// client's request for the ListResourceDataSync operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListResourceDataSync for more information on using the ListResourceDataSync +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListResourceDataSyncRequest method. +// req, resp := client.ListResourceDataSyncRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListResourceDataSync +func (c *SSM) ListResourceDataSyncRequest(input *ListResourceDataSyncInput) (req *request.Request, output *ListResourceDataSyncOutput) { + op := &request.Operation{ + Name: opListResourceDataSync, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListResourceDataSyncInput{} + } + + output = &ListResourceDataSyncOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListResourceDataSync API operation for Amazon Simple Systems Manager (SSM). +// +// Lists your resource data sync configurations. Includes information about +// the last time a sync attempted to start, the last sync status, and the last +// time a sync successfully completed. +// +// The number of sync configurations might be too large to return using a single +// call to ListResourceDataSync. You can limit the number of sync configurations +// returned by using the MaxResults parameter. To determine whether there are +// more sync configurations to list, check the value of NextToken in the output. +// If there are more sync configurations to list, you can request them by specifying +// the NextToken returned in the call to the parameter of a subsequent call. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListResourceDataSync for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListResourceDataSync +func (c *SSM) ListResourceDataSync(input *ListResourceDataSyncInput) (*ListResourceDataSyncOutput, error) { + req, out := c.ListResourceDataSyncRequest(input) + return out, req.Send() +} + +// ListResourceDataSyncWithContext is the same as ListResourceDataSync with the addition of +// the ability to pass a context and additional request options. +// +// See ListResourceDataSync for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListResourceDataSyncWithContext(ctx aws.Context, input *ListResourceDataSyncInput, opts ...request.Option) (*ListResourceDataSyncOutput, error) { + req, out := c.ListResourceDataSyncRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListTagsForResource = "ListTagsForResource" + +// ListTagsForResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagsForResource for more information on using the ListTagsForResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsForResourceRequest method. +// req, resp := client.ListTagsForResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListTagsForResource +func (c *SSM) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { + op := &request.Operation{ + Name: opListTagsForResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsForResourceInput{} + } + + output = &ListTagsForResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagsForResource API operation for Amazon Simple Systems Manager (SSM). +// +// Returns a list of the tags assigned to the specified resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ListTagsForResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceType "InvalidResourceType" +// The resource type is not valid. For example, if you are attempting to tag +// an instance, the instance must be a registered, managed instance. +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. 
+// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ListTagsForResource +func (c *SSM) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + return out, req.Send() +} + +// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsForResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyDocumentPermission = "ModifyDocumentPermission" + +// ModifyDocumentPermissionRequest generates a "aws/request.Request" representing the +// client's request for the ModifyDocumentPermission operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyDocumentPermission for more information on using the ModifyDocumentPermission +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyDocumentPermissionRequest method. +// req, resp := client.ModifyDocumentPermissionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ModifyDocumentPermission +func (c *SSM) ModifyDocumentPermissionRequest(input *ModifyDocumentPermissionInput) (req *request.Request, output *ModifyDocumentPermissionOutput) { + op := &request.Operation{ + Name: opModifyDocumentPermission, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyDocumentPermissionInput{} + } + + output = &ModifyDocumentPermissionOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyDocumentPermission API operation for Amazon Simple Systems Manager (SSM). +// +// Shares a Systems Manager document publicly or privately. If you share a document +// privately, you must specify the AWS user account IDs for those people who +// can use the document. If you share a document publicly, you must specify +// All as the account ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ModifyDocumentPermission for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidPermissionType "InvalidPermissionType" +// The permission type is not supported. Share is the only supported permission +// type. +// +// * ErrCodeDocumentPermissionLimit "DocumentPermissionLimit" +// The document cannot be shared with more AWS user accounts. You can share +// a document with a maximum of 20 accounts. You can publicly share up to five +// documents. If you need to increase this limit, contact AWS Support. +// +// * ErrCodeDocumentLimitExceeded "DocumentLimitExceeded" +// You can have at most 200 active Systems Manager documents. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ModifyDocumentPermission +func (c *SSM) ModifyDocumentPermission(input *ModifyDocumentPermissionInput) (*ModifyDocumentPermissionOutput, error) { + req, out := c.ModifyDocumentPermissionRequest(input) + return out, req.Send() +} + +// ModifyDocumentPermissionWithContext is the same as ModifyDocumentPermission with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyDocumentPermission for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ModifyDocumentPermissionWithContext(ctx aws.Context, input *ModifyDocumentPermissionInput, opts ...request.Option) (*ModifyDocumentPermissionOutput, error) { + req, out := c.ModifyDocumentPermissionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutComplianceItems = "PutComplianceItems" + +// PutComplianceItemsRequest generates a "aws/request.Request" representing the +// client's request for the PutComplianceItems operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutComplianceItems for more information on using the PutComplianceItems +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutComplianceItemsRequest method. +// req, resp := client.PutComplianceItemsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/PutComplianceItems +func (c *SSM) PutComplianceItemsRequest(input *PutComplianceItemsInput) (req *request.Request, output *PutComplianceItemsOutput) { + op := &request.Operation{ + Name: opPutComplianceItems, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutComplianceItemsInput{} + } + + output = &PutComplianceItemsOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutComplianceItems API operation for Amazon Simple Systems Manager (SSM). 
+// +// Registers a compliance type and other compliance details on a designated +// resource. This action lets you register custom compliance details with a +// resource. This call overwrites existing compliance information on the resource, +// so you must provide a full list of compliance items each time that you send +// the request. +// +// ComplianceType can be one of the following: +// +// * ExecutionId: The execution ID when the patch, association, or custom +// compliance item was applied. +// +// * ExecutionType: Specify patch, association, or Custom:string. +// +// * ExecutionTime. The time the patch, association, or custom compliance +// item was applied to the instance. +// +// * Id: The patch, association, or custom compliance ID. +// +// * Title: A title. +// +// * Status: The status of the compliance item. For example, approved for +// patches, or Failed for associations. +// +// * Severity: A patch severity. For example, critical. +// +// * DocumentName: A SSM document name. For example, AWS-RunPatchBaseline. +// +// * DocumentVersion: An SSM document version number. For example, 4. +// +// * Classification: A patch classification. For example, security updates. +// +// * PatchBaselineId: A patch baseline ID. +// +// * PatchSeverity: A patch severity. For example, Critical. +// +// * PatchState: A patch state. For example, InstancesWithFailedPatches. +// +// * PatchGroup: The name of a patch group. +// +// * InstalledTime: The time the association, patch, or custom compliance +// item was applied to the resource. Specify the time by using the following +// format: yyyy-MM-dd'T'HH:mm:ss'Z' +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation PutComplianceItems for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidItemContentException "InvalidItemContentException" +// One or more content items is not valid. +// +// * ErrCodeTotalSizeLimitExceededException "TotalSizeLimitExceededException" +// The size of inventory data has exceeded the total size limit for the resource. +// +// * ErrCodeItemSizeLimitExceededException "ItemSizeLimitExceededException" +// The inventory item size has exceeded the size limit. +// +// * ErrCodeComplianceTypeCountLimitExceededException "ComplianceTypeCountLimitExceededException" +// You specified too many custom compliance types. You can specify a maximum +// of 10 different types. +// +// * ErrCodeInvalidResourceType "InvalidResourceType" +// The resource type is not valid. For example, if you are attempting to tag +// an instance, the instance must be a registered, managed instance. +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/PutComplianceItems +func (c *SSM) PutComplianceItems(input *PutComplianceItemsInput) (*PutComplianceItemsOutput, error) { + req, out := c.PutComplianceItemsRequest(input) + return out, req.Send() +} + +// PutComplianceItemsWithContext is the same as PutComplianceItems with the addition of +// the ability to pass a context and additional request options. 
+//
+// See PutComplianceItems for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *SSM) PutComplianceItemsWithContext(ctx aws.Context, input *PutComplianceItemsInput, opts ...request.Option) (*PutComplianceItemsOutput, error) {
+	req, out := c.PutComplianceItemsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opPutInventory = "PutInventory"
+
+// PutInventoryRequest generates a "aws/request.Request" representing the
+// client's request for the PutInventory operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See PutInventory for more information on using the PutInventory
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+//    // Example sending a request using the PutInventoryRequest method.
+//    req, resp := client.PutInventoryRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/PutInventory
+func (c *SSM) PutInventoryRequest(input *PutInventoryInput) (req *request.Request, output *PutInventoryOutput) {
+	op := &request.Operation{
+		Name:       opPutInventory,
+		HTTPMethod: "POST",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &PutInventoryInput{}
+	}
+
+	output = &PutInventoryOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// PutInventory API operation for Amazon Simple Systems Manager (SSM).
+//
+// Bulk update custom inventory items on one or more instances. The request adds
+// an inventory item, if it doesn't already exist, or updates an inventory item,
+// if it does exist.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s
+// API operation PutInventory for usage and error information.
+//
+// Returned Error Codes:
+//   * ErrCodeInternalServerError "InternalServerError"
+//   An error occurred on the server side.
+//
+//   * ErrCodeInvalidInstanceId "InvalidInstanceId"
+//   The following problems can cause this exception:
+//
+//   You do not have permission to access the instance.
+//
+//   The SSM Agent is not running. On managed instances and Linux instances, verify
+//   that the SSM Agent is running. On EC2 Windows instances, verify that the
+//   EC2Config service is running.
+//
+//   The SSM Agent or EC2Config service is not registered to the SSM endpoint.
+//   Try reinstalling the SSM Agent or EC2Config service.
+//
+//   The instance is not in a valid state. Valid states are: Running, Pending, Stopped,
+//   Stopping. Invalid states are: Shutting-down and Terminated.
+//
+//   * ErrCodeInvalidTypeNameException "InvalidTypeNameException"
+//   The parameter type name is not valid.
+// +// * ErrCodeInvalidItemContentException "InvalidItemContentException" +// One or more content items is not valid. +// +// * ErrCodeTotalSizeLimitExceededException "TotalSizeLimitExceededException" +// The size of inventory data has exceeded the total size limit for the resource. +// +// * ErrCodeItemSizeLimitExceededException "ItemSizeLimitExceededException" +// The inventory item size has exceeded the size limit. +// +// * ErrCodeItemContentMismatchException "ItemContentMismatchException" +// The inventory item has invalid content. +// +// * ErrCodeCustomSchemaCountLimitExceededException "CustomSchemaCountLimitExceededException" +// You have exceeded the limit for custom schemas. Delete one or more custom +// schemas and try again. +// +// * ErrCodeUnsupportedInventorySchemaVersionException "UnsupportedInventorySchemaVersionException" +// Inventory item type schema version has to match supported versions in the +// service. Check output of GetInventorySchema to see the available schema version +// for each type. +// +// * ErrCodeUnsupportedInventoryItemContextException "UnsupportedInventoryItemContextException" +// The Context attribute that you specified for the InventoryItem is not allowed +// for this inventory type. You can only use the Context attribute with inventory +// types like AWS:ComplianceItem. +// +// * ErrCodeInvalidInventoryItemContextException "InvalidInventoryItemContextException" +// You specified invalid keys or values in the Context attribute for InventoryItem. +// Verify the keys and values, and try again. +// +// * ErrCodeSubTypeCountLimitExceededException "SubTypeCountLimitExceededException" +// The sub-type count exceeded the limit for the inventory type. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/PutInventory +func (c *SSM) PutInventory(input *PutInventoryInput) (*PutInventoryOutput, error) { + req, out := c.PutInventoryRequest(input) + return out, req.Send() +} + +// PutInventoryWithContext is the same as PutInventory with the addition of +// the ability to pass a context and additional request options. +// +// See PutInventory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) PutInventoryWithContext(ctx aws.Context, input *PutInventoryInput, opts ...request.Option) (*PutInventoryOutput, error) { + req, out := c.PutInventoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutParameter = "PutParameter" + +// PutParameterRequest generates a "aws/request.Request" representing the +// client's request for the PutParameter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutParameter for more information on using the PutParameter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutParameterRequest method. 
+// req, resp := client.PutParameterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/PutParameter +func (c *SSM) PutParameterRequest(input *PutParameterInput) (req *request.Request, output *PutParameterOutput) { + op := &request.Operation{ + Name: opPutParameter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutParameterInput{} + } + + output = &PutParameterOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutParameter API operation for Amazon Simple Systems Manager (SSM). +// +// Add a parameter to the system. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation PutParameter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidKeyId "InvalidKeyId" +// The query key ID is not valid. +// +// * ErrCodeParameterLimitExceeded "ParameterLimitExceeded" +// You have exceeded the number of parameters for this AWS account. Delete one +// or more parameters and try again. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// * ErrCodeParameterAlreadyExists "ParameterAlreadyExists" +// The parameter already exists. You can't create duplicate parameters. +// +// * ErrCodeHierarchyLevelLimitExceededException "HierarchyLevelLimitExceededException" +// A hierarchy can have a maximum of 15 levels. For more information, see Working +// with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html). +// +// * ErrCodeHierarchyTypeMismatchException "HierarchyTypeMismatchException" +// Parameter Store does not support changing a parameter type in a hierarchy. +// For example, you can't change a parameter from a String type to a SecureString +// type. You must create a new, unique parameter. +// +// * ErrCodeInvalidAllowedPatternException "InvalidAllowedPatternException" +// The request does not meet the regular expression requirement. +// +// * ErrCodeParameterMaxVersionLimitExceeded "ParameterMaxVersionLimitExceeded" +// The parameter exceeded the maximum number of allowed versions. +// +// * ErrCodeParameterPatternMismatchException "ParameterPatternMismatchException" +// The parameter name is not valid. +// +// * ErrCodeUnsupportedParameterType "UnsupportedParameterType" +// The parameter type is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/PutParameter +func (c *SSM) PutParameter(input *PutParameterInput) (*PutParameterOutput, error) { + req, out := c.PutParameterRequest(input) + return out, req.Send() +} + +// PutParameterWithContext is the same as PutParameter with the addition of +// the ability to pass a context and additional request options. +// +// See PutParameter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) PutParameterWithContext(ctx aws.Context, input *PutParameterInput, opts ...request.Option) (*PutParameterOutput, error) { + req, out := c.PutParameterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRegisterDefaultPatchBaseline = "RegisterDefaultPatchBaseline" + +// RegisterDefaultPatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the RegisterDefaultPatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RegisterDefaultPatchBaseline for more information on using the RegisterDefaultPatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RegisterDefaultPatchBaselineRequest method. +// req, resp := client.RegisterDefaultPatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterDefaultPatchBaseline +func (c *SSM) RegisterDefaultPatchBaselineRequest(input *RegisterDefaultPatchBaselineInput) (req *request.Request, output *RegisterDefaultPatchBaselineOutput) { + op := &request.Operation{ + Name: opRegisterDefaultPatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RegisterDefaultPatchBaselineInput{} + } + + output = &RegisterDefaultPatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// RegisterDefaultPatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Defines the default patch baseline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation RegisterDefaultPatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterDefaultPatchBaseline +func (c *SSM) RegisterDefaultPatchBaseline(input *RegisterDefaultPatchBaselineInput) (*RegisterDefaultPatchBaselineOutput, error) { + req, out := c.RegisterDefaultPatchBaselineRequest(input) + return out, req.Send() +} + +// RegisterDefaultPatchBaselineWithContext is the same as RegisterDefaultPatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See RegisterDefaultPatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) RegisterDefaultPatchBaselineWithContext(ctx aws.Context, input *RegisterDefaultPatchBaselineInput, opts ...request.Option) (*RegisterDefaultPatchBaselineOutput, error) { + req, out := c.RegisterDefaultPatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRegisterPatchBaselineForPatchGroup = "RegisterPatchBaselineForPatchGroup" + +// RegisterPatchBaselineForPatchGroupRequest generates a "aws/request.Request" representing the +// client's request for the RegisterPatchBaselineForPatchGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RegisterPatchBaselineForPatchGroup for more information on using the RegisterPatchBaselineForPatchGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RegisterPatchBaselineForPatchGroupRequest method. +// req, resp := client.RegisterPatchBaselineForPatchGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterPatchBaselineForPatchGroup +func (c *SSM) RegisterPatchBaselineForPatchGroupRequest(input *RegisterPatchBaselineForPatchGroupInput) (req *request.Request, output *RegisterPatchBaselineForPatchGroupOutput) { + op := &request.Operation{ + Name: opRegisterPatchBaselineForPatchGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RegisterPatchBaselineForPatchGroupInput{} + } + + output = &RegisterPatchBaselineForPatchGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// RegisterPatchBaselineForPatchGroup API operation for Amazon Simple Systems Manager (SSM). +// +// Registers a patch baseline for a patch group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation RegisterPatchBaselineForPatchGroup for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeAlreadyExistsException "AlreadyExistsException" +// Error returned if an attempt is made to register a patch group with a patch +// baseline that is already registered with a different patch baseline. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Error returned when the caller has exceeded the default resource limits. +// For example, too many Maintenance Windows or Patch baselines have been created. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterPatchBaselineForPatchGroup +func (c *SSM) RegisterPatchBaselineForPatchGroup(input *RegisterPatchBaselineForPatchGroupInput) (*RegisterPatchBaselineForPatchGroupOutput, error) { + req, out := c.RegisterPatchBaselineForPatchGroupRequest(input) + return out, req.Send() +} + +// RegisterPatchBaselineForPatchGroupWithContext is the same as RegisterPatchBaselineForPatchGroup with the addition of +// the ability to pass a context and additional request options. +// +// See RegisterPatchBaselineForPatchGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) RegisterPatchBaselineForPatchGroupWithContext(ctx aws.Context, input *RegisterPatchBaselineForPatchGroupInput, opts ...request.Option) (*RegisterPatchBaselineForPatchGroupOutput, error) { + req, out := c.RegisterPatchBaselineForPatchGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRegisterTargetWithMaintenanceWindow = "RegisterTargetWithMaintenanceWindow" + +// RegisterTargetWithMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the RegisterTargetWithMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RegisterTargetWithMaintenanceWindow for more information on using the RegisterTargetWithMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RegisterTargetWithMaintenanceWindowRequest method. 
+// req, resp := client.RegisterTargetWithMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterTargetWithMaintenanceWindow +func (c *SSM) RegisterTargetWithMaintenanceWindowRequest(input *RegisterTargetWithMaintenanceWindowInput) (req *request.Request, output *RegisterTargetWithMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opRegisterTargetWithMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RegisterTargetWithMaintenanceWindowInput{} + } + + output = &RegisterTargetWithMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// RegisterTargetWithMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Registers a target with a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation RegisterTargetWithMaintenanceWindow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" +// Error returned when an idempotent operation is retried and the parameters +// don't match the original call to the API with the same idempotency token. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Error returned when the caller has exceeded the default resource limits. +// For example, too many Maintenance Windows or Patch baselines have been created. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterTargetWithMaintenanceWindow +func (c *SSM) RegisterTargetWithMaintenanceWindow(input *RegisterTargetWithMaintenanceWindowInput) (*RegisterTargetWithMaintenanceWindowOutput, error) { + req, out := c.RegisterTargetWithMaintenanceWindowRequest(input) + return out, req.Send() +} + +// RegisterTargetWithMaintenanceWindowWithContext is the same as RegisterTargetWithMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See RegisterTargetWithMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) RegisterTargetWithMaintenanceWindowWithContext(ctx aws.Context, input *RegisterTargetWithMaintenanceWindowInput, opts ...request.Option) (*RegisterTargetWithMaintenanceWindowOutput, error) { + req, out := c.RegisterTargetWithMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRegisterTaskWithMaintenanceWindow = "RegisterTaskWithMaintenanceWindow" + +// RegisterTaskWithMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the RegisterTaskWithMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RegisterTaskWithMaintenanceWindow for more information on using the RegisterTaskWithMaintenanceWindow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RegisterTaskWithMaintenanceWindowRequest method. +// req, resp := client.RegisterTaskWithMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterTaskWithMaintenanceWindow +func (c *SSM) RegisterTaskWithMaintenanceWindowRequest(input *RegisterTaskWithMaintenanceWindowInput) (req *request.Request, output *RegisterTaskWithMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opRegisterTaskWithMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RegisterTaskWithMaintenanceWindowInput{} + } + + output = &RegisterTaskWithMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// RegisterTaskWithMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Adds a new task to a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation RegisterTaskWithMaintenanceWindow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" +// Error returned when an idempotent operation is retried and the parameters +// don't match the original call to the API with the same idempotency token. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Error returned when the caller has exceeded the default resource limits. +// For example, too many Maintenance Windows or Patch baselines have been created. 
+// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeFeatureNotAvailableException "FeatureNotAvailableException" +// You attempted to register a LAMBDA or STEP_FUNCTION task in a region where +// the corresponding service is not available. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RegisterTaskWithMaintenanceWindow +func (c *SSM) RegisterTaskWithMaintenanceWindow(input *RegisterTaskWithMaintenanceWindowInput) (*RegisterTaskWithMaintenanceWindowOutput, error) { + req, out := c.RegisterTaskWithMaintenanceWindowRequest(input) + return out, req.Send() +} + +// RegisterTaskWithMaintenanceWindowWithContext is the same as RegisterTaskWithMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See RegisterTaskWithMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) RegisterTaskWithMaintenanceWindowWithContext(ctx aws.Context, input *RegisterTaskWithMaintenanceWindowInput, opts ...request.Option) (*RegisterTaskWithMaintenanceWindowOutput, error) { + req, out := c.RegisterTaskWithMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveTagsFromResource = "RemoveTagsFromResource" + +// RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the +// client's request for the RemoveTagsFromResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveTagsFromResource for more information on using the RemoveTagsFromResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveTagsFromResourceRequest method. +// req, resp := client.RemoveTagsFromResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RemoveTagsFromResource +func (c *SSM) RemoveTagsFromResourceRequest(input *RemoveTagsFromResourceInput) (req *request.Request, output *RemoveTagsFromResourceOutput) { + op := &request.Operation{ + Name: opRemoveTagsFromResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RemoveTagsFromResourceInput{} + } + + output = &RemoveTagsFromResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// RemoveTagsFromResource API operation for Amazon Simple Systems Manager (SSM). +// +// Removes all tags from the specified resource. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation RemoveTagsFromResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidResourceType "InvalidResourceType" +// The resource type is not valid. For example, if you are attempting to tag +// an instance, the instance must be a registered, managed instance. +// +// * ErrCodeInvalidResourceId "InvalidResourceId" +// The resource ID is not valid. Verify that you entered the correct ID and +// try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RemoveTagsFromResource +func (c *SSM) RemoveTagsFromResource(input *RemoveTagsFromResourceInput) (*RemoveTagsFromResourceOutput, error) { + req, out := c.RemoveTagsFromResourceRequest(input) + return out, req.Send() +} + +// RemoveTagsFromResourceWithContext is the same as RemoveTagsFromResource with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveTagsFromResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) RemoveTagsFromResourceWithContext(ctx aws.Context, input *RemoveTagsFromResourceInput, opts ...request.Option) (*RemoveTagsFromResourceOutput, error) { + req, out := c.RemoveTagsFromResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSendAutomationSignal = "SendAutomationSignal" + +// SendAutomationSignalRequest generates a "aws/request.Request" representing the +// client's request for the SendAutomationSignal operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SendAutomationSignal for more information on using the SendAutomationSignal +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SendAutomationSignalRequest method. 
+// req, resp := client.SendAutomationSignalRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/SendAutomationSignal +func (c *SSM) SendAutomationSignalRequest(input *SendAutomationSignalInput) (req *request.Request, output *SendAutomationSignalOutput) { + op := &request.Operation{ + Name: opSendAutomationSignal, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SendAutomationSignalInput{} + } + + output = &SendAutomationSignalOutput{} + req = c.newRequest(op, input, output) + return +} + +// SendAutomationSignal API operation for Amazon Simple Systems Manager (SSM). +// +// Sends a signal to an Automation execution to change the current behavior +// or status of the execution. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation SendAutomationSignal for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAutomationExecutionNotFoundException "AutomationExecutionNotFoundException" +// There is no automation execution information for the requested automation +// execution ID. +// +// * ErrCodeAutomationStepNotFoundException "AutomationStepNotFoundException" +// The specified step name and execution ID don't exist. Verify the information +// and try again. +// +// * ErrCodeInvalidAutomationSignalException "InvalidAutomationSignalException" +// The signal is not valid for the current Automation execution. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/SendAutomationSignal +func (c *SSM) SendAutomationSignal(input *SendAutomationSignalInput) (*SendAutomationSignalOutput, error) { + req, out := c.SendAutomationSignalRequest(input) + return out, req.Send() +} + +// SendAutomationSignalWithContext is the same as SendAutomationSignal with the addition of +// the ability to pass a context and additional request options. +// +// See SendAutomationSignal for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) SendAutomationSignalWithContext(ctx aws.Context, input *SendAutomationSignalInput, opts ...request.Option) (*SendAutomationSignalOutput, error) { + req, out := c.SendAutomationSignalRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSendCommand = "SendCommand" + +// SendCommandRequest generates a "aws/request.Request" representing the +// client's request for the SendCommand operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SendCommand for more information on using the SendCommand +// API call, and error handling. 
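+//
+// As a minimal sketch of that error handling (the document name, instance ID,
+// and command below are placeholders), a failed call can be inspected with a
+// type assertion on awserr.Error:
+//
+//    _, err := client.SendCommand(&ssm.SendCommandInput{
+//        DocumentName: aws.String("AWS-RunShellScript"),
+//        InstanceIds:  []*string{aws.String("i-0471e04240EXAMPLE")},
+//        Parameters:   map[string][]*string{"commands": {aws.String("uptime")}},
+//    })
+//    if aerr, ok := err.(awserr.Error); ok {
+//        fmt.Println(aerr.Code(), aerr.Message())
+//    }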
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SendCommandRequest method. +// req, resp := client.SendCommandRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/SendCommand +func (c *SSM) SendCommandRequest(input *SendCommandInput) (req *request.Request, output *SendCommandOutput) { + op := &request.Operation{ + Name: opSendCommand, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SendCommandInput{} + } + + output = &SendCommandOutput{} + req = c.newRequest(op, input, output) + return +} + +// SendCommand API operation for Amazon Simple Systems Manager (SSM). +// +// Executes commands on one or more managed instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation SendCommand for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDuplicateInstanceId "DuplicateInstanceId" +// You cannot specify an instance ID in more than one association. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in a valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// * ErrCodeInvalidOutputFolder "InvalidOutputFolder" +// The S3 bucket does not exist. +// +// * ErrCodeInvalidParameters "InvalidParameters" +// You must specify values for all required parameters in the Systems Manager +// document. You can only supply values to parameters defined in the Systems +// Manager document. +// +// * ErrCodeUnsupportedPlatformType "UnsupportedPlatformType" +// The document does not support the platform type of the given instance ID(s). +// For example, you sent a document for a Windows instance to a Linux instance. +// +// * ErrCodeMaxDocumentSizeExceeded "MaxDocumentSizeExceeded" +// The size limit of a document is 64 KB. +// +// * ErrCodeInvalidRole "InvalidRole" +// The role name can't contain invalid characters. Also verify that you specified +// an IAM role for notifications that includes the required trust policy. For +// information about configuring the IAM role for Run Command notifications, +// see Configuring Amazon SNS Notifications for Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/rc-sns-notifications.html) +// in the AWS Systems Manager User Guide.
+// +// * ErrCodeInvalidNotificationConfig "InvalidNotificationConfig" +// One or more configuration items is not valid. Verify that a valid Amazon +// Resource Name (ARN) was provided for an Amazon SNS topic. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/SendCommand +func (c *SSM) SendCommand(input *SendCommandInput) (*SendCommandOutput, error) { + req, out := c.SendCommandRequest(input) + return out, req.Send() +} + +// SendCommandWithContext is the same as SendCommand with the addition of +// the ability to pass a context and additional request options. +// +// See SendCommand for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) SendCommandWithContext(ctx aws.Context, input *SendCommandInput, opts ...request.Option) (*SendCommandOutput, error) { + req, out := c.SendCommandRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartAutomationExecution = "StartAutomationExecution" + +// StartAutomationExecutionRequest generates a "aws/request.Request" representing the +// client's request for the StartAutomationExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartAutomationExecution for more information on using the StartAutomationExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartAutomationExecutionRequest method. +// req, resp := client.StartAutomationExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartAutomationExecution +func (c *SSM) StartAutomationExecutionRequest(input *StartAutomationExecutionInput) (req *request.Request, output *StartAutomationExecutionOutput) { + op := &request.Operation{ + Name: opStartAutomationExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartAutomationExecutionInput{} + } + + output = &StartAutomationExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartAutomationExecution API operation for Amazon Simple Systems Manager (SSM). +// +// Initiates execution of an Automation document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation StartAutomationExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAutomationDefinitionNotFoundException "AutomationDefinitionNotFoundException" +// An Automation document with the specified name could not be found. 
+// +// * ErrCodeInvalidAutomationExecutionParametersException "InvalidAutomationExecutionParametersException" +// The supplied parameters for invoking the specified Automation document are +// incorrect. For example, they may not match the set of parameters permitted +// for the specified Automation document. +// +// * ErrCodeAutomationExecutionLimitExceededException "AutomationExecutionLimitExceededException" +// The number of simultaneously running Automation executions exceeded the allowable +// limit. +// +// * ErrCodeAutomationDefinitionVersionNotFoundException "AutomationDefinitionVersionNotFoundException" +// An Automation document with the specified name and version could not be found. +// +// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" +// Error returned when an idempotent operation is retried and the parameters +// don't match the original call to the API with the same idempotency token. +// +// * ErrCodeInvalidTarget "InvalidTarget" +// The target is not valid or does not exist. It might not be configured for +// EC2 Systems Manager or you might not have permission to perform the operation. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartAutomationExecution +func (c *SSM) StartAutomationExecution(input *StartAutomationExecutionInput) (*StartAutomationExecutionOutput, error) { + req, out := c.StartAutomationExecutionRequest(input) + return out, req.Send() +} + +// StartAutomationExecutionWithContext is the same as StartAutomationExecution with the addition of +// the ability to pass a context and additional request options. +// +// See StartAutomationExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) StartAutomationExecutionWithContext(ctx aws.Context, input *StartAutomationExecutionInput, opts ...request.Option) (*StartAutomationExecutionOutput, error) { + req, out := c.StartAutomationExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopAutomationExecution = "StopAutomationExecution" + +// StopAutomationExecutionRequest generates a "aws/request.Request" representing the +// client's request for the StopAutomationExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopAutomationExecution for more information on using the StopAutomationExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopAutomationExecutionRequest method. 
+// req, resp := client.StopAutomationExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StopAutomationExecution +func (c *SSM) StopAutomationExecutionRequest(input *StopAutomationExecutionInput) (req *request.Request, output *StopAutomationExecutionOutput) { + op := &request.Operation{ + Name: opStopAutomationExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopAutomationExecutionInput{} + } + + output = &StopAutomationExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopAutomationExecution API operation for Amazon Simple Systems Manager (SSM). +// +// Stop an Automation that is currently executing. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation StopAutomationExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAutomationExecutionNotFoundException "AutomationExecutionNotFoundException" +// There is no automation execution information for the requested automation +// execution ID. +// +// * ErrCodeInvalidAutomationStatusUpdateException "InvalidAutomationStatusUpdateException" +// The specified update status operation is not valid. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StopAutomationExecution +func (c *SSM) StopAutomationExecution(input *StopAutomationExecutionInput) (*StopAutomationExecutionOutput, error) { + req, out := c.StopAutomationExecutionRequest(input) + return out, req.Send() +} + +// StopAutomationExecutionWithContext is the same as StopAutomationExecution with the addition of +// the ability to pass a context and additional request options. +// +// See StopAutomationExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) StopAutomationExecutionWithContext(ctx aws.Context, input *StopAutomationExecutionInput, opts ...request.Option) (*StopAutomationExecutionOutput, error) { + req, out := c.StopAutomationExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAssociation = "UpdateAssociation" + +// UpdateAssociationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAssociation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAssociation for more information on using the UpdateAssociation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the UpdateAssociationRequest method. +// req, resp := client.UpdateAssociationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateAssociation +func (c *SSM) UpdateAssociationRequest(input *UpdateAssociationInput) (req *request.Request, output *UpdateAssociationOutput) { + op := &request.Operation{ + Name: opUpdateAssociation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateAssociationInput{} + } + + output = &UpdateAssociationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateAssociation API operation for Amazon Simple Systems Manager (SSM). +// +// Updates an association. You can update the association name and version, +// the document version, schedule, parameters, and Amazon S3 output. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateAssociation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidSchedule "InvalidSchedule" +// The schedule is invalid. Verify your cron or rate expression and try again. +// +// * ErrCodeInvalidParameters "InvalidParameters" +// You must specify values for all required parameters in the Systems Manager +// document. You can only supply values to parameters defined in the Systems +// Manager document. +// +// * ErrCodeInvalidOutputLocation "InvalidOutputLocation" +// The output location is not valid or does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// * ErrCodeInvalidUpdate "InvalidUpdate" +// The update is not valid. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidTarget "InvalidTarget" +// The target is not valid or does not exist. It might not be configured for +// EC2 Systems Manager or you might not have permission to perform the operation. +// +// * ErrCodeInvalidAssociationVersion "InvalidAssociationVersion" +// The version you specified is not valid. Use ListAssociationVersions to view +// all versions of an association according to the association ID. Or, use the +// $LATEST parameter to view the latest version of the association. +// +// * ErrCodeAssociationVersionLimitExceeded "AssociationVersionLimitExceeded" +// You have reached the maximum number of versions allowed for an association. +// Each association has a limit of 1,000 versions.
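+//
+// As an illustrative sketch (the association ID and schedule below are
+// placeholders), an association's schedule might be updated like this:
+//
+//    out, err := client.UpdateAssociation(&ssm.UpdateAssociationInput{
+//        AssociationId:      aws.String("4e4f5f8b-9f9d-4b0a-9e1a-ff3d1EXAMPLE"),
+//        ScheduleExpression: aws.String("rate(30 minutes)"),
+//    })
+//    if err == nil { // out is now filled
+//        fmt.Println(out)
+//    }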
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateAssociation +func (c *SSM) UpdateAssociation(input *UpdateAssociationInput) (*UpdateAssociationOutput, error) { + req, out := c.UpdateAssociationRequest(input) + return out, req.Send() +} + +// UpdateAssociationWithContext is the same as UpdateAssociation with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAssociation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdateAssociationWithContext(ctx aws.Context, input *UpdateAssociationInput, opts ...request.Option) (*UpdateAssociationOutput, error) { + req, out := c.UpdateAssociationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAssociationStatus = "UpdateAssociationStatus" + +// UpdateAssociationStatusRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAssociationStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAssociationStatus for more information on using the UpdateAssociationStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAssociationStatusRequest method. +// req, resp := client.UpdateAssociationStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateAssociationStatus +func (c *SSM) UpdateAssociationStatusRequest(input *UpdateAssociationStatusInput) (req *request.Request, output *UpdateAssociationStatusOutput) { + op := &request.Operation{ + Name: opUpdateAssociationStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateAssociationStatusInput{} + } + + output = &UpdateAssociationStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateAssociationStatus API operation for Amazon Simple Systems Manager (SSM). +// +// Updates the status of the Systems Manager document associated with the specified +// instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateAssociationStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. 
On managed instances and Linux instances, verify +// that the SSM Agent is running. On EC2 Windows instances, verify that the +// EC2Config service is running. +// +// The SSM Agent or EC2Config service is not registered to the SSM endpoint. +// Try reinstalling the SSM Agent or EC2Config service. +// +// The instance is not in valid state. Valid states are: Running, Pending, Stopped, +// Stopping. Invalid states are: Shutting-down and Terminated. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// * ErrCodeStatusUnchanged "StatusUnchanged" +// The updated status is the same as the current status. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateAssociationStatus +func (c *SSM) UpdateAssociationStatus(input *UpdateAssociationStatusInput) (*UpdateAssociationStatusOutput, error) { + req, out := c.UpdateAssociationStatusRequest(input) + return out, req.Send() +} + +// UpdateAssociationStatusWithContext is the same as UpdateAssociationStatus with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAssociationStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdateAssociationStatusWithContext(ctx aws.Context, input *UpdateAssociationStatusInput, opts ...request.Option) (*UpdateAssociationStatusOutput, error) { + req, out := c.UpdateAssociationStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateDocument = "UpdateDocument" + +// UpdateDocumentRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDocument operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDocument for more information on using the UpdateDocument +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDocumentRequest method. +// req, resp := client.UpdateDocumentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateDocument +func (c *SSM) UpdateDocumentRequest(input *UpdateDocumentInput) (req *request.Request, output *UpdateDocumentOutput) { + op := &request.Operation{ + Name: opUpdateDocument, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateDocumentInput{} + } + + output = &UpdateDocumentOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDocument API operation for Amazon Simple Systems Manager (SSM). 
+// +// The document you want to update. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateDocument for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMaxDocumentSizeExceeded "MaxDocumentSizeExceeded" +// The size limit of a document is 64 KB. +// +// * ErrCodeDocumentVersionLimitExceeded "DocumentVersionLimitExceeded" +// The document has too many versions. Delete one or more document versions +// and try again. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeDuplicateDocumentContent "DuplicateDocumentContent" +// The content of the association document matches another document. Change +// the content of the document and try again. +// +// * ErrCodeInvalidDocumentContent "InvalidDocumentContent" +// The content for the document is not valid. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// * ErrCodeInvalidDocumentSchemaVersion "InvalidDocumentSchemaVersion" +// The version of the document schema is not supported. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateDocument +func (c *SSM) UpdateDocument(input *UpdateDocumentInput) (*UpdateDocumentOutput, error) { + req, out := c.UpdateDocumentRequest(input) + return out, req.Send() +} + +// UpdateDocumentWithContext is the same as UpdateDocument with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDocument for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdateDocumentWithContext(ctx aws.Context, input *UpdateDocumentInput, opts ...request.Option) (*UpdateDocumentOutput, error) { + req, out := c.UpdateDocumentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateDocumentDefaultVersion = "UpdateDocumentDefaultVersion" + +// UpdateDocumentDefaultVersionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDocumentDefaultVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDocumentDefaultVersion for more information on using the UpdateDocumentDefaultVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDocumentDefaultVersionRequest method. 
+// req, resp := client.UpdateDocumentDefaultVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateDocumentDefaultVersion +func (c *SSM) UpdateDocumentDefaultVersionRequest(input *UpdateDocumentDefaultVersionInput) (req *request.Request, output *UpdateDocumentDefaultVersionOutput) { + op := &request.Operation{ + Name: opUpdateDocumentDefaultVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateDocumentDefaultVersionInput{} + } + + output = &UpdateDocumentDefaultVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDocumentDefaultVersion API operation for Amazon Simple Systems Manager (SSM). +// +// Set the default version of a document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateDocumentDefaultVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// +// * ErrCodeInvalidDocumentSchemaVersion "InvalidDocumentSchemaVersion" +// The version of the document schema is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateDocumentDefaultVersion +func (c *SSM) UpdateDocumentDefaultVersion(input *UpdateDocumentDefaultVersionInput) (*UpdateDocumentDefaultVersionOutput, error) { + req, out := c.UpdateDocumentDefaultVersionRequest(input) + return out, req.Send() +} + +// UpdateDocumentDefaultVersionWithContext is the same as UpdateDocumentDefaultVersion with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDocumentDefaultVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdateDocumentDefaultVersionWithContext(ctx aws.Context, input *UpdateDocumentDefaultVersionInput, opts ...request.Option) (*UpdateDocumentDefaultVersionOutput, error) { + req, out := c.UpdateDocumentDefaultVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateMaintenanceWindow = "UpdateMaintenanceWindow" + +// UpdateMaintenanceWindowRequest generates a "aws/request.Request" representing the +// client's request for the UpdateMaintenanceWindow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateMaintenanceWindow for more information on using the UpdateMaintenanceWindow +// API call, and error handling. 
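+//
+// As a minimal sketch (the window ID and schedule below are placeholders), a
+// window's schedule might be changed with a call such as:
+//
+//    out, err := client.UpdateMaintenanceWindow(&ssm.UpdateMaintenanceWindowInput{
+//        WindowId: aws.String("mw-0c50858d01EXAMPLE"),
+//        Schedule: aws.String("cron(0 2 ? * SUN *)"),
+//    })
+//    if err == nil { // out is now filled
+//        fmt.Println(out)
+//    }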
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateMaintenanceWindowRequest method. +// req, resp := client.UpdateMaintenanceWindowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateMaintenanceWindow +func (c *SSM) UpdateMaintenanceWindowRequest(input *UpdateMaintenanceWindowInput) (req *request.Request, output *UpdateMaintenanceWindowOutput) { + op := &request.Operation{ + Name: opUpdateMaintenanceWindow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateMaintenanceWindowInput{} + } + + output = &UpdateMaintenanceWindowOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateMaintenanceWindow API operation for Amazon Simple Systems Manager (SSM). +// +// Updates an existing Maintenance Window. Only specified parameters are modified. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateMaintenanceWindow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateMaintenanceWindow +func (c *SSM) UpdateMaintenanceWindow(input *UpdateMaintenanceWindowInput) (*UpdateMaintenanceWindowOutput, error) { + req, out := c.UpdateMaintenanceWindowRequest(input) + return out, req.Send() +} + +// UpdateMaintenanceWindowWithContext is the same as UpdateMaintenanceWindow with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateMaintenanceWindow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdateMaintenanceWindowWithContext(ctx aws.Context, input *UpdateMaintenanceWindowInput, opts ...request.Option) (*UpdateMaintenanceWindowOutput, error) { + req, out := c.UpdateMaintenanceWindowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateMaintenanceWindowTarget = "UpdateMaintenanceWindowTarget" + +// UpdateMaintenanceWindowTargetRequest generates a "aws/request.Request" representing the +// client's request for the UpdateMaintenanceWindowTarget operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See UpdateMaintenanceWindowTarget for more information on using the UpdateMaintenanceWindowTarget +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateMaintenanceWindowTargetRequest method. +// req, resp := client.UpdateMaintenanceWindowTargetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateMaintenanceWindowTarget +func (c *SSM) UpdateMaintenanceWindowTargetRequest(input *UpdateMaintenanceWindowTargetInput) (req *request.Request, output *UpdateMaintenanceWindowTargetOutput) { + op := &request.Operation{ + Name: opUpdateMaintenanceWindowTarget, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateMaintenanceWindowTargetInput{} + } + + output = &UpdateMaintenanceWindowTargetOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateMaintenanceWindowTarget API operation for Amazon Simple Systems Manager (SSM). +// +// Modifies the target of an existing Maintenance Window. You can't change the +// target type, but you can change the following: +// +// The target from being an ID target to a Tag target, or a Tag target to an +// ID target. +// +// IDs for an ID target. +// +// Tags for a Tag target. +// +// Owner. +// +// Name. +// +// Description. +// +// If a parameter is null, then the corresponding field is not modified. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateMaintenanceWindowTarget for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateMaintenanceWindowTarget +func (c *SSM) UpdateMaintenanceWindowTarget(input *UpdateMaintenanceWindowTargetInput) (*UpdateMaintenanceWindowTargetOutput, error) { + req, out := c.UpdateMaintenanceWindowTargetRequest(input) + return out, req.Send() +} + +// UpdateMaintenanceWindowTargetWithContext is the same as UpdateMaintenanceWindowTarget with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateMaintenanceWindowTarget for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) UpdateMaintenanceWindowTargetWithContext(ctx aws.Context, input *UpdateMaintenanceWindowTargetInput, opts ...request.Option) (*UpdateMaintenanceWindowTargetOutput, error) { + req, out := c.UpdateMaintenanceWindowTargetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateMaintenanceWindowTask = "UpdateMaintenanceWindowTask" + +// UpdateMaintenanceWindowTaskRequest generates a "aws/request.Request" representing the +// client's request for the UpdateMaintenanceWindowTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateMaintenanceWindowTask for more information on using the UpdateMaintenanceWindowTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateMaintenanceWindowTaskRequest method. +// req, resp := client.UpdateMaintenanceWindowTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateMaintenanceWindowTask +func (c *SSM) UpdateMaintenanceWindowTaskRequest(input *UpdateMaintenanceWindowTaskInput) (req *request.Request, output *UpdateMaintenanceWindowTaskOutput) { + op := &request.Operation{ + Name: opUpdateMaintenanceWindowTask, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateMaintenanceWindowTaskInput{} + } + + output = &UpdateMaintenanceWindowTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateMaintenanceWindowTask API operation for Amazon Simple Systems Manager (SSM). +// +// Modifies a task assigned to a Maintenance Window. You can't change the task +// type, but you can change the following values: +// +// * TaskARN. For example, you can change a RUN_COMMAND task from AWS-RunPowerShellScript +// to AWS-RunShellScript. +// +// * ServiceRoleArn +// +// * TaskInvocationParameters +// +// * Priority +// +// * MaxConcurrency +// +// * MaxErrors +// +// If a parameter is null, then the corresponding field is not modified. Also, +// if you set Replace to true, then all fields required by the RegisterTaskWithMaintenanceWindow +// action are required for this request. Optional fields that aren't specified +// are set to null. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateMaintenanceWindowTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
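+//
+// As a minimal sketch (the IDs below are placeholders), a task's priority and
+// concurrency might be adjusted without modifying its other fields:
+//
+//    out, err := client.UpdateMaintenanceWindowTask(&ssm.UpdateMaintenanceWindowTaskInput{
+//        WindowId:       aws.String("mw-0c50858d01EXAMPLE"),
+//        WindowTaskId:   aws.String("1a2b3c4d-5e6f-7a8b-9c0d-1e2f3EXAMPLE"),
+//        Priority:       aws.Int64(1),
+//        MaxConcurrency: aws.String("5"),
+//    })
+//    if err == nil { // out is now filled
+//        fmt.Println(out)
+//    }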
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateMaintenanceWindowTask +func (c *SSM) UpdateMaintenanceWindowTask(input *UpdateMaintenanceWindowTaskInput) (*UpdateMaintenanceWindowTaskOutput, error) { + req, out := c.UpdateMaintenanceWindowTaskRequest(input) + return out, req.Send() +} + +// UpdateMaintenanceWindowTaskWithContext is the same as UpdateMaintenanceWindowTask with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateMaintenanceWindowTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdateMaintenanceWindowTaskWithContext(ctx aws.Context, input *UpdateMaintenanceWindowTaskInput, opts ...request.Option) (*UpdateMaintenanceWindowTaskOutput, error) { + req, out := c.UpdateMaintenanceWindowTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateManagedInstanceRole = "UpdateManagedInstanceRole" + +// UpdateManagedInstanceRoleRequest generates a "aws/request.Request" representing the +// client's request for the UpdateManagedInstanceRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateManagedInstanceRole for more information on using the UpdateManagedInstanceRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateManagedInstanceRoleRequest method. +// req, resp := client.UpdateManagedInstanceRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateManagedInstanceRole +func (c *SSM) UpdateManagedInstanceRoleRequest(input *UpdateManagedInstanceRoleInput) (req *request.Request, output *UpdateManagedInstanceRoleOutput) { + op := &request.Operation{ + Name: opUpdateManagedInstanceRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateManagedInstanceRoleInput{} + } + + output = &UpdateManagedInstanceRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateManagedInstanceRole API operation for Amazon Simple Systems Manager (SSM). +// +// Assigns or changes an Amazon Identity and Access Management (IAM) role to +// the managed instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdateManagedInstanceRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInstanceId "InvalidInstanceId" +// The following problems can cause this exception: +// +// You do not have permission to access the instance. +// +// The SSM Agent is not running. 
+//   that the SSM Agent is running. On EC2 Windows instances, verify that the
+//   EC2Config service is running.
+//
+//   The SSM Agent or EC2Config service is not registered to the SSM endpoint.
+//   Try reinstalling the SSM Agent or EC2Config service.
+//
+//   The instance is not in a valid state. Valid states are: Running, Pending,
+//   Stopped, Stopping. Invalid states are: Shutting-down and Terminated.
+//
+//   * ErrCodeInternalServerError "InternalServerError"
+//   An error occurred on the server side.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdateManagedInstanceRole
+func (c *SSM) UpdateManagedInstanceRole(input *UpdateManagedInstanceRoleInput) (*UpdateManagedInstanceRoleOutput, error) {
+	req, out := c.UpdateManagedInstanceRoleRequest(input)
+	return out, req.Send()
+}
+
+// UpdateManagedInstanceRoleWithContext is the same as UpdateManagedInstanceRole with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateManagedInstanceRole for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *SSM) UpdateManagedInstanceRoleWithContext(ctx aws.Context, input *UpdateManagedInstanceRoleInput, opts ...request.Option) (*UpdateManagedInstanceRoleOutput, error) {
+	req, out := c.UpdateManagedInstanceRoleRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opUpdatePatchBaseline = "UpdatePatchBaseline"
+
+// UpdatePatchBaselineRequest generates a "aws/request.Request" representing the
+// client's request for the UpdatePatchBaseline operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use the "Send" method on the returned Request to send the API call to the service.
+// The "output" return value is not valid until after Send returns without error.
+//
+// See UpdatePatchBaseline for more information on using the UpdatePatchBaseline
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle, such as custom headers or retry logic.
+//
+//
+//    // Example sending a request using the UpdatePatchBaselineRequest method.
+//    req, resp := client.UpdatePatchBaselineRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdatePatchBaseline
+func (c *SSM) UpdatePatchBaselineRequest(input *UpdatePatchBaselineInput) (req *request.Request, output *UpdatePatchBaselineOutput) {
+	op := &request.Operation{
+		Name:       opUpdatePatchBaseline,
+		HTTPMethod: "POST",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &UpdatePatchBaselineInput{}
+	}
+
+	output = &UpdatePatchBaselineOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// UpdatePatchBaseline API operation for Amazon Simple Systems Manager (SSM).
+//
+// Modifies an existing patch baseline. Fields not specified in the request
+// are left unchanged.
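+//
+// The following sketch is illustrative only and is not part of the generated
+// documentation. It shows a minimal call that renames an existing baseline;
+// the baseline ID is a placeholder and the field names (BaselineId, Name) are
+// assumed from UpdatePatchBaselineInput.
+//
+//    // svc is assumed to be an *ssm.SSM client created with ssm.New(sess).
+//    out, err := svc.UpdatePatchBaseline(&ssm.UpdatePatchBaselineInput{
+//        BaselineId: aws.String("pb-0123456789abcdef0"),
+//        Name:       aws.String("MyUpdatedBaseline"),
+//    })
+//    if err != nil {
+//        // handle error
+//    }
+//    fmt.Println(out)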
+// +// For information about valid key and value pairs in PatchFilters for each +// supported operating system type, see PatchFilter (http://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation UpdatePatchBaseline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/UpdatePatchBaseline +func (c *SSM) UpdatePatchBaseline(input *UpdatePatchBaselineInput) (*UpdatePatchBaselineOutput, error) { + req, out := c.UpdatePatchBaselineRequest(input) + return out, req.Send() +} + +// UpdatePatchBaselineWithContext is the same as UpdatePatchBaseline with the addition of +// the ability to pass a context and additional request options. +// +// See UpdatePatchBaseline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) UpdatePatchBaselineWithContext(ctx aws.Context, input *UpdatePatchBaselineInput, opts ...request.Option) (*UpdatePatchBaselineOutput, error) { + req, out := c.UpdatePatchBaselineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// An activation registers one or more on-premises servers or virtual machines +// (VMs) with AWS so that you can configure those servers or VMs using Run Command. +// A server or VM that has been registered with AWS is called a managed instance. +type Activation struct { + _ struct{} `type:"structure"` + + // The ID created by Systems Manager when you submitted the activation. + ActivationId *string `type:"string"` + + // The date the activation was created. + CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // A name for the managed instance when it is created. + DefaultInstanceName *string `type:"string"` + + // A user defined description of the activation. + Description *string `type:"string"` + + // The date when this activation can no longer be used to register managed instances. + ExpirationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Whether or not the activation is expired. + Expired *bool `type:"boolean"` + + // The Amazon Identity and Access Management (IAM) role to assign to the managed + // instance. + IamRole *string `type:"string"` + + // The maximum number of managed instances that can be registered using this + // activation. + RegistrationLimit *int64 `min:"1" type:"integer"` + + // The number of managed instances already registered with this activation. 
+	RegistrationsCount *int64 `min:"1" type:"integer"`
+}
+
+// String returns the string representation
+func (s Activation) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Activation) GoString() string {
+	return s.String()
+}
+
+// SetActivationId sets the ActivationId field's value.
+func (s *Activation) SetActivationId(v string) *Activation {
+	s.ActivationId = &v
+	return s
+}
+
+// SetCreatedDate sets the CreatedDate field's value.
+func (s *Activation) SetCreatedDate(v time.Time) *Activation {
+	s.CreatedDate = &v
+	return s
+}
+
+// SetDefaultInstanceName sets the DefaultInstanceName field's value.
+func (s *Activation) SetDefaultInstanceName(v string) *Activation {
+	s.DefaultInstanceName = &v
+	return s
+}
+
+// SetDescription sets the Description field's value.
+func (s *Activation) SetDescription(v string) *Activation {
+	s.Description = &v
+	return s
+}
+
+// SetExpirationDate sets the ExpirationDate field's value.
+func (s *Activation) SetExpirationDate(v time.Time) *Activation {
+	s.ExpirationDate = &v
+	return s
+}
+
+// SetExpired sets the Expired field's value.
+func (s *Activation) SetExpired(v bool) *Activation {
+	s.Expired = &v
+	return s
+}
+
+// SetIamRole sets the IamRole field's value.
+func (s *Activation) SetIamRole(v string) *Activation {
+	s.IamRole = &v
+	return s
+}
+
+// SetRegistrationLimit sets the RegistrationLimit field's value.
+func (s *Activation) SetRegistrationLimit(v int64) *Activation {
+	s.RegistrationLimit = &v
+	return s
+}
+
+// SetRegistrationsCount sets the RegistrationsCount field's value.
+func (s *Activation) SetRegistrationsCount(v int64) *Activation {
+	s.RegistrationsCount = &v
+	return s
+}
+
+type AddTagsToResourceInput struct {
+	_ struct{} `type:"structure"`
+
+	// The resource ID you want to tag.
+	//
+	// Use the ID of the resource. Here are some examples:
+	//
+	// ManagedInstance: mi-012345abcde
+	//
+	// MaintenanceWindow: mw-012345abcde
+	//
+	// PatchBaseline: pb-012345abcde
+	//
+	// For the Document and Parameter values, use the name of the resource.
+	//
+	// The ManagedInstance type for this API action is only for on-premises managed
+	// instances. You must specify the name of the managed instance in the following
+	// format: mi-ID_number. For example, mi-1a2b3c4d5e6f.
+	//
+	// ResourceId is a required field
+	ResourceId *string `type:"string" required:"true"`
+
+	// Specifies the type of resource you are tagging.
+	//
+	// The ManagedInstance type for this API action is for on-premises managed instances.
+	// You must specify the name of the managed instance in the following format:
+	// mi-ID_number. For example, mi-1a2b3c4d5e6f.
+	//
+	// ResourceType is a required field
+	ResourceType *string `type:"string" required:"true" enum:"ResourceTypeForTagging"`
+
+	// One or more tags. The value parameter is required, but if you don't want
+	// the tag to have a value, specify the parameter with no value, and we set
+	// the value to an empty string.
+	//
+	// Do not enter personally identifiable information in this field.
+	//
+	// Tags is a required field
+	Tags []*Tag `type:"list" required:"true"`
+}
+
+// String returns the string representation
+func (s AddTagsToResourceInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AddTagsToResourceInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *AddTagsToResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddTagsToResourceInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *AddTagsToResourceInput) SetResourceId(v string) *AddTagsToResourceInput { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *AddTagsToResourceInput) SetResourceType(v string) *AddTagsToResourceInput { + s.ResourceType = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *AddTagsToResourceInput) SetTags(v []*Tag) *AddTagsToResourceInput { + s.Tags = v + return s +} + +type AddTagsToResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddTagsToResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsToResourceOutput) GoString() string { + return s.String() +} + +// Describes an association of a Systems Manager document and an instance. +type Association struct { + _ struct{} `type:"structure"` + + // The ID created by the system when you create an association. An association + // is a binding between a document and a set of targets with a schedule. + AssociationId *string `type:"string"` + + // The association name. + AssociationName *string `type:"string"` + + // The association version. + AssociationVersion *string `type:"string"` + + // The version of the document used in the association. + DocumentVersion *string `type:"string"` + + // The ID of the instance. + InstanceId *string `type:"string"` + + // The date on which the association was last run. + LastExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The name of the Systems Manager document. + Name *string `type:"string"` + + // Information about the association. + Overview *AssociationOverview `type:"structure"` + + // A cron expression that specifies a schedule when the association runs. + ScheduleExpression *string `min:"1" type:"string"` + + // The instances targeted by the request to create an association. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s Association) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Association) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *Association) SetAssociationId(v string) *Association { + s.AssociationId = &v + return s +} + +// SetAssociationName sets the AssociationName field's value. +func (s *Association) SetAssociationName(v string) *Association { + s.AssociationName = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *Association) SetAssociationVersion(v string) *Association { + s.AssociationVersion = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. 
+func (s *Association) SetDocumentVersion(v string) *Association { + s.DocumentVersion = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *Association) SetInstanceId(v string) *Association { + s.InstanceId = &v + return s +} + +// SetLastExecutionDate sets the LastExecutionDate field's value. +func (s *Association) SetLastExecutionDate(v time.Time) *Association { + s.LastExecutionDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *Association) SetName(v string) *Association { + s.Name = &v + return s +} + +// SetOverview sets the Overview field's value. +func (s *Association) SetOverview(v *AssociationOverview) *Association { + s.Overview = v + return s +} + +// SetScheduleExpression sets the ScheduleExpression field's value. +func (s *Association) SetScheduleExpression(v string) *Association { + s.ScheduleExpression = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *Association) SetTargets(v []*Target) *Association { + s.Targets = v + return s +} + +// Describes the parameters for a document. +type AssociationDescription struct { + _ struct{} `type:"structure"` + + // The association ID. + AssociationId *string `type:"string"` + + // The association name. + AssociationName *string `type:"string"` + + // The association version. + AssociationVersion *string `type:"string"` + + // The date when the association was made. + Date *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The document version. + DocumentVersion *string `type:"string"` + + // The ID of the instance. + InstanceId *string `type:"string"` + + // The date on which the association was last run. + LastExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The last date on which the association was successfully run. + LastSuccessfulExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The date when the association was last updated. + LastUpdateAssociationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The name of the Systems Manager document. + Name *string `type:"string"` + + // An Amazon S3 bucket where you want to store the output details of the request. + OutputLocation *InstanceAssociationOutputLocation `type:"structure"` + + // Information about the association. + Overview *AssociationOverview `type:"structure"` + + // A description of the parameters for a document. + Parameters map[string][]*string `type:"map"` + + // A cron expression that specifies a schedule when the association runs. + ScheduleExpression *string `min:"1" type:"string"` + + // The association status. + Status *AssociationStatus `type:"structure"` + + // The instances targeted by the request. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s AssociationDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationDescription) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *AssociationDescription) SetAssociationId(v string) *AssociationDescription { + s.AssociationId = &v + return s +} + +// SetAssociationName sets the AssociationName field's value. +func (s *AssociationDescription) SetAssociationName(v string) *AssociationDescription { + s.AssociationName = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. 
+func (s *AssociationDescription) SetAssociationVersion(v string) *AssociationDescription { + s.AssociationVersion = &v + return s +} + +// SetDate sets the Date field's value. +func (s *AssociationDescription) SetDate(v time.Time) *AssociationDescription { + s.Date = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *AssociationDescription) SetDocumentVersion(v string) *AssociationDescription { + s.DocumentVersion = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *AssociationDescription) SetInstanceId(v string) *AssociationDescription { + s.InstanceId = &v + return s +} + +// SetLastExecutionDate sets the LastExecutionDate field's value. +func (s *AssociationDescription) SetLastExecutionDate(v time.Time) *AssociationDescription { + s.LastExecutionDate = &v + return s +} + +// SetLastSuccessfulExecutionDate sets the LastSuccessfulExecutionDate field's value. +func (s *AssociationDescription) SetLastSuccessfulExecutionDate(v time.Time) *AssociationDescription { + s.LastSuccessfulExecutionDate = &v + return s +} + +// SetLastUpdateAssociationDate sets the LastUpdateAssociationDate field's value. +func (s *AssociationDescription) SetLastUpdateAssociationDate(v time.Time) *AssociationDescription { + s.LastUpdateAssociationDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *AssociationDescription) SetName(v string) *AssociationDescription { + s.Name = &v + return s +} + +// SetOutputLocation sets the OutputLocation field's value. +func (s *AssociationDescription) SetOutputLocation(v *InstanceAssociationOutputLocation) *AssociationDescription { + s.OutputLocation = v + return s +} + +// SetOverview sets the Overview field's value. +func (s *AssociationDescription) SetOverview(v *AssociationOverview) *AssociationDescription { + s.Overview = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *AssociationDescription) SetParameters(v map[string][]*string) *AssociationDescription { + s.Parameters = v + return s +} + +// SetScheduleExpression sets the ScheduleExpression field's value. +func (s *AssociationDescription) SetScheduleExpression(v string) *AssociationDescription { + s.ScheduleExpression = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AssociationDescription) SetStatus(v *AssociationStatus) *AssociationDescription { + s.Status = v + return s +} + +// SetTargets sets the Targets field's value. +func (s *AssociationDescription) SetTargets(v []*Target) *AssociationDescription { + s.Targets = v + return s +} + +// Describes a filter. +type AssociationFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true" enum:"AssociationFilterKey"` + + // The filter value. + // + // Value is a required field + Value *string `locationName:"value" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociationFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AssociationFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociationFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *AssociationFilter) SetKey(v string) *AssociationFilter { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *AssociationFilter) SetValue(v string) *AssociationFilter { + s.Value = &v + return s +} + +// Information about the association. +type AssociationOverview struct { + _ struct{} `type:"structure"` + + // Returns the number of targets for the association status. For example, if + // you created an association with two instances, and one of them was successful, + // this would return the count of instances by status. + AssociationStatusAggregatedCount map[string]*int64 `type:"map"` + + // A detailed status of the association. + DetailedStatus *string `type:"string"` + + // The status of the association. Status can be: Pending, Success, or Failed. + Status *string `type:"string"` +} + +// String returns the string representation +func (s AssociationOverview) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationOverview) GoString() string { + return s.String() +} + +// SetAssociationStatusAggregatedCount sets the AssociationStatusAggregatedCount field's value. +func (s *AssociationOverview) SetAssociationStatusAggregatedCount(v map[string]*int64) *AssociationOverview { + s.AssociationStatusAggregatedCount = v + return s +} + +// SetDetailedStatus sets the DetailedStatus field's value. +func (s *AssociationOverview) SetDetailedStatus(v string) *AssociationOverview { + s.DetailedStatus = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AssociationOverview) SetStatus(v string) *AssociationOverview { + s.Status = &v + return s +} + +// Describes an association status. +type AssociationStatus struct { + _ struct{} `type:"structure"` + + // A user-defined string. + AdditionalInfo *string `type:"string"` + + // The date when the status changed. + // + // Date is a required field + Date *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // The reason for the status. + // + // Message is a required field + Message *string `min:"1" type:"string" required:"true"` + + // The status. + // + // Name is a required field + Name *string `type:"string" required:"true" enum:"AssociationStatusName"` +} + +// String returns the string representation +func (s AssociationStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationStatus) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AssociationStatus) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociationStatus"} + if s.Date == nil { + invalidParams.Add(request.NewErrParamRequired("Date")) + } + if s.Message == nil { + invalidParams.Add(request.NewErrParamRequired("Message")) + } + if s.Message != nil && len(*s.Message) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Message", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAdditionalInfo sets the AdditionalInfo field's value. +func (s *AssociationStatus) SetAdditionalInfo(v string) *AssociationStatus { + s.AdditionalInfo = &v + return s +} + +// SetDate sets the Date field's value. +func (s *AssociationStatus) SetDate(v time.Time) *AssociationStatus { + s.Date = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *AssociationStatus) SetMessage(v string) *AssociationStatus { + s.Message = &v + return s +} + +// SetName sets the Name field's value. +func (s *AssociationStatus) SetName(v string) *AssociationStatus { + s.Name = &v + return s +} + +// Information about the association version. +type AssociationVersionInfo struct { + _ struct{} `type:"structure"` + + // The ID created by the system when the association was created. + AssociationId *string `type:"string"` + + // The name specified for the association version when the association version + // was created. + AssociationName *string `type:"string"` + + // The association version. + AssociationVersion *string `type:"string"` + + // The date the association version was created. + CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The version of a Systems Manager document used when the association version + // was created. + DocumentVersion *string `type:"string"` + + // The name specified when the association was created. + Name *string `type:"string"` + + // The location in Amazon S3 specified for the association when the association + // version was created. + OutputLocation *InstanceAssociationOutputLocation `type:"structure"` + + // Parameters specified when the association version was created. + Parameters map[string][]*string `type:"map"` + + // The cron or rate schedule specified for the association when the association + // version was created. + ScheduleExpression *string `min:"1" type:"string"` + + // The targets specified for the association when the association version was + // created. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s AssociationVersionInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationVersionInfo) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *AssociationVersionInfo) SetAssociationId(v string) *AssociationVersionInfo { + s.AssociationId = &v + return s +} + +// SetAssociationName sets the AssociationName field's value. +func (s *AssociationVersionInfo) SetAssociationName(v string) *AssociationVersionInfo { + s.AssociationName = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *AssociationVersionInfo) SetAssociationVersion(v string) *AssociationVersionInfo { + s.AssociationVersion = &v + return s +} + +// SetCreatedDate sets the CreatedDate field's value. 
+func (s *AssociationVersionInfo) SetCreatedDate(v time.Time) *AssociationVersionInfo { + s.CreatedDate = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *AssociationVersionInfo) SetDocumentVersion(v string) *AssociationVersionInfo { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *AssociationVersionInfo) SetName(v string) *AssociationVersionInfo { + s.Name = &v + return s +} + +// SetOutputLocation sets the OutputLocation field's value. +func (s *AssociationVersionInfo) SetOutputLocation(v *InstanceAssociationOutputLocation) *AssociationVersionInfo { + s.OutputLocation = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *AssociationVersionInfo) SetParameters(v map[string][]*string) *AssociationVersionInfo { + s.Parameters = v + return s +} + +// SetScheduleExpression sets the ScheduleExpression field's value. +func (s *AssociationVersionInfo) SetScheduleExpression(v string) *AssociationVersionInfo { + s.ScheduleExpression = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *AssociationVersionInfo) SetTargets(v []*Target) *AssociationVersionInfo { + s.Targets = v + return s +} + +// Detailed information about the current state of an individual Automation +// execution. +type AutomationExecution struct { + _ struct{} `type:"structure"` + + // The execution ID. + AutomationExecutionId *string `min:"36" type:"string"` + + // The execution status of the Automation. + AutomationExecutionStatus *string `type:"string" enum:"AutomationExecutionStatus"` + + // The action of the currently executing step. + CurrentAction *string `type:"string"` + + // The name of the currently executing step. + CurrentStepName *string `type:"string"` + + // The name of the Automation document used during the execution. + DocumentName *string `type:"string"` + + // The version of the document to use during execution. + DocumentVersion *string `type:"string"` + + // The Amazon Resource Name (ARN) of the user who executed the automation. + ExecutedBy *string `type:"string"` + + // The time the execution finished. + ExecutionEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time the execution started. + ExecutionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // A message describing why an execution has failed, if the status is set to + // Failed. + FailureMessage *string `type:"string"` + + // The MaxConcurrency value specified by the user when the execution started. + MaxConcurrency *string `min:"1" type:"string"` + + // The MaxErrors value specified by the user when the execution started. + MaxErrors *string `min:"1" type:"string"` + + // The automation execution mode. + Mode *string `type:"string" enum:"ExecutionMode"` + + // The list of execution outputs as defined in the automation document. + Outputs map[string][]*string `min:"1" type:"map"` + + // The key-value map of execution parameters, which were supplied when calling + // StartAutomationExecution. + Parameters map[string][]*string `min:"1" type:"map"` + + // The AutomationExecutionId of the parent automation. + ParentAutomationExecutionId *string `min:"36" type:"string"` + + // A list of resolved targets in the rate control execution. + ResolvedTargets *ResolvedTargets `type:"structure"` + + // A list of details about the current state of all steps that comprise an execution. + // An Automation document contains a list of steps that are executed in order. 
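+	//
+	// Illustrative sketch only (not generated documentation): assuming exec is
+	// an *ssm.AutomationExecution returned by GetAutomationExecution, the step
+	// results could be inspected roughly like this:
+	//
+	//    for _, step := range exec.StepExecutions {
+	//        fmt.Println(aws.StringValue(step.StepName), aws.StringValue(step.StepStatus))
+	//    }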
+ StepExecutions []*StepExecution `type:"list"` + + // A boolean value that indicates if the response contains the full list of + // the Automation step executions. If true, use the DescribeAutomationStepExecutions + // API action to get the full list of step executions. + StepExecutionsTruncated *bool `type:"boolean"` + + // The target of the execution. + Target *string `type:"string"` + + // The parameter name. + TargetParameterName *string `min:"1" type:"string"` + + // The specified targets. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s AutomationExecution) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AutomationExecution) GoString() string { + return s.String() +} + +// SetAutomationExecutionId sets the AutomationExecutionId field's value. +func (s *AutomationExecution) SetAutomationExecutionId(v string) *AutomationExecution { + s.AutomationExecutionId = &v + return s +} + +// SetAutomationExecutionStatus sets the AutomationExecutionStatus field's value. +func (s *AutomationExecution) SetAutomationExecutionStatus(v string) *AutomationExecution { + s.AutomationExecutionStatus = &v + return s +} + +// SetCurrentAction sets the CurrentAction field's value. +func (s *AutomationExecution) SetCurrentAction(v string) *AutomationExecution { + s.CurrentAction = &v + return s +} + +// SetCurrentStepName sets the CurrentStepName field's value. +func (s *AutomationExecution) SetCurrentStepName(v string) *AutomationExecution { + s.CurrentStepName = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *AutomationExecution) SetDocumentName(v string) *AutomationExecution { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *AutomationExecution) SetDocumentVersion(v string) *AutomationExecution { + s.DocumentVersion = &v + return s +} + +// SetExecutedBy sets the ExecutedBy field's value. +func (s *AutomationExecution) SetExecutedBy(v string) *AutomationExecution { + s.ExecutedBy = &v + return s +} + +// SetExecutionEndTime sets the ExecutionEndTime field's value. +func (s *AutomationExecution) SetExecutionEndTime(v time.Time) *AutomationExecution { + s.ExecutionEndTime = &v + return s +} + +// SetExecutionStartTime sets the ExecutionStartTime field's value. +func (s *AutomationExecution) SetExecutionStartTime(v time.Time) *AutomationExecution { + s.ExecutionStartTime = &v + return s +} + +// SetFailureMessage sets the FailureMessage field's value. +func (s *AutomationExecution) SetFailureMessage(v string) *AutomationExecution { + s.FailureMessage = &v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *AutomationExecution) SetMaxConcurrency(v string) *AutomationExecution { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *AutomationExecution) SetMaxErrors(v string) *AutomationExecution { + s.MaxErrors = &v + return s +} + +// SetMode sets the Mode field's value. +func (s *AutomationExecution) SetMode(v string) *AutomationExecution { + s.Mode = &v + return s +} + +// SetOutputs sets the Outputs field's value. +func (s *AutomationExecution) SetOutputs(v map[string][]*string) *AutomationExecution { + s.Outputs = v + return s +} + +// SetParameters sets the Parameters field's value. 
+func (s *AutomationExecution) SetParameters(v map[string][]*string) *AutomationExecution { + s.Parameters = v + return s +} + +// SetParentAutomationExecutionId sets the ParentAutomationExecutionId field's value. +func (s *AutomationExecution) SetParentAutomationExecutionId(v string) *AutomationExecution { + s.ParentAutomationExecutionId = &v + return s +} + +// SetResolvedTargets sets the ResolvedTargets field's value. +func (s *AutomationExecution) SetResolvedTargets(v *ResolvedTargets) *AutomationExecution { + s.ResolvedTargets = v + return s +} + +// SetStepExecutions sets the StepExecutions field's value. +func (s *AutomationExecution) SetStepExecutions(v []*StepExecution) *AutomationExecution { + s.StepExecutions = v + return s +} + +// SetStepExecutionsTruncated sets the StepExecutionsTruncated field's value. +func (s *AutomationExecution) SetStepExecutionsTruncated(v bool) *AutomationExecution { + s.StepExecutionsTruncated = &v + return s +} + +// SetTarget sets the Target field's value. +func (s *AutomationExecution) SetTarget(v string) *AutomationExecution { + s.Target = &v + return s +} + +// SetTargetParameterName sets the TargetParameterName field's value. +func (s *AutomationExecution) SetTargetParameterName(v string) *AutomationExecution { + s.TargetParameterName = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *AutomationExecution) SetTargets(v []*Target) *AutomationExecution { + s.Targets = v + return s +} + +// A filter used to match specific automation executions. This is used to limit +// the scope of Automation execution information returned. +type AutomationExecutionFilter struct { + _ struct{} `type:"structure"` + + // One or more keys to limit the results. Valid filter keys include the following: + // DocumentNamePrefix, ExecutionStatus, ExecutionId, ParentExecutionId, CurrentAction, + // StartTimeBefore, StartTimeAfter. + // + // Key is a required field + Key *string `type:"string" required:"true" enum:"AutomationExecutionFilterKey"` + + // The values used to limit the execution information associated with the filter's + // key. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s AutomationExecutionFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AutomationExecutionFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AutomationExecutionFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AutomationExecutionFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *AutomationExecutionFilter) SetKey(v string) *AutomationExecutionFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *AutomationExecutionFilter) SetValues(v []*string) *AutomationExecutionFilter { + s.Values = v + return s +} + +// Details about a specific Automation execution. +type AutomationExecutionMetadata struct { + _ struct{} `type:"structure"` + + // The execution ID. 
+	AutomationExecutionId *string `min:"36" type:"string"`
+
+	// The status of the execution. Valid values include: Running, Succeeded, Failed,
+	// Timed out, or Cancelled.
+	AutomationExecutionStatus *string `type:"string" enum:"AutomationExecutionStatus"`
+
+	// The action of the currently executing step.
+	CurrentAction *string `type:"string"`
+
+	// The name of the currently executing step.
+	CurrentStepName *string `type:"string"`
+
+	// The name of the Automation document used during execution.
+	DocumentName *string `type:"string"`
+
+	// The document version used during the execution.
+	DocumentVersion *string `type:"string"`
+
+	// The IAM role ARN of the user who executed the Automation.
+	ExecutedBy *string `type:"string"`
+
+	// The time the execution finished. This is not populated if the execution is
+	// still in progress.
+	ExecutionEndTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+	// The time the execution started.
+	ExecutionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+	// A message describing why an execution has failed, if the status is set to
+	// Failed.
+	FailureMessage *string `type:"string"`
+
+	// An Amazon S3 bucket where execution information is stored.
+	LogFile *string `type:"string"`
+
+	// The MaxConcurrency value specified by the user when starting the Automation.
+	MaxConcurrency *string `min:"1" type:"string"`
+
+	// The MaxErrors value specified by the user when starting the Automation.
+	MaxErrors *string `min:"1" type:"string"`
+
+	// The Automation execution mode.
+	Mode *string `type:"string" enum:"ExecutionMode"`
+
+	// The list of execution outputs as defined in the Automation document.
+	Outputs map[string][]*string `min:"1" type:"map"`
+
+	// The ExecutionId of the parent Automation.
+	ParentAutomationExecutionId *string `min:"36" type:"string"`
+
+	// A list of targets that resolved during the execution.
+	ResolvedTargets *ResolvedTargets `type:"structure"`
+
+	// The target of the execution.
+	Target *string `type:"string"`
+
+	// The parameter name.
+	TargetParameterName *string `min:"1" type:"string"`
+
+	// The targets defined by the user when starting the Automation.
+	Targets []*Target `type:"list"`
+}
+
+// String returns the string representation
+func (s AutomationExecutionMetadata) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AutomationExecutionMetadata) GoString() string {
+	return s.String()
+}
+
+// SetAutomationExecutionId sets the AutomationExecutionId field's value.
+func (s *AutomationExecutionMetadata) SetAutomationExecutionId(v string) *AutomationExecutionMetadata {
+	s.AutomationExecutionId = &v
+	return s
+}
+
+// SetAutomationExecutionStatus sets the AutomationExecutionStatus field's value.
+func (s *AutomationExecutionMetadata) SetAutomationExecutionStatus(v string) *AutomationExecutionMetadata {
+	s.AutomationExecutionStatus = &v
+	return s
+}
+
+// SetCurrentAction sets the CurrentAction field's value.
+func (s *AutomationExecutionMetadata) SetCurrentAction(v string) *AutomationExecutionMetadata {
+	s.CurrentAction = &v
+	return s
+}
+
+// SetCurrentStepName sets the CurrentStepName field's value.
+func (s *AutomationExecutionMetadata) SetCurrentStepName(v string) *AutomationExecutionMetadata {
+	s.CurrentStepName = &v
+	return s
+}
+
+// SetDocumentName sets the DocumentName field's value.
+func (s *AutomationExecutionMetadata) SetDocumentName(v string) *AutomationExecutionMetadata { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *AutomationExecutionMetadata) SetDocumentVersion(v string) *AutomationExecutionMetadata { + s.DocumentVersion = &v + return s +} + +// SetExecutedBy sets the ExecutedBy field's value. +func (s *AutomationExecutionMetadata) SetExecutedBy(v string) *AutomationExecutionMetadata { + s.ExecutedBy = &v + return s +} + +// SetExecutionEndTime sets the ExecutionEndTime field's value. +func (s *AutomationExecutionMetadata) SetExecutionEndTime(v time.Time) *AutomationExecutionMetadata { + s.ExecutionEndTime = &v + return s +} + +// SetExecutionStartTime sets the ExecutionStartTime field's value. +func (s *AutomationExecutionMetadata) SetExecutionStartTime(v time.Time) *AutomationExecutionMetadata { + s.ExecutionStartTime = &v + return s +} + +// SetFailureMessage sets the FailureMessage field's value. +func (s *AutomationExecutionMetadata) SetFailureMessage(v string) *AutomationExecutionMetadata { + s.FailureMessage = &v + return s +} + +// SetLogFile sets the LogFile field's value. +func (s *AutomationExecutionMetadata) SetLogFile(v string) *AutomationExecutionMetadata { + s.LogFile = &v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *AutomationExecutionMetadata) SetMaxConcurrency(v string) *AutomationExecutionMetadata { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *AutomationExecutionMetadata) SetMaxErrors(v string) *AutomationExecutionMetadata { + s.MaxErrors = &v + return s +} + +// SetMode sets the Mode field's value. +func (s *AutomationExecutionMetadata) SetMode(v string) *AutomationExecutionMetadata { + s.Mode = &v + return s +} + +// SetOutputs sets the Outputs field's value. +func (s *AutomationExecutionMetadata) SetOutputs(v map[string][]*string) *AutomationExecutionMetadata { + s.Outputs = v + return s +} + +// SetParentAutomationExecutionId sets the ParentAutomationExecutionId field's value. +func (s *AutomationExecutionMetadata) SetParentAutomationExecutionId(v string) *AutomationExecutionMetadata { + s.ParentAutomationExecutionId = &v + return s +} + +// SetResolvedTargets sets the ResolvedTargets field's value. +func (s *AutomationExecutionMetadata) SetResolvedTargets(v *ResolvedTargets) *AutomationExecutionMetadata { + s.ResolvedTargets = v + return s +} + +// SetTarget sets the Target field's value. +func (s *AutomationExecutionMetadata) SetTarget(v string) *AutomationExecutionMetadata { + s.Target = &v + return s +} + +// SetTargetParameterName sets the TargetParameterName field's value. +func (s *AutomationExecutionMetadata) SetTargetParameterName(v string) *AutomationExecutionMetadata { + s.TargetParameterName = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *AutomationExecutionMetadata) SetTargets(v []*Target) *AutomationExecutionMetadata { + s.Targets = v + return s +} + +type CancelCommandInput struct { + _ struct{} `type:"structure"` + + // The ID of the command you want to cancel. + // + // CommandId is a required field + CommandId *string `min:"36" type:"string" required:"true"` + + // (Optional) A list of instance IDs on which you want to cancel the command. + // If not provided, the command is canceled on every instance on which it was + // requested. 
+	InstanceIds []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s CancelCommandInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CancelCommandInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *CancelCommandInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "CancelCommandInput"}
+	if s.CommandId == nil {
+		invalidParams.Add(request.NewErrParamRequired("CommandId"))
+	}
+	if s.CommandId != nil && len(*s.CommandId) < 36 {
+		invalidParams.Add(request.NewErrParamMinLen("CommandId", 36))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetCommandId sets the CommandId field's value.
+func (s *CancelCommandInput) SetCommandId(v string) *CancelCommandInput {
+	s.CommandId = &v
+	return s
+}
+
+// SetInstanceIds sets the InstanceIds field's value.
+func (s *CancelCommandInput) SetInstanceIds(v []*string) *CancelCommandInput {
+	s.InstanceIds = v
+	return s
+}
+
+// Whether or not the command was successfully canceled. There is no guarantee
+// that a request can be canceled.
+type CancelCommandOutput struct {
+	_ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s CancelCommandOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CancelCommandOutput) GoString() string {
+	return s.String()
+}
+
+// Describes a command request.
+type Command struct {
+	_ struct{} `type:"structure"`
+
+	// A unique identifier for this command.
+	CommandId *string `min:"36" type:"string"`
+
+	// User-specified information about the command, such as a brief description
+	// of what the command should do.
+	Comment *string `type:"string"`
+
+	// The number of targets for which the command invocation reached a terminal
+	// state. Terminal states include the following: Success, Failed, Execution
+	// Timed Out, Delivery Timed Out, Canceled, Terminated, or Undeliverable.
+	CompletedCount *int64 `type:"integer"`
+
+	// The name of the document requested for execution.
+	DocumentName *string `type:"string"`
+
+	// The SSM document version.
+	DocumentVersion *string `type:"string"`
+
+	// The number of targets for which the status is Failed or Execution Timed Out.
+	ErrorCount *int64 `type:"integer"`
+
+	// If this time is reached and the command has not already started executing,
+	// it will not run. Calculated based on the ExpiresAfter user input provided
+	// as part of the SendCommand API.
+	ExpiresAfter *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+	// The instance IDs against which this command was requested.
+	InstanceIds []*string `type:"list"`
+
+	// The maximum number of instances that are allowed to execute the command at
+	// the same time. You can specify a number of instances, such as 10, or a percentage
+	// of instances, such as 10%. The default value is 50. For more information
+	// about how to use MaxConcurrency, see Executing a Command Using Systems Manager
+	// Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html).
+	MaxConcurrency *string `min:"1" type:"string"`
+
+	// The maximum number of errors allowed before the system stops sending the
+	// command to additional targets. You can specify a number of errors, such as
+	// 10, or a percentage of errors, such as 10%. The default value is 0.
For more + // information about how to use MaxErrors, see Executing a Command Using Systems + // Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html). + MaxErrors *string `min:"1" type:"string"` + + // Configurations for sending notifications about command status changes. + NotificationConfig *NotificationConfig `type:"structure"` + + // The S3 bucket where the responses to the command executions should be stored. + // This was requested when issuing the command. + OutputS3BucketName *string `min:"3" type:"string"` + + // The S3 directory path inside the bucket where the responses to the command + // executions should be stored. This was requested when issuing the command. + OutputS3KeyPrefix *string `type:"string"` + + // (Deprecated) You can no longer specify this parameter. The system ignores + // it. Instead, Systems Manager automatically determines the Amazon S3 bucket + // region. + OutputS3Region *string `min:"3" type:"string"` + + // The parameter values to be inserted in the document when executing the command. + Parameters map[string][]*string `type:"map"` + + // The date and time the command was requested. + RequestedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The IAM service role that Run Command uses to act on your behalf when sending + // notifications about command status changes. + ServiceRole *string `type:"string"` + + // The status of the command. + Status *string `type:"string" enum:"CommandStatus"` + + // A detailed status of the command execution. StatusDetails includes more information + // than Status because it includes states resulting from error and concurrency + // control parameters. StatusDetails can show different results than Status. + // For more information about these statuses, see Run Command Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). + // StatusDetails can be one of the following values: + // + // * Pending: The command has not been sent to any instances. + // + // * In Progress: The command has been sent to at least one instance but + // has not reached a final state on all instances. + // + // * Success: The command successfully executed on all invocations. This + // is a terminal state. + // + // * Delivery Timed Out: The value of MaxErrors or more command invocations + // shows a status of Delivery Timed Out. This is a terminal state. + // + // * Execution Timed Out: The value of MaxErrors or more command invocations + // shows a status of Execution Timed Out. This is a terminal state. + // + // * Failed: The value of MaxErrors or more command invocations shows a status + // of Failed. This is a terminal state. + // + // * Incomplete: The command was attempted on all instances and one or more + // invocations does not have a value of Success but not enough invocations + // failed for the status to be Failed. This is a terminal state. + // + // * Canceled: The command was terminated before it was completed. This is + // a terminal state. + // + // * Rate Exceeded: The number of instances targeted by the command exceeded + // the account limit for pending invocations. The system has canceled the + // command before executing it on any instance. This is a terminal state. + StatusDetails *string `type:"string"` + + // The number of targets for the command. + TargetCount *int64 `type:"integer"` + + // An array of search criteria that targets instances using a Key,Value combination + // that you specify. 
Targets is required if you don't provide one or more instance + // IDs in the call. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s Command) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Command) GoString() string { + return s.String() +} + +// SetCommandId sets the CommandId field's value. +func (s *Command) SetCommandId(v string) *Command { + s.CommandId = &v + return s +} + +// SetComment sets the Comment field's value. +func (s *Command) SetComment(v string) *Command { + s.Comment = &v + return s +} + +// SetCompletedCount sets the CompletedCount field's value. +func (s *Command) SetCompletedCount(v int64) *Command { + s.CompletedCount = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *Command) SetDocumentName(v string) *Command { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *Command) SetDocumentVersion(v string) *Command { + s.DocumentVersion = &v + return s +} + +// SetErrorCount sets the ErrorCount field's value. +func (s *Command) SetErrorCount(v int64) *Command { + s.ErrorCount = &v + return s +} + +// SetExpiresAfter sets the ExpiresAfter field's value. +func (s *Command) SetExpiresAfter(v time.Time) *Command { + s.ExpiresAfter = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *Command) SetInstanceIds(v []*string) *Command { + s.InstanceIds = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *Command) SetMaxConcurrency(v string) *Command { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *Command) SetMaxErrors(v string) *Command { + s.MaxErrors = &v + return s +} + +// SetNotificationConfig sets the NotificationConfig field's value. +func (s *Command) SetNotificationConfig(v *NotificationConfig) *Command { + s.NotificationConfig = v + return s +} + +// SetOutputS3BucketName sets the OutputS3BucketName field's value. +func (s *Command) SetOutputS3BucketName(v string) *Command { + s.OutputS3BucketName = &v + return s +} + +// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value. +func (s *Command) SetOutputS3KeyPrefix(v string) *Command { + s.OutputS3KeyPrefix = &v + return s +} + +// SetOutputS3Region sets the OutputS3Region field's value. +func (s *Command) SetOutputS3Region(v string) *Command { + s.OutputS3Region = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *Command) SetParameters(v map[string][]*string) *Command { + s.Parameters = v + return s +} + +// SetRequestedDateTime sets the RequestedDateTime field's value. +func (s *Command) SetRequestedDateTime(v time.Time) *Command { + s.RequestedDateTime = &v + return s +} + +// SetServiceRole sets the ServiceRole field's value. +func (s *Command) SetServiceRole(v string) *Command { + s.ServiceRole = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *Command) SetStatus(v string) *Command { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *Command) SetStatusDetails(v string) *Command { + s.StatusDetails = &v + return s +} + +// SetTargetCount sets the TargetCount field's value. +func (s *Command) SetTargetCount(v int64) *Command { + s.TargetCount = &v + return s +} + +// SetTargets sets the Targets field's value. 
+func (s *Command) SetTargets(v []*Target) *Command {
+	s.Targets = v
+	return s
+}
+
+// Describes a command filter.
+type CommandFilter struct {
+	_ struct{} `type:"structure"`
+
+	// The name of the filter.
+	//
+	// Key is a required field
+	Key *string `locationName:"key" type:"string" required:"true" enum:"CommandFilterKey"`
+
+	// The filter value.
+	//
+	// Value is a required field
+	Value *string `locationName:"value" min:"1" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s CommandFilter) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CommandFilter) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *CommandFilter) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "CommandFilter"}
+	if s.Key == nil {
+		invalidParams.Add(request.NewErrParamRequired("Key"))
+	}
+	if s.Value == nil {
+		invalidParams.Add(request.NewErrParamRequired("Value"))
+	}
+	if s.Value != nil && len(*s.Value) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("Value", 1))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetKey sets the Key field's value.
+func (s *CommandFilter) SetKey(v string) *CommandFilter {
+	s.Key = &v
+	return s
+}
+
+// SetValue sets the Value field's value.
+func (s *CommandFilter) SetValue(v string) *CommandFilter {
+	s.Value = &v
+	return s
+}
+
+// An invocation is a copy of a command sent to a specific instance. A command
+// can apply to one or more instances. A command invocation applies to one instance.
+// For example, if a user executes SendCommand against three instances, then
+// a command invocation is created for each requested instance ID. A command
+// invocation returns status and detail information about a command you executed.
+type CommandInvocation struct {
+	_ struct{} `type:"structure"`
+
+	// The command against which this invocation was requested.
+	CommandId *string `min:"36" type:"string"`
+
+	CommandPlugins []*CommandPlugin `type:"list"`
+
+	// User-specified information about the command, such as a brief description
+	// of what the command should do.
+	Comment *string `type:"string"`
+
+	// The document name that was requested for execution.
+	DocumentName *string `type:"string"`
+
+	// The SSM document version.
+	DocumentVersion *string `type:"string"`
+
+	// The instance ID on which this invocation was requested.
+	InstanceId *string `type:"string"`
+
+	// The name of the invocation target. For Amazon EC2 instances this is the value
+	// for the aws:Name tag. For on-premises instances, this is the name of the
+	// instance.
+	InstanceName *string `type:"string"`
+
+	// Configurations for sending notifications about command status changes on
+	// a per instance basis.
+	NotificationConfig *NotificationConfig `type:"structure"`
+
+	// The time and date the request was sent to this instance.
+	RequestedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+	// The IAM service role that Run Command uses to act on your behalf when sending
+	// notifications about command status changes on a per instance basis.
+	ServiceRole *string `type:"string"`
+
+	// The URL to the plugin's StdErr file in Amazon S3, if the Amazon S3 bucket
+	// was defined for the parent command.
For an invocation, StandardErrorUrl is + // populated if there is just one plugin defined for the command, and the Amazon + // S3 bucket was defined for the command. + StandardErrorUrl *string `type:"string"` + + // The URL to the plugin's StdOut file in Amazon S3, if the Amazon S3 bucket + // was defined for the parent command. For an invocation, StandardOutputUrl + // is populated if there is just one plugin defined for the command, and the + // Amazon S3 bucket was defined for the command. + StandardOutputUrl *string `type:"string"` + + // Whether or not the invocation succeeded, failed, or is pending. + Status *string `type:"string" enum:"CommandInvocationStatus"` + + // A detailed status of the command execution for each invocation (each instance + // targeted by the command). StatusDetails includes more information than Status + // because it includes states resulting from error and concurrency control parameters. + // StatusDetails can show different results than Status. For more information + // about these statuses, see Run Command Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). + // StatusDetails can be one of the following values: + // + // * Pending: The command has not been sent to the instance. + // + // * In Progress: The command has been sent to the instance but has not reached + // a terminal state. + // + // * Success: The execution of the command or plugin was successfully completed. + // This is a terminal state. + // + // * Delivery Timed Out: The command was not delivered to the instance before + // the delivery timeout expired. Delivery timeouts do not count against the + // parent command's MaxErrors limit, but they do contribute to whether the + // parent command status is Success or Incomplete. This is a terminal state. + // + // * Execution Timed Out: Command execution started on the instance, but + // the execution was not complete before the execution timeout expired. Execution + // timeouts count against the MaxErrors limit of the parent command. This + // is a terminal state. + // + // * Failed: The command was not successful on the instance. For a plugin, + // this indicates that the result code was not zero. For a command invocation, + // this indicates that the result code for one or more plugins was not zero. + // Invocation failures count against the MaxErrors limit of the parent command. + // This is a terminal state. + // + // * Canceled: The command was terminated before it was completed. This is + // a terminal state. + // + // * Undeliverable: The command can't be delivered to the instance. The instance + // might not exist or might not be responding. Undeliverable invocations + // don't count against the parent command's MaxErrors limit and don't contribute + // to whether the parent command status is Success or Incomplete. This is + // a terminal state. + // + // * Terminated: The parent command exceeded its MaxErrors limit and subsequent + // command invocations were canceled by the system. This is a terminal state. + StatusDetails *string `type:"string"` + + // Gets the trace output sent by the agent. + TraceOutput *string `type:"string"` +} + +// String returns the string representation +func (s CommandInvocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CommandInvocation) GoString() string { + return s.String() +} + +// SetCommandId sets the CommandId field's value. 
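// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// the snippet below shows how a caller might combine the CommandFilter setters
// above with the CommandInvocation type to pick out invocations that did not
// succeed. The "Status" key and the "Success" literal are assumptions chosen
// for illustration; real code should use the CommandFilterKey and
// CommandInvocationStatus enum values defined in this package.
func exampleFilterAndInspect(invocations []*CommandInvocation) (*CommandFilter, []*CommandInvocation) {
	// Build a filter; Validate enforces the required Key/Value and the minimum length.
	filter := (&CommandFilter{}).SetKey("Status").SetValue("Success")
	if err := filter.Validate(); err != nil {
		return nil, nil
	}
	// Collect invocations whose status is set and is not "Success".
	var failed []*CommandInvocation
	for _, inv := range invocations {
		if inv.Status != nil && *inv.Status != "Success" {
			failed = append(failed, inv)
		}
	}
	return filter, failed
}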
+func (s *CommandInvocation) SetCommandId(v string) *CommandInvocation { + s.CommandId = &v + return s +} + +// SetCommandPlugins sets the CommandPlugins field's value. +func (s *CommandInvocation) SetCommandPlugins(v []*CommandPlugin) *CommandInvocation { + s.CommandPlugins = v + return s +} + +// SetComment sets the Comment field's value. +func (s *CommandInvocation) SetComment(v string) *CommandInvocation { + s.Comment = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *CommandInvocation) SetDocumentName(v string) *CommandInvocation { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *CommandInvocation) SetDocumentVersion(v string) *CommandInvocation { + s.DocumentVersion = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *CommandInvocation) SetInstanceId(v string) *CommandInvocation { + s.InstanceId = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *CommandInvocation) SetInstanceName(v string) *CommandInvocation { + s.InstanceName = &v + return s +} + +// SetNotificationConfig sets the NotificationConfig field's value. +func (s *CommandInvocation) SetNotificationConfig(v *NotificationConfig) *CommandInvocation { + s.NotificationConfig = v + return s +} + +// SetRequestedDateTime sets the RequestedDateTime field's value. +func (s *CommandInvocation) SetRequestedDateTime(v time.Time) *CommandInvocation { + s.RequestedDateTime = &v + return s +} + +// SetServiceRole sets the ServiceRole field's value. +func (s *CommandInvocation) SetServiceRole(v string) *CommandInvocation { + s.ServiceRole = &v + return s +} + +// SetStandardErrorUrl sets the StandardErrorUrl field's value. +func (s *CommandInvocation) SetStandardErrorUrl(v string) *CommandInvocation { + s.StandardErrorUrl = &v + return s +} + +// SetStandardOutputUrl sets the StandardOutputUrl field's value. +func (s *CommandInvocation) SetStandardOutputUrl(v string) *CommandInvocation { + s.StandardOutputUrl = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CommandInvocation) SetStatus(v string) *CommandInvocation { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *CommandInvocation) SetStatusDetails(v string) *CommandInvocation { + s.StatusDetails = &v + return s +} + +// SetTraceOutput sets the TraceOutput field's value. +func (s *CommandInvocation) SetTraceOutput(v string) *CommandInvocation { + s.TraceOutput = &v + return s +} + +// Describes plugin details. +type CommandPlugin struct { + _ struct{} `type:"structure"` + + // The name of the plugin. Must be one of the following: aws:updateAgent, aws:domainjoin, + // aws:applications, aws:runPowerShellScript, aws:psmodule, aws:cloudWatch, + // aws:runShellScript, or aws:updateSSMAgent. + Name *string `min:"4" type:"string"` + + // Output of the plugin execution. + Output *string `type:"string"` + + // The S3 bucket where the responses to the command executions should be stored. + // This was requested when issuing the command. For example, in the following + // response: + // + // test_folder/ab19cb99-a030-46dd-9dfc-8eSAMPLEPre-Fix/i-1234567876543/awsrunShellScript + // + // test_folder is the name of the Amazon S3 bucket; + // + // ab19cb99-a030-46dd-9dfc-8eSAMPLEPre-Fix is the name of the S3 prefix; + // + // i-1234567876543 is the instance ID; + // + // awsrunShellScript is the name of the plugin. 
+ OutputS3BucketName *string `min:"3" type:"string"` + + // The S3 directory path inside the bucket where the responses to the command + // executions should be stored. This was requested when issuing the command. + // For example, in the following response: + // + // test_folder/ab19cb99-a030-46dd-9dfc-8eSAMPLEPre-Fix/i-1234567876543/awsrunShellScript + // + // test_folder is the name of the Amazon S3 bucket; + // + // ab19cb99-a030-46dd-9dfc-8eSAMPLEPre-Fix is the name of the S3 prefix; + // + // i-1234567876543 is the instance ID; + // + // awsrunShellScript is the name of the plugin. + OutputS3KeyPrefix *string `type:"string"` + + // (Deprecated) You can no longer specify this parameter. The system ignores + // it. Instead, Systems Manager automatically determines the Amazon S3 bucket + // region. + OutputS3Region *string `min:"3" type:"string"` + + // A numeric response code generated after executing the plugin. + ResponseCode *int64 `type:"integer"` + + // The time the plugin stopped executing. Could stop prematurely if, for example, + // a cancel command was sent. + ResponseFinishDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time the plugin started executing. + ResponseStartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The URL for the complete text written by the plugin to stderr. If execution + // is not yet complete, then this string is empty. + StandardErrorUrl *string `type:"string"` + + // The URL for the complete text written by the plugin to stdout in Amazon S3. + // If the Amazon S3 bucket for the command was not specified, then this string + // is empty. + StandardOutputUrl *string `type:"string"` + + // The status of this plugin. You can execute a document with multiple plugins. + Status *string `type:"string" enum:"CommandPluginStatus"` + + // A detailed status of the plugin execution. StatusDetails includes more information + // than Status because it includes states resulting from error and concurrency + // control parameters. StatusDetails can show different results than Status. + // For more information about these statuses, see Run Command Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). + // StatusDetails can be one of the following values: + // + // * Pending: The command has not been sent to the instance. + // + // * In Progress: The command has been sent to the instance but has not reached + // a terminal state. + // + // * Success: The execution of the command or plugin was successfully completed. + // This is a terminal state. + // + // * Delivery Timed Out: The command was not delivered to the instance before + // the delivery timeout expired. Delivery timeouts do not count against the + // parent command's MaxErrors limit, but they do contribute to whether the + // parent command status is Success or Incomplete. This is a terminal state. + // + // * Execution Timed Out: Command execution started on the instance, but + // the execution was not complete before the execution timeout expired. Execution + // timeouts count against the MaxErrors limit of the parent command. This + // is a terminal state. + // + // * Failed: The command was not successful on the instance. For a plugin, + // this indicates that the result code was not zero. For a command invocation, + // this indicates that the result code for one or more plugins was not zero. + // Invocation failures count against the MaxErrors limit of the parent command. + // This is a terminal state. 
+ // + // * Canceled: The command was terminated before it was completed. This is + // a terminal state. + // + // * Undeliverable: The command can't be delivered to the instance. The instance + // might not exist, or it might not be responding. Undeliverable invocations + // don't count against the parent command's MaxErrors limit, and they don't + // contribute to whether the parent command status is Success or Incomplete. + // This is a terminal state. + // + // * Terminated: The parent command exceeded its MaxErrors limit and subsequent + // command invocations were canceled by the system. This is a terminal state. + StatusDetails *string `type:"string"` +} + +// String returns the string representation +func (s CommandPlugin) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CommandPlugin) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *CommandPlugin) SetName(v string) *CommandPlugin { + s.Name = &v + return s +} + +// SetOutput sets the Output field's value. +func (s *CommandPlugin) SetOutput(v string) *CommandPlugin { + s.Output = &v + return s +} + +// SetOutputS3BucketName sets the OutputS3BucketName field's value. +func (s *CommandPlugin) SetOutputS3BucketName(v string) *CommandPlugin { + s.OutputS3BucketName = &v + return s +} + +// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value. +func (s *CommandPlugin) SetOutputS3KeyPrefix(v string) *CommandPlugin { + s.OutputS3KeyPrefix = &v + return s +} + +// SetOutputS3Region sets the OutputS3Region field's value. +func (s *CommandPlugin) SetOutputS3Region(v string) *CommandPlugin { + s.OutputS3Region = &v + return s +} + +// SetResponseCode sets the ResponseCode field's value. +func (s *CommandPlugin) SetResponseCode(v int64) *CommandPlugin { + s.ResponseCode = &v + return s +} + +// SetResponseFinishDateTime sets the ResponseFinishDateTime field's value. +func (s *CommandPlugin) SetResponseFinishDateTime(v time.Time) *CommandPlugin { + s.ResponseFinishDateTime = &v + return s +} + +// SetResponseStartDateTime sets the ResponseStartDateTime field's value. +func (s *CommandPlugin) SetResponseStartDateTime(v time.Time) *CommandPlugin { + s.ResponseStartDateTime = &v + return s +} + +// SetStandardErrorUrl sets the StandardErrorUrl field's value. +func (s *CommandPlugin) SetStandardErrorUrl(v string) *CommandPlugin { + s.StandardErrorUrl = &v + return s +} + +// SetStandardOutputUrl sets the StandardOutputUrl field's value. +func (s *CommandPlugin) SetStandardOutputUrl(v string) *CommandPlugin { + s.StandardOutputUrl = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CommandPlugin) SetStatus(v string) *CommandPlugin { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *CommandPlugin) SetStatusDetails(v string) *CommandPlugin { + s.StatusDetails = &v + return s +} + +// A summary of the call execution that includes an execution ID, the type of +// execution (for example, Command), and the date/time of the execution using +// a datetime object that is saved in the following format: yyyy-MM-dd'T'HH:mm:ss'Z'. +type ComplianceExecutionSummary struct { + _ struct{} `type:"structure"` + + // An ID created by the system when PutComplianceItems was called. For example, + // CommandID is a valid execution ID. You can use this ID in subsequent calls. 
+ ExecutionId *string `type:"string"` + + // The time the execution ran as a datetime object that is saved in the following + // format: yyyy-MM-dd'T'HH:mm:ss'Z'. + // + // ExecutionTime is a required field + ExecutionTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // The type of execution. For example, Command is a valid execution type. + ExecutionType *string `type:"string"` +} + +// String returns the string representation +func (s ComplianceExecutionSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceExecutionSummary) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ComplianceExecutionSummary) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ComplianceExecutionSummary"} + if s.ExecutionTime == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionTime")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExecutionId sets the ExecutionId field's value. +func (s *ComplianceExecutionSummary) SetExecutionId(v string) *ComplianceExecutionSummary { + s.ExecutionId = &v + return s +} + +// SetExecutionTime sets the ExecutionTime field's value. +func (s *ComplianceExecutionSummary) SetExecutionTime(v time.Time) *ComplianceExecutionSummary { + s.ExecutionTime = &v + return s +} + +// SetExecutionType sets the ExecutionType field's value. +func (s *ComplianceExecutionSummary) SetExecutionType(v string) *ComplianceExecutionSummary { + s.ExecutionType = &v + return s +} + +// Information about the compliance as defined by the resource type. For example, +// for a patch resource type, Items includes information about the PatchSeverity, +// Classification, etc. +type ComplianceItem struct { + _ struct{} `type:"structure"` + + // The compliance type. For example, Association (for a State Manager association), + // Patch, or Custom:string are all valid compliance types. + ComplianceType *string `min:"1" type:"string"` + + // A "Key": "Value" tag combination for the compliance item. + Details map[string]*string `type:"map"` + + // A summary for the compliance item. The summary includes an execution ID, + // the execution type (for example, command), and the execution time. + ExecutionSummary *ComplianceExecutionSummary `type:"structure"` + + // An ID for the compliance item. For example, if the compliance item is a Windows + // patch, the ID could be the number of the KB article; for example: KB4010320. + Id *string `min:"1" type:"string"` + + // An ID for the resource. For a managed instance, this is the instance ID. + ResourceId *string `min:"1" type:"string"` + + // The type of resource. ManagedInstance is currently the only supported resource + // type. + ResourceType *string `min:"1" type:"string"` + + // The severity of the compliance status. Severity can be one of the following: + // Critical, High, Medium, Low, Informational, Unspecified. + Severity *string `type:"string" enum:"ComplianceSeverity"` + + // The status of the compliance item. An item is either COMPLIANT or NON_COMPLIANT. + Status *string `type:"string" enum:"ComplianceStatus"` + + // A title for the compliance item. For example, if the compliance item is a + // Windows patch, the title could be the title of the KB article for the patch; + // for example: Security Update for Active Directory Federation Services. 
+ Title *string `type:"string"` +} + +// String returns the string representation +func (s ComplianceItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceItem) GoString() string { + return s.String() +} + +// SetComplianceType sets the ComplianceType field's value. +func (s *ComplianceItem) SetComplianceType(v string) *ComplianceItem { + s.ComplianceType = &v + return s +} + +// SetDetails sets the Details field's value. +func (s *ComplianceItem) SetDetails(v map[string]*string) *ComplianceItem { + s.Details = v + return s +} + +// SetExecutionSummary sets the ExecutionSummary field's value. +func (s *ComplianceItem) SetExecutionSummary(v *ComplianceExecutionSummary) *ComplianceItem { + s.ExecutionSummary = v + return s +} + +// SetId sets the Id field's value. +func (s *ComplianceItem) SetId(v string) *ComplianceItem { + s.Id = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ComplianceItem) SetResourceId(v string) *ComplianceItem { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ComplianceItem) SetResourceType(v string) *ComplianceItem { + s.ResourceType = &v + return s +} + +// SetSeverity sets the Severity field's value. +func (s *ComplianceItem) SetSeverity(v string) *ComplianceItem { + s.Severity = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ComplianceItem) SetStatus(v string) *ComplianceItem { + s.Status = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *ComplianceItem) SetTitle(v string) *ComplianceItem { + s.Title = &v + return s +} + +// Information about a compliance item. +type ComplianceItemEntry struct { + _ struct{} `type:"structure"` + + // A "Key": "Value" tag combination for the compliance item. + Details map[string]*string `type:"map"` + + // The compliance item ID. For example, if the compliance item is a Windows + // patch, the ID could be the number of the KB article. + Id *string `min:"1" type:"string"` + + // The severity of the compliance status. Severity can be one of the following: + // Critical, High, Medium, Low, Informational, Unspecified. + // + // Severity is a required field + Severity *string `type:"string" required:"true" enum:"ComplianceSeverity"` + + // The status of the compliance item. An item is either COMPLIANT or NON_COMPLIANT. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"ComplianceStatus"` + + // The title of the compliance item. For example, if the compliance item is + // a Windows patch, the title could be the title of the KB article for the patch; + // for example: Security Update for Active Directory Federation Services. + Title *string `type:"string"` +} + +// String returns the string representation +func (s ComplianceItemEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceItemEntry) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
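// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// a hedged example of pairing a ComplianceExecutionSummary with a
// ComplianceItemEntry using the setters defined around this point. The ID,
// severity, and status strings are placeholders; actual values should come from
// the ComplianceSeverity and ComplianceStatus enums.
func exampleComplianceEntry(executionID string, executionTime time.Time) (*ComplianceExecutionSummary, *ComplianceItemEntry, error) {
	// ExecutionTime is the only required field on the summary.
	summary := (&ComplianceExecutionSummary{}).
		SetExecutionId(executionID).
		SetExecutionTime(executionTime).
		SetExecutionType("Command")
	if err := summary.Validate(); err != nil {
		return nil, nil, err
	}
	// Severity and Status are required on each entry.
	entry := (&ComplianceItemEntry{}).
		SetId("KB4010320").
		SetSeverity("CRITICAL").
		SetStatus("COMPLIANT").
		SetTitle("Example security update")
	if err := entry.Validate(); err != nil {
		return nil, nil, err
	}
	return summary, entry, nil
}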
+func (s *ComplianceItemEntry) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ComplianceItemEntry"} + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.Severity == nil { + invalidParams.Add(request.NewErrParamRequired("Severity")) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDetails sets the Details field's value. +func (s *ComplianceItemEntry) SetDetails(v map[string]*string) *ComplianceItemEntry { + s.Details = v + return s +} + +// SetId sets the Id field's value. +func (s *ComplianceItemEntry) SetId(v string) *ComplianceItemEntry { + s.Id = &v + return s +} + +// SetSeverity sets the Severity field's value. +func (s *ComplianceItemEntry) SetSeverity(v string) *ComplianceItemEntry { + s.Severity = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ComplianceItemEntry) SetStatus(v string) *ComplianceItemEntry { + s.Status = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *ComplianceItemEntry) SetTitle(v string) *ComplianceItemEntry { + s.Title = &v + return s +} + +// One or more filters. Use a filter to return a more specific list of results. +type ComplianceStringFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + Key *string `min:"1" type:"string"` + + // The type of comparison that should be performed for the value: Equal, NotEqual, + // BeginWith, LessThan, or GreaterThan. + Type *string `type:"string" enum:"ComplianceQueryOperatorType"` + + // The value for which to search. + Values []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s ComplianceStringFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceStringFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ComplianceStringFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ComplianceStringFilter"} + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ComplianceStringFilter) SetKey(v string) *ComplianceStringFilter { + s.Key = &v + return s +} + +// SetType sets the Type field's value. +func (s *ComplianceStringFilter) SetType(v string) *ComplianceStringFilter { + s.Type = &v + return s +} + +// SetValues sets the Values field's value. +func (s *ComplianceStringFilter) SetValues(v []*string) *ComplianceStringFilter { + s.Values = v + return s +} + +// A summary of compliance information by compliance type. +type ComplianceSummaryItem struct { + _ struct{} `type:"structure"` + + // The type of compliance item. For example, the compliance type can be Association, + // Patch, or Custom:string. + ComplianceType *string `min:"1" type:"string"` + + // A list of COMPLIANT items for the specified compliance type. + CompliantSummary *CompliantSummary `type:"structure"` + + // A list of NON_COMPLIANT items for the specified compliance type. 
+ NonCompliantSummary *NonCompliantSummary `type:"structure"` +} + +// String returns the string representation +func (s ComplianceSummaryItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceSummaryItem) GoString() string { + return s.String() +} + +// SetComplianceType sets the ComplianceType field's value. +func (s *ComplianceSummaryItem) SetComplianceType(v string) *ComplianceSummaryItem { + s.ComplianceType = &v + return s +} + +// SetCompliantSummary sets the CompliantSummary field's value. +func (s *ComplianceSummaryItem) SetCompliantSummary(v *CompliantSummary) *ComplianceSummaryItem { + s.CompliantSummary = v + return s +} + +// SetNonCompliantSummary sets the NonCompliantSummary field's value. +func (s *ComplianceSummaryItem) SetNonCompliantSummary(v *NonCompliantSummary) *ComplianceSummaryItem { + s.NonCompliantSummary = v + return s +} + +// A summary of resources that are compliant. The summary is organized according +// to the resource count for each compliance type. +type CompliantSummary struct { + _ struct{} `type:"structure"` + + // The total number of resources that are compliant. + CompliantCount *int64 `type:"integer"` + + // A summary of the compliance severity by compliance type. + SeveritySummary *SeveritySummary `type:"structure"` +} + +// String returns the string representation +func (s CompliantSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompliantSummary) GoString() string { + return s.String() +} + +// SetCompliantCount sets the CompliantCount field's value. +func (s *CompliantSummary) SetCompliantCount(v int64) *CompliantSummary { + s.CompliantCount = &v + return s +} + +// SetSeveritySummary sets the SeveritySummary field's value. +func (s *CompliantSummary) SetSeveritySummary(v *SeveritySummary) *CompliantSummary { + s.SeveritySummary = v + return s +} + +type CreateActivationInput struct { + _ struct{} `type:"structure"` + + // The name of the registered, managed instance as it will appear in the Amazon + // EC2 console or when you use the AWS command line tools to list EC2 resources. + // + // Do not enter personally identifiable information in this field. + DefaultInstanceName *string `type:"string"` + + // A user-defined description of the resource that you want to register with + // Amazon EC2. + // + // Do not enter personally identifiable information in this field. + Description *string `type:"string"` + + // The date by which this activation request should expire. The default value + // is 24 hours. + ExpirationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The Amazon Identity and Access Management (IAM) role that you want to assign + // to the managed instance. + // + // IamRole is a required field + IamRole *string `type:"string" required:"true"` + + // Specify the maximum number of managed instances you want to register. The + // default value is 1 instance. + RegistrationLimit *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s CreateActivationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateActivationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
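// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// a minimal example of building a CreateActivationInput with the setters that
// follow. The role and instance names are hypothetical placeholders; IamRole is
// the only required field, RegistrationLimit must be at least 1, and the
// activation expires after 24 hours by default.
func exampleActivationInput() (*CreateActivationInput, error) {
	input := (&CreateActivationInput{}).
		SetIamRole("SSMManagedInstanceServiceRole").
		SetDefaultInstanceName("hybrid-web-server").
		SetDescription("Activation for on-premises servers").
		SetRegistrationLimit(10).
		SetExpirationDate(time.Now().Add(24 * time.Hour))
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}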
+func (s *CreateActivationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateActivationInput"} + if s.IamRole == nil { + invalidParams.Add(request.NewErrParamRequired("IamRole")) + } + if s.RegistrationLimit != nil && *s.RegistrationLimit < 1 { + invalidParams.Add(request.NewErrParamMinValue("RegistrationLimit", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDefaultInstanceName sets the DefaultInstanceName field's value. +func (s *CreateActivationInput) SetDefaultInstanceName(v string) *CreateActivationInput { + s.DefaultInstanceName = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateActivationInput) SetDescription(v string) *CreateActivationInput { + s.Description = &v + return s +} + +// SetExpirationDate sets the ExpirationDate field's value. +func (s *CreateActivationInput) SetExpirationDate(v time.Time) *CreateActivationInput { + s.ExpirationDate = &v + return s +} + +// SetIamRole sets the IamRole field's value. +func (s *CreateActivationInput) SetIamRole(v string) *CreateActivationInput { + s.IamRole = &v + return s +} + +// SetRegistrationLimit sets the RegistrationLimit field's value. +func (s *CreateActivationInput) SetRegistrationLimit(v int64) *CreateActivationInput { + s.RegistrationLimit = &v + return s +} + +type CreateActivationOutput struct { + _ struct{} `type:"structure"` + + // The code the system generates when it processes the activation. The activation + // code functions like a password to validate the activation ID. + ActivationCode *string `min:"20" type:"string"` + + // The ID number generated by the system when it processed the activation. The + // activation ID functions like a user name. + ActivationId *string `type:"string"` +} + +// String returns the string representation +func (s CreateActivationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateActivationOutput) GoString() string { + return s.String() +} + +// SetActivationCode sets the ActivationCode field's value. +func (s *CreateActivationOutput) SetActivationCode(v string) *CreateActivationOutput { + s.ActivationCode = &v + return s +} + +// SetActivationId sets the ActivationId field's value. +func (s *CreateActivationOutput) SetActivationId(v string) *CreateActivationOutput { + s.ActivationId = &v + return s +} + +type CreateAssociationBatchInput struct { + _ struct{} `type:"structure"` + + // One or more associations. + // + // Entries is a required field + Entries []*CreateAssociationBatchRequestEntry `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateAssociationBatchInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAssociationBatchInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateAssociationBatchInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAssociationBatchInput"} + if s.Entries == nil { + invalidParams.Add(request.NewErrParamRequired("Entries")) + } + if s.Entries != nil && len(s.Entries) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Entries", 1)) + } + if s.Entries != nil { + for i, v := range s.Entries { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Entries", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEntries sets the Entries field's value. +func (s *CreateAssociationBatchInput) SetEntries(v []*CreateAssociationBatchRequestEntry) *CreateAssociationBatchInput { + s.Entries = v + return s +} + +type CreateAssociationBatchOutput struct { + _ struct{} `type:"structure"` + + // Information about the associations that failed. + Failed []*FailedCreateAssociation `type:"list"` + + // Information about the associations that succeeded. + Successful []*AssociationDescription `type:"list"` +} + +// String returns the string representation +func (s CreateAssociationBatchOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAssociationBatchOutput) GoString() string { + return s.String() +} + +// SetFailed sets the Failed field's value. +func (s *CreateAssociationBatchOutput) SetFailed(v []*FailedCreateAssociation) *CreateAssociationBatchOutput { + s.Failed = v + return s +} + +// SetSuccessful sets the Successful field's value. +func (s *CreateAssociationBatchOutput) SetSuccessful(v []*AssociationDescription) *CreateAssociationBatchOutput { + s.Successful = v + return s +} + +// Describes the association of a Systems Manager document and an instance. +type CreateAssociationBatchRequestEntry struct { + _ struct{} `type:"structure"` + + // Specify a descriptive name for the association. + AssociationName *string `type:"string"` + + // The document version. + DocumentVersion *string `type:"string"` + + // The ID of the instance. + InstanceId *string `type:"string"` + + // The name of the configuration document. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // An Amazon S3 bucket where you want to store the results of this request. + OutputLocation *InstanceAssociationOutputLocation `type:"structure"` + + // A description of the parameters for a document. + Parameters map[string][]*string `type:"map"` + + // A cron expression that specifies a schedule when the association runs. + ScheduleExpression *string `min:"1" type:"string"` + + // The instances targeted by the request. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s CreateAssociationBatchRequestEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAssociationBatchRequestEntry) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
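// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// one possible way to assemble a CreateAssociationBatchInput from per-instance
// request entries using the setters nearby. The document name and the cron
// expression are placeholders; each entry requires Name, and the batch requires
// at least one entry.
func exampleAssociationBatch(instanceIDs []string) (*CreateAssociationBatchInput, error) {
	entries := make([]*CreateAssociationBatchRequestEntry, 0, len(instanceIDs))
	for _, id := range instanceIDs {
		entries = append(entries, (&CreateAssociationBatchRequestEntry{}).
			SetName("AWS-UpdateSSMAgent").
			SetInstanceId(id).
			SetScheduleExpression("cron(0 2 ? * SUN *)"))
	}
	input := (&CreateAssociationBatchInput{}).SetEntries(entries)
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}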
+func (s *CreateAssociationBatchRequestEntry) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAssociationBatchRequestEntry"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.ScheduleExpression != nil && len(*s.ScheduleExpression) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduleExpression", 1)) + } + if s.OutputLocation != nil { + if err := s.OutputLocation.Validate(); err != nil { + invalidParams.AddNested("OutputLocation", err.(request.ErrInvalidParams)) + } + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationName sets the AssociationName field's value. +func (s *CreateAssociationBatchRequestEntry) SetAssociationName(v string) *CreateAssociationBatchRequestEntry { + s.AssociationName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *CreateAssociationBatchRequestEntry) SetDocumentVersion(v string) *CreateAssociationBatchRequestEntry { + s.DocumentVersion = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *CreateAssociationBatchRequestEntry) SetInstanceId(v string) *CreateAssociationBatchRequestEntry { + s.InstanceId = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateAssociationBatchRequestEntry) SetName(v string) *CreateAssociationBatchRequestEntry { + s.Name = &v + return s +} + +// SetOutputLocation sets the OutputLocation field's value. +func (s *CreateAssociationBatchRequestEntry) SetOutputLocation(v *InstanceAssociationOutputLocation) *CreateAssociationBatchRequestEntry { + s.OutputLocation = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *CreateAssociationBatchRequestEntry) SetParameters(v map[string][]*string) *CreateAssociationBatchRequestEntry { + s.Parameters = v + return s +} + +// SetScheduleExpression sets the ScheduleExpression field's value. +func (s *CreateAssociationBatchRequestEntry) SetScheduleExpression(v string) *CreateAssociationBatchRequestEntry { + s.ScheduleExpression = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *CreateAssociationBatchRequestEntry) SetTargets(v []*Target) *CreateAssociationBatchRequestEntry { + s.Targets = v + return s +} + +type CreateAssociationInput struct { + _ struct{} `type:"structure"` + + // Specify a descriptive name for the association. + AssociationName *string `type:"string"` + + // The document version you want to associate with the target(s). Can be a specific + // version or the default version. + DocumentVersion *string `type:"string"` + + // The instance ID. + InstanceId *string `type:"string"` + + // The name of the Systems Manager document. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // An Amazon S3 bucket where you want to store the output details of the request. + OutputLocation *InstanceAssociationOutputLocation `type:"structure"` + + // The parameters for the documents runtime configuration. + Parameters map[string][]*string `type:"map"` + + // A cron expression when the association will be applied to the target(s). + ScheduleExpression *string `min:"1" type:"string"` + + // The targets (either instances or tags) for the association. 
+ Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s CreateAssociationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAssociationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateAssociationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAssociationInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.ScheduleExpression != nil && len(*s.ScheduleExpression) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduleExpression", 1)) + } + if s.OutputLocation != nil { + if err := s.OutputLocation.Validate(); err != nil { + invalidParams.AddNested("OutputLocation", err.(request.ErrInvalidParams)) + } + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationName sets the AssociationName field's value. +func (s *CreateAssociationInput) SetAssociationName(v string) *CreateAssociationInput { + s.AssociationName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *CreateAssociationInput) SetDocumentVersion(v string) *CreateAssociationInput { + s.DocumentVersion = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *CreateAssociationInput) SetInstanceId(v string) *CreateAssociationInput { + s.InstanceId = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateAssociationInput) SetName(v string) *CreateAssociationInput { + s.Name = &v + return s +} + +// SetOutputLocation sets the OutputLocation field's value. +func (s *CreateAssociationInput) SetOutputLocation(v *InstanceAssociationOutputLocation) *CreateAssociationInput { + s.OutputLocation = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *CreateAssociationInput) SetParameters(v map[string][]*string) *CreateAssociationInput { + s.Parameters = v + return s +} + +// SetScheduleExpression sets the ScheduleExpression field's value. +func (s *CreateAssociationInput) SetScheduleExpression(v string) *CreateAssociationInput { + s.ScheduleExpression = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *CreateAssociationInput) SetTargets(v []*Target) *CreateAssociationInput { + s.Targets = v + return s +} + +type CreateAssociationOutput struct { + _ struct{} `type:"structure"` + + // Information about the association. + AssociationDescription *AssociationDescription `type:"structure"` +} + +// String returns the string representation +func (s CreateAssociationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAssociationOutput) GoString() string { + return s.String() +} + +// SetAssociationDescription sets the AssociationDescription field's value. +func (s *CreateAssociationOutput) SetAssociationDescription(v *AssociationDescription) *CreateAssociationOutput { + s.AssociationDescription = v + return s +} + +type CreateDocumentInput struct { + _ struct{} `type:"structure"` + + // A valid JSON or YAML string. 
+ // + // Content is a required field + Content *string `min:"1" type:"string" required:"true"` + + // Specify the document format for the request. The document format can be either + // JSON or YAML. JSON is the default format. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The type of document to create. Valid document types include: Policy, Automation, + // and Command. + DocumentType *string `type:"string" enum:"DocumentType"` + + // A name for the Systems Manager document. + // + // Do not use the following to begin the names of documents you create. They + // are reserved by AWS for use as document prefixes: + // + // aws + // + // amazon + // + // amzn + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // Specify a target type to define the kinds of resources the document can run + // on. For example, to run a document on EC2 instances, specify the following + // value: /AWS::EC2::Instance. If you specify a value of '/' the document can + // run on all types of resources. If you don't specify a value, the document + // can't run on any resources. For a list of valid resource types, see AWS Resource + // Types Reference (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide. + TargetType *string `type:"string"` +} + +// String returns the string representation +func (s CreateDocumentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDocumentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDocumentInput"} + if s.Content == nil { + invalidParams.Add(request.NewErrParamRequired("Content")) + } + if s.Content != nil && len(*s.Content) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Content", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContent sets the Content field's value. +func (s *CreateDocumentInput) SetContent(v string) *CreateDocumentInput { + s.Content = &v + return s +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *CreateDocumentInput) SetDocumentFormat(v string) *CreateDocumentInput { + s.DocumentFormat = &v + return s +} + +// SetDocumentType sets the DocumentType field's value. +func (s *CreateDocumentInput) SetDocumentType(v string) *CreateDocumentInput { + s.DocumentType = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateDocumentInput) SetName(v string) *CreateDocumentInput { + s.Name = &v + return s +} + +// SetTargetType sets the TargetType field's value. +func (s *CreateDocumentInput) SetTargetType(v string) *CreateDocumentInput { + s.TargetType = &v + return s +} + +type CreateDocumentOutput struct { + _ struct{} `type:"structure"` + + // Information about the Systems Manager document. + DocumentDescription *DocumentDescription `type:"structure"` +} + +// String returns the string representation +func (s CreateDocumentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDocumentOutput) GoString() string { + return s.String() +} + +// SetDocumentDescription sets the DocumentDescription field's value. 
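// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// a small example of preparing a CreateDocumentInput with the setters above.
// The document name and content are placeholders; Content and Name are
// required, names must not begin with aws, amazon, or amzn, and JSON is the
// default DocumentFormat when none is specified.
func exampleDocumentInput(content string) (*CreateDocumentInput, error) {
	input := (&CreateDocumentInput{}).
		SetName("Custom-ConfigureAppServer").
		SetContent(content).
		SetDocumentType("Command").
		SetDocumentFormat("YAML").
		SetTargetType("/AWS::EC2::Instance")
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}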
+func (s *CreateDocumentOutput) SetDocumentDescription(v *DocumentDescription) *CreateDocumentOutput { + s.DocumentDescription = v + return s +} + +type CreateMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // Enables a Maintenance Window task to execute on managed instances, even if + // you have not registered those instances as targets. If enabled, then you + // must specify the unregistered instances (by instance ID) when you register + // a task with the Maintenance Window + // + // If you don't enable this option, then you must specify previously-registered + // targets when you register a task with the Maintenance Window. + // + // AllowUnassociatedTargets is a required field + AllowUnassociatedTargets *bool `type:"boolean" required:"true"` + + // User-provided idempotency token. + ClientToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // The number of hours before the end of the Maintenance Window that Systems + // Manager stops scheduling new tasks for execution. + // + // Cutoff is a required field + Cutoff *int64 `type:"integer" required:"true"` + + // An optional description for the Maintenance Window. We recommend specifying + // a description to help you organize your Maintenance Windows. + Description *string `min:"1" type:"string"` + + // The duration of the Maintenance Window in hours. + // + // Duration is a required field + Duration *int64 `min:"1" type:"integer" required:"true"` + + // The name of the Maintenance Window. + // + // Name is a required field + Name *string `min:"3" type:"string" required:"true"` + + // The schedule of the Maintenance Window in the form of a cron or rate expression. + // + // Schedule is a required field + Schedule *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateMaintenanceWindowInput"} + if s.AllowUnassociatedTargets == nil { + invalidParams.Add(request.NewErrParamRequired("AllowUnassociatedTargets")) + } + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.Cutoff == nil { + invalidParams.Add(request.NewErrParamRequired("Cutoff")) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.Duration == nil { + invalidParams.Add(request.NewErrParamRequired("Duration")) + } + if s.Duration != nil && *s.Duration < 1 { + invalidParams.Add(request.NewErrParamMinValue("Duration", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.Schedule == nil { + invalidParams.Add(request.NewErrParamRequired("Schedule")) + } + if s.Schedule != nil && len(*s.Schedule) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Schedule", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowUnassociatedTargets sets the AllowUnassociatedTargets field's value. 
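// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// an example CreateMaintenanceWindowInput built with the setters that follow.
// All values are placeholders; Name (3+ characters), Schedule, Duration,
// Cutoff, and AllowUnassociatedTargets are required, and Cutoff is the number
// of hours before the end of the window when new tasks stop being scheduled.
func exampleMaintenanceWindow() (*CreateMaintenanceWindowInput, error) {
	input := (&CreateMaintenanceWindowInput{}).
		SetName("weekly-patching").
		SetDescription("Weekly patching window for the web fleet").
		SetSchedule("cron(0 4 ? * SUN *)").
		SetDuration(4).
		SetCutoff(1).
		SetAllowUnassociatedTargets(false)
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}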
+func (s *CreateMaintenanceWindowInput) SetAllowUnassociatedTargets(v bool) *CreateMaintenanceWindowInput { + s.AllowUnassociatedTargets = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateMaintenanceWindowInput) SetClientToken(v string) *CreateMaintenanceWindowInput { + s.ClientToken = &v + return s +} + +// SetCutoff sets the Cutoff field's value. +func (s *CreateMaintenanceWindowInput) SetCutoff(v int64) *CreateMaintenanceWindowInput { + s.Cutoff = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateMaintenanceWindowInput) SetDescription(v string) *CreateMaintenanceWindowInput { + s.Description = &v + return s +} + +// SetDuration sets the Duration field's value. +func (s *CreateMaintenanceWindowInput) SetDuration(v int64) *CreateMaintenanceWindowInput { + s.Duration = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateMaintenanceWindowInput) SetName(v string) *CreateMaintenanceWindowInput { + s.Name = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *CreateMaintenanceWindowInput) SetSchedule(v string) *CreateMaintenanceWindowInput { + s.Schedule = &v + return s +} + +type CreateMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the created Maintenance Window. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s CreateMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowId sets the WindowId field's value. +func (s *CreateMaintenanceWindowOutput) SetWindowId(v string) *CreateMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +type CreatePatchBaselineInput struct { + _ struct{} `type:"structure"` + + // A set of rules used to include patches in the baseline. + ApprovalRules *PatchRuleGroup `type:"structure"` + + // A list of explicitly approved patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. + ApprovedPatches []*string `type:"list"` + + // Defines the compliance level for approved patches. This means that if an + // approved patch is reported as missing, this is the severity of the compliance + // violation. The default value is UNSPECIFIED. + ApprovedPatchesComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` + + // Indicates whether the list of approved patches includes non-security updates + // that should be applied to the instances. The default value is 'false'. Applies + // to Linux instances only. + ApprovedPatchesEnableNonSecurity *bool `type:"boolean"` + + // User-provided idempotency token. + ClientToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // A description of the patch baseline. + Description *string `min:"1" type:"string"` + + // A set of global filters used to exclude patches from the baseline. + GlobalFilters *PatchFilterGroup `type:"structure"` + + // The name of the patch baseline. + // + // Name is a required field + Name *string `min:"3" type:"string" required:"true"` + + // Defines the operating system the patch baseline applies to. 
The Default value + // is WINDOWS. + OperatingSystem *string `type:"string" enum:"OperatingSystem"` + + // A list of explicitly rejected patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. + RejectedPatches []*string `type:"list"` + + // Information about the patches to use to update the instances, including target + // operating systems and source repositories. Applies to Linux instances only. + Sources []*PatchSource `type:"list"` +} + +// String returns the string representation +func (s CreatePatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePatchBaselineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePatchBaselineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePatchBaselineInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.ApprovalRules != nil { + if err := s.ApprovalRules.Validate(); err != nil { + invalidParams.AddNested("ApprovalRules", err.(request.ErrInvalidParams)) + } + } + if s.GlobalFilters != nil { + if err := s.GlobalFilters.Validate(); err != nil { + invalidParams.AddNested("GlobalFilters", err.(request.ErrInvalidParams)) + } + } + if s.Sources != nil { + for i, v := range s.Sources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Sources", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApprovalRules sets the ApprovalRules field's value. +func (s *CreatePatchBaselineInput) SetApprovalRules(v *PatchRuleGroup) *CreatePatchBaselineInput { + s.ApprovalRules = v + return s +} + +// SetApprovedPatches sets the ApprovedPatches field's value. +func (s *CreatePatchBaselineInput) SetApprovedPatches(v []*string) *CreatePatchBaselineInput { + s.ApprovedPatches = v + return s +} + +// SetApprovedPatchesComplianceLevel sets the ApprovedPatchesComplianceLevel field's value. +func (s *CreatePatchBaselineInput) SetApprovedPatchesComplianceLevel(v string) *CreatePatchBaselineInput { + s.ApprovedPatchesComplianceLevel = &v + return s +} + +// SetApprovedPatchesEnableNonSecurity sets the ApprovedPatchesEnableNonSecurity field's value. +func (s *CreatePatchBaselineInput) SetApprovedPatchesEnableNonSecurity(v bool) *CreatePatchBaselineInput { + s.ApprovedPatchesEnableNonSecurity = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreatePatchBaselineInput) SetClientToken(v string) *CreatePatchBaselineInput { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. 
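// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// a possible CreatePatchBaselineInput using the setters around this point. The
// baseline name, patch identifier, and compliance level are placeholders; Name
// is required, and OperatingSystem falls back to WINDOWS when it is not set,
// which is why this example models a Windows KB patch.
func examplePatchBaseline() (*CreatePatchBaselineInput, error) {
	kb := "KB4010320" // hypothetical approved patch identifier
	input := (&CreatePatchBaselineInput{}).
		SetName("example-windows-baseline").
		SetDescription("Example baseline; OperatingSystem defaults to WINDOWS").
		SetApprovedPatches([]*string{&kb}).
		SetApprovedPatchesComplianceLevel("CRITICAL")
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}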
+func (s *CreatePatchBaselineInput) SetDescription(v string) *CreatePatchBaselineInput { + s.Description = &v + return s +} + +// SetGlobalFilters sets the GlobalFilters field's value. +func (s *CreatePatchBaselineInput) SetGlobalFilters(v *PatchFilterGroup) *CreatePatchBaselineInput { + s.GlobalFilters = v + return s +} + +// SetName sets the Name field's value. +func (s *CreatePatchBaselineInput) SetName(v string) *CreatePatchBaselineInput { + s.Name = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *CreatePatchBaselineInput) SetOperatingSystem(v string) *CreatePatchBaselineInput { + s.OperatingSystem = &v + return s +} + +// SetRejectedPatches sets the RejectedPatches field's value. +func (s *CreatePatchBaselineInput) SetRejectedPatches(v []*string) *CreatePatchBaselineInput { + s.RejectedPatches = v + return s +} + +// SetSources sets the Sources field's value. +func (s *CreatePatchBaselineInput) SetSources(v []*PatchSource) *CreatePatchBaselineInput { + s.Sources = v + return s +} + +type CreatePatchBaselineOutput struct { + _ struct{} `type:"structure"` + + // The ID of the created patch baseline. + BaselineId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s CreatePatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePatchBaselineOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *CreatePatchBaselineOutput) SetBaselineId(v string) *CreatePatchBaselineOutput { + s.BaselineId = &v + return s +} + +type CreateResourceDataSyncInput struct { + _ struct{} `type:"structure"` + + // Amazon S3 configuration details for the sync. + // + // S3Destination is a required field + S3Destination *ResourceDataSyncS3Destination `type:"structure" required:"true"` + + // A name for the configuration. + // + // SyncName is a required field + SyncName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateResourceDataSyncInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateResourceDataSyncInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateResourceDataSyncInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateResourceDataSyncInput"} + if s.S3Destination == nil { + invalidParams.Add(request.NewErrParamRequired("S3Destination")) + } + if s.SyncName == nil { + invalidParams.Add(request.NewErrParamRequired("SyncName")) + } + if s.SyncName != nil && len(*s.SyncName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SyncName", 1)) + } + if s.S3Destination != nil { + if err := s.S3Destination.Validate(); err != nil { + invalidParams.AddNested("S3Destination", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3Destination sets the S3Destination field's value. +func (s *CreateResourceDataSyncInput) SetS3Destination(v *ResourceDataSyncS3Destination) *CreateResourceDataSyncInput { + s.S3Destination = v + return s +} + +// SetSyncName sets the SyncName field's value. 
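// Editor's note (illustrative sketch, not part of the generated AWS SDK source):
// a minimal CreateResourceDataSyncInput; both S3Destination and SyncName are
// required. The destination is taken as a parameter because
// ResourceDataSyncS3Destination is defined elsewhere in this file, and the sync
// name is a placeholder.
func exampleResourceDataSync(dest *ResourceDataSyncS3Destination) (*CreateResourceDataSyncInput, error) {
	input := (&CreateResourceDataSyncInput{}).
		SetSyncName("inventory-sync").
		SetS3Destination(dest)
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}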
+func (s *CreateResourceDataSyncInput) SetSyncName(v string) *CreateResourceDataSyncInput { + s.SyncName = &v + return s +} + +type CreateResourceDataSyncOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateResourceDataSyncOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateResourceDataSyncOutput) GoString() string { + return s.String() +} + +type DeleteActivationInput struct { + _ struct{} `type:"structure"` + + // The ID of the activation that you want to delete. + // + // ActivationId is a required field + ActivationId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteActivationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteActivationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteActivationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteActivationInput"} + if s.ActivationId == nil { + invalidParams.Add(request.NewErrParamRequired("ActivationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActivationId sets the ActivationId field's value. +func (s *DeleteActivationInput) SetActivationId(v string) *DeleteActivationInput { + s.ActivationId = &v + return s +} + +type DeleteActivationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteActivationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteActivationOutput) GoString() string { + return s.String() +} + +type DeleteAssociationInput struct { + _ struct{} `type:"structure"` + + // The association ID that you want to delete. + AssociationId *string `type:"string"` + + // The ID of the instance. + InstanceId *string `type:"string"` + + // The name of the Systems Manager document. + Name *string `type:"string"` +} + +// String returns the string representation +func (s DeleteAssociationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAssociationInput) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *DeleteAssociationInput) SetAssociationId(v string) *DeleteAssociationInput { + s.AssociationId = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DeleteAssociationInput) SetInstanceId(v string) *DeleteAssociationInput { + s.InstanceId = &v + return s +} + +// SetName sets the Name field's value. +func (s *DeleteAssociationInput) SetName(v string) *DeleteAssociationInput { + s.Name = &v + return s +} + +type DeleteAssociationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAssociationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAssociationOutput) GoString() string { + return s.String() +} + +type DeleteDocumentInput struct { + _ struct{} `type:"structure"` + + // The name of the document. 
+ // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDocumentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDocumentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDocumentInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DeleteDocumentInput) SetName(v string) *DeleteDocumentInput { + s.Name = &v + return s +} + +type DeleteDocumentOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDocumentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDocumentOutput) GoString() string { + return s.String() +} + +type DeleteInventoryInput struct { + _ struct{} `type:"structure"` + + // User-provided idempotency token. + ClientToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // Use this option to view a summary of the deletion request without deleting + // any data or the data type. This option is useful when you only want to understand + // what will be deleted. Once you validate that the data to be deleted is what + // you intend to delete, you can run the same command without specifying the + // DryRun option. + DryRun *bool `type:"boolean"` + + // Use the SchemaDeleteOption to delete a custom inventory type (schema). If + // you don't choose this option, the system only deletes existing inventory + // data associated with the custom inventory type. Choose one of the following + // options: + // + // DisableSchema: If you choose this option, the system ignores all inventory + // data for the specified version, and any earlier versions. To enable this + // schema again, you must call the PutInventory action for a version greater + // than the disbled version. + // + // DeleteSchema: This option deletes the specified custom type from the Inventory + // service. You can recreate the schema later, if you want. + SchemaDeleteOption *string `type:"string" enum:"InventorySchemaDeleteOption"` + + // The name of the custom inventory type for which you want to delete either + // all previously collected data, or the inventory type itself. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteInventoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInventoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteInventoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInventoryInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.TypeName == nil { + invalidParams.Add(request.NewErrParamRequired("TypeName")) + } + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *DeleteInventoryInput) SetClientToken(v string) *DeleteInventoryInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteInventoryInput) SetDryRun(v bool) *DeleteInventoryInput { + s.DryRun = &v + return s +} + +// SetSchemaDeleteOption sets the SchemaDeleteOption field's value. +func (s *DeleteInventoryInput) SetSchemaDeleteOption(v string) *DeleteInventoryInput { + s.SchemaDeleteOption = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *DeleteInventoryInput) SetTypeName(v string) *DeleteInventoryInput { + s.TypeName = &v + return s +} + +type DeleteInventoryOutput struct { + _ struct{} `type:"structure"` + + // Every DeleteInventory action is assigned a unique ID. This option returns + // a unique ID. You can use this ID to query the status of a delete operation. + // This option is useful for ensuring that a delete operation has completed + // before you begin other actions. + DeletionId *string `type:"string"` + + // A summary of the delete operation. For more information about this summary, + // see Understanding the Delete Inventory Summary (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-delete.html#sysman-inventory-delete-summary). + DeletionSummary *InventoryDeletionSummary `type:"structure"` + + // The name of the inventory data type specified in the request. + TypeName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteInventoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInventoryOutput) GoString() string { + return s.String() +} + +// SetDeletionId sets the DeletionId field's value. +func (s *DeleteInventoryOutput) SetDeletionId(v string) *DeleteInventoryOutput { + s.DeletionId = &v + return s +} + +// SetDeletionSummary sets the DeletionSummary field's value. +func (s *DeleteInventoryOutput) SetDeletionSummary(v *InventoryDeletionSummary) *DeleteInventoryOutput { + s.DeletionSummary = v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *DeleteInventoryOutput) SetTypeName(v string) *DeleteInventoryOutput { + s.TypeName = &v + return s +} + +type DeleteMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window to delete. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteMaintenanceWindowInput"} + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWindowId sets the WindowId field's value. +func (s *DeleteMaintenanceWindowInput) SetWindowId(v string) *DeleteMaintenanceWindowInput { + s.WindowId = &v + return s +} + +type DeleteMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the deleted Maintenance Window. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s DeleteMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowId sets the WindowId field's value. +func (s *DeleteMaintenanceWindowOutput) SetWindowId(v string) *DeleteMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +type DeleteParameterInput struct { + _ struct{} `type:"structure"` + + // The name of the parameter to delete. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteParameterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteParameterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteParameterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteParameterInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DeleteParameterInput) SetName(v string) *DeleteParameterInput { + s.Name = &v + return s +} + +type DeleteParameterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteParameterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteParameterOutput) GoString() string { + return s.String() +} + +type DeleteParametersInput struct { + _ struct{} `type:"structure"` + + // The names of the parameters to delete. + // + // Names is a required field + Names []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DeleteParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteParametersInput"} + if s.Names == nil { + invalidParams.Add(request.NewErrParamRequired("Names")) + } + if s.Names != nil && len(s.Names) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Names", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNames sets the Names field's value. +func (s *DeleteParametersInput) SetNames(v []*string) *DeleteParametersInput { + s.Names = v + return s +} + +type DeleteParametersOutput struct { + _ struct{} `type:"structure"` + + // The names of the deleted parameters. + DeletedParameters []*string `min:"1" type:"list"` + + // The names of parameters that weren't deleted because the parameters are not + // valid. + InvalidParameters []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s DeleteParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteParametersOutput) GoString() string { + return s.String() +} + +// SetDeletedParameters sets the DeletedParameters field's value. +func (s *DeleteParametersOutput) SetDeletedParameters(v []*string) *DeleteParametersOutput { + s.DeletedParameters = v + return s +} + +// SetInvalidParameters sets the InvalidParameters field's value. +func (s *DeleteParametersOutput) SetInvalidParameters(v []*string) *DeleteParametersOutput { + s.InvalidParameters = v + return s +} + +type DeletePatchBaselineInput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline to delete. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePatchBaselineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePatchBaselineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePatchBaselineInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBaselineId sets the BaselineId field's value. +func (s *DeletePatchBaselineInput) SetBaselineId(v string) *DeletePatchBaselineInput { + s.BaselineId = &v + return s +} + +type DeletePatchBaselineOutput struct { + _ struct{} `type:"structure"` + + // The ID of the deleted patch baseline. + BaselineId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s DeletePatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePatchBaselineOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *DeletePatchBaselineOutput) SetBaselineId(v string) *DeletePatchBaselineOutput { + s.BaselineId = &v + return s +} + +type DeleteResourceDataSyncInput struct { + _ struct{} `type:"structure"` + + // The name of the configuration to delete. 
+ // + // SyncName is a required field + SyncName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteResourceDataSyncInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourceDataSyncInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteResourceDataSyncInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteResourceDataSyncInput"} + if s.SyncName == nil { + invalidParams.Add(request.NewErrParamRequired("SyncName")) + } + if s.SyncName != nil && len(*s.SyncName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SyncName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSyncName sets the SyncName field's value. +func (s *DeleteResourceDataSyncInput) SetSyncName(v string) *DeleteResourceDataSyncInput { + s.SyncName = &v + return s +} + +type DeleteResourceDataSyncOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteResourceDataSyncOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourceDataSyncOutput) GoString() string { + return s.String() +} + +type DeregisterManagedInstanceInput struct { + _ struct{} `type:"structure"` + + // The ID assigned to the managed instance when you registered it using the + // activation process. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeregisterManagedInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterManagedInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeregisterManagedInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterManagedInstanceInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DeregisterManagedInstanceInput) SetInstanceId(v string) *DeregisterManagedInstanceInput { + s.InstanceId = &v + return s +} + +type DeregisterManagedInstanceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeregisterManagedInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterManagedInstanceOutput) GoString() string { + return s.String() +} + +type DeregisterPatchBaselineForPatchGroupInput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline to deregister the patch group from. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` + + // The name of the patch group that should be deregistered from the patch baseline. 
+ // + // PatchGroup is a required field + PatchGroup *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeregisterPatchBaselineForPatchGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterPatchBaselineForPatchGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeregisterPatchBaselineForPatchGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterPatchBaselineForPatchGroupInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + if s.PatchGroup == nil { + invalidParams.Add(request.NewErrParamRequired("PatchGroup")) + } + if s.PatchGroup != nil && len(*s.PatchGroup) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PatchGroup", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBaselineId sets the BaselineId field's value. +func (s *DeregisterPatchBaselineForPatchGroupInput) SetBaselineId(v string) *DeregisterPatchBaselineForPatchGroupInput { + s.BaselineId = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *DeregisterPatchBaselineForPatchGroupInput) SetPatchGroup(v string) *DeregisterPatchBaselineForPatchGroupInput { + s.PatchGroup = &v + return s +} + +type DeregisterPatchBaselineForPatchGroupOutput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline the patch group was deregistered from. + BaselineId *string `min:"20" type:"string"` + + // The name of the patch group deregistered from the patch baseline. + PatchGroup *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeregisterPatchBaselineForPatchGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterPatchBaselineForPatchGroupOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *DeregisterPatchBaselineForPatchGroupOutput) SetBaselineId(v string) *DeregisterPatchBaselineForPatchGroupOutput { + s.BaselineId = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *DeregisterPatchBaselineForPatchGroupOutput) SetPatchGroup(v string) *DeregisterPatchBaselineForPatchGroupOutput { + s.PatchGroup = &v + return s +} + +type DeregisterTargetFromMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // The system checks if the target is being referenced by a task. If the target + // is being referenced, the system returns an error and does not deregister + // the target from the Maintenance Window. + Safe *bool `type:"boolean"` + + // The ID of the Maintenance Window the target should be removed from. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` + + // The ID of the target definition to remove. 
+ // + // WindowTargetId is a required field + WindowTargetId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeregisterTargetFromMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTargetFromMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeregisterTargetFromMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterTargetFromMaintenanceWindowInput"} + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.WindowTargetId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowTargetId")) + } + if s.WindowTargetId != nil && len(*s.WindowTargetId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowTargetId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSafe sets the Safe field's value. +func (s *DeregisterTargetFromMaintenanceWindowInput) SetSafe(v bool) *DeregisterTargetFromMaintenanceWindowInput { + s.Safe = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTargetFromMaintenanceWindowInput) SetWindowId(v string) *DeregisterTargetFromMaintenanceWindowInput { + s.WindowId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *DeregisterTargetFromMaintenanceWindowInput) SetWindowTargetId(v string) *DeregisterTargetFromMaintenanceWindowInput { + s.WindowTargetId = &v + return s +} + +type DeregisterTargetFromMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window the target was removed from. + WindowId *string `min:"20" type:"string"` + + // The ID of the removed target definition. + WindowTargetId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s DeregisterTargetFromMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTargetFromMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTargetFromMaintenanceWindowOutput) SetWindowId(v string) *DeregisterTargetFromMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *DeregisterTargetFromMaintenanceWindowOutput) SetWindowTargetId(v string) *DeregisterTargetFromMaintenanceWindowOutput { + s.WindowTargetId = &v + return s +} + +type DeregisterTaskFromMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window the task should be removed from. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` + + // The ID of the task to remove from the Maintenance Window. 
+ // + // WindowTaskId is a required field + WindowTaskId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeregisterTaskFromMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTaskFromMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeregisterTaskFromMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterTaskFromMaintenanceWindowInput"} + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.WindowTaskId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowTaskId")) + } + if s.WindowTaskId != nil && len(*s.WindowTaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowTaskId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTaskFromMaintenanceWindowInput) SetWindowId(v string) *DeregisterTaskFromMaintenanceWindowInput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *DeregisterTaskFromMaintenanceWindowInput) SetWindowTaskId(v string) *DeregisterTaskFromMaintenanceWindowInput { + s.WindowTaskId = &v + return s +} + +type DeregisterTaskFromMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window the task was removed from. + WindowId *string `min:"20" type:"string"` + + // The ID of the task removed from the Maintenance Window. + WindowTaskId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s DeregisterTaskFromMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTaskFromMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTaskFromMaintenanceWindowOutput) SetWindowId(v string) *DeregisterTaskFromMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *DeregisterTaskFromMaintenanceWindowOutput) SetWindowTaskId(v string) *DeregisterTaskFromMaintenanceWindowOutput { + s.WindowTaskId = &v + return s +} + +// Filter for the DescribeActivation API. +type DescribeActivationsFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + FilterKey *string `type:"string" enum:"DescribeActivationsFilterKeys"` + + // The filter values. + FilterValues []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeActivationsFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeActivationsFilter) GoString() string { + return s.String() +} + +// SetFilterKey sets the FilterKey field's value. +func (s *DescribeActivationsFilter) SetFilterKey(v string) *DescribeActivationsFilter { + s.FilterKey = &v + return s +} + +// SetFilterValues sets the FilterValues field's value. 
+func (s *DescribeActivationsFilter) SetFilterValues(v []*string) *DescribeActivationsFilter { + s.FilterValues = v + return s +} + +type DescribeActivationsInput struct { + _ struct{} `type:"structure"` + + // A filter to view information about your activations. + Filters []*DescribeActivationsFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeActivationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeActivationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeActivationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeActivationsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeActivationsInput) SetFilters(v []*DescribeActivationsFilter) *DescribeActivationsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeActivationsInput) SetMaxResults(v int64) *DescribeActivationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeActivationsInput) SetNextToken(v string) *DescribeActivationsInput { + s.NextToken = &v + return s +} + +type DescribeActivationsOutput struct { + _ struct{} `type:"structure"` + + // A list of activations for your AWS account. + ActivationList []*Activation `type:"list"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeActivationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeActivationsOutput) GoString() string { + return s.String() +} + +// SetActivationList sets the ActivationList field's value. +func (s *DescribeActivationsOutput) SetActivationList(v []*Activation) *DescribeActivationsOutput { + s.ActivationList = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeActivationsOutput) SetNextToken(v string) *DescribeActivationsOutput { + s.NextToken = &v + return s +} + +type DescribeAssociationInput struct { + _ struct{} `type:"structure"` + + // The association ID for which you want information. + AssociationId *string `type:"string"` + + // Specify the association version to retrieve. To view the latest version, + // either specify $LATEST for this parameter, or omit this parameter. To view + // a list of all associations for an instance, use ListInstanceAssociations. + // To get a list of versions for a specific association, use ListAssociationVersions. + AssociationVersion *string `type:"string"` + + // The instance ID. + InstanceId *string `type:"string"` + + // The name of the Systems Manager document. 
+ Name *string `type:"string"` +} + +// String returns the string representation +func (s DescribeAssociationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAssociationInput) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *DescribeAssociationInput) SetAssociationId(v string) *DescribeAssociationInput { + s.AssociationId = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *DescribeAssociationInput) SetAssociationVersion(v string) *DescribeAssociationInput { + s.AssociationVersion = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeAssociationInput) SetInstanceId(v string) *DescribeAssociationInput { + s.InstanceId = &v + return s +} + +// SetName sets the Name field's value. +func (s *DescribeAssociationInput) SetName(v string) *DescribeAssociationInput { + s.Name = &v + return s +} + +type DescribeAssociationOutput struct { + _ struct{} `type:"structure"` + + // Information about the association. + AssociationDescription *AssociationDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeAssociationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAssociationOutput) GoString() string { + return s.String() +} + +// SetAssociationDescription sets the AssociationDescription field's value. +func (s *DescribeAssociationOutput) SetAssociationDescription(v *AssociationDescription) *DescribeAssociationOutput { + s.AssociationDescription = v + return s +} + +type DescribeAutomationExecutionsInput struct { + _ struct{} `type:"structure"` + + // Filters used to limit the scope of executions that are requested. + Filters []*AutomationExecutionFilter `min:"1" type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeAutomationExecutionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAutomationExecutionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeAutomationExecutionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAutomationExecutionsInput"} + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. 
+func (s *DescribeAutomationExecutionsInput) SetFilters(v []*AutomationExecutionFilter) *DescribeAutomationExecutionsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeAutomationExecutionsInput) SetMaxResults(v int64) *DescribeAutomationExecutionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAutomationExecutionsInput) SetNextToken(v string) *DescribeAutomationExecutionsInput { + s.NextToken = &v + return s +} + +type DescribeAutomationExecutionsOutput struct { + _ struct{} `type:"structure"` + + // The list of details about each automation execution which has occurred which + // matches the filter specification, if any. + AutomationExecutionMetadataList []*AutomationExecutionMetadata `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeAutomationExecutionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAutomationExecutionsOutput) GoString() string { + return s.String() +} + +// SetAutomationExecutionMetadataList sets the AutomationExecutionMetadataList field's value. +func (s *DescribeAutomationExecutionsOutput) SetAutomationExecutionMetadataList(v []*AutomationExecutionMetadata) *DescribeAutomationExecutionsOutput { + s.AutomationExecutionMetadataList = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAutomationExecutionsOutput) SetNextToken(v string) *DescribeAutomationExecutionsOutput { + s.NextToken = &v + return s +} + +type DescribeAutomationStepExecutionsInput struct { + _ struct{} `type:"structure"` + + // The Automation execution ID for which you want step execution descriptions. + // + // AutomationExecutionId is a required field + AutomationExecutionId *string `min:"36" type:"string" required:"true"` + + // One or more filters to limit the number of step executions returned by the + // request. + Filters []*StepExecutionFilter `min:"1" type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // A boolean that indicates whether to list step executions in reverse order + // by start time. The default value is false. + ReverseOrder *bool `type:"boolean"` +} + +// String returns the string representation +func (s DescribeAutomationStepExecutionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAutomationStepExecutionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeAutomationStepExecutionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAutomationStepExecutionsInput"} + if s.AutomationExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("AutomationExecutionId")) + } + if s.AutomationExecutionId != nil && len(*s.AutomationExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("AutomationExecutionId", 36)) + } + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutomationExecutionId sets the AutomationExecutionId field's value. +func (s *DescribeAutomationStepExecutionsInput) SetAutomationExecutionId(v string) *DescribeAutomationStepExecutionsInput { + s.AutomationExecutionId = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeAutomationStepExecutionsInput) SetFilters(v []*StepExecutionFilter) *DescribeAutomationStepExecutionsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeAutomationStepExecutionsInput) SetMaxResults(v int64) *DescribeAutomationStepExecutionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAutomationStepExecutionsInput) SetNextToken(v string) *DescribeAutomationStepExecutionsInput { + s.NextToken = &v + return s +} + +// SetReverseOrder sets the ReverseOrder field's value. +func (s *DescribeAutomationStepExecutionsInput) SetReverseOrder(v bool) *DescribeAutomationStepExecutionsInput { + s.ReverseOrder = &v + return s +} + +type DescribeAutomationStepExecutionsOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // A list of details about the current state of all steps that make up an execution. + StepExecutions []*StepExecution `type:"list"` +} + +// String returns the string representation +func (s DescribeAutomationStepExecutionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAutomationStepExecutionsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAutomationStepExecutionsOutput) SetNextToken(v string) *DescribeAutomationStepExecutionsOutput { + s.NextToken = &v + return s +} + +// SetStepExecutions sets the StepExecutions field's value. +func (s *DescribeAutomationStepExecutionsOutput) SetStepExecutions(v []*StepExecution) *DescribeAutomationStepExecutionsOutput { + s.StepExecutions = v + return s +} + +type DescribeAvailablePatchesInput struct { + _ struct{} `type:"structure"` + + // Filters used to scope down the returned patches. + Filters []*PatchOrchestratorFilter `type:"list"` + + // The maximum number of patches to return (per page). + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) 
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeAvailablePatchesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAvailablePatchesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeAvailablePatchesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAvailablePatchesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeAvailablePatchesInput) SetFilters(v []*PatchOrchestratorFilter) *DescribeAvailablePatchesInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeAvailablePatchesInput) SetMaxResults(v int64) *DescribeAvailablePatchesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAvailablePatchesInput) SetNextToken(v string) *DescribeAvailablePatchesInput { + s.NextToken = &v + return s +} + +type DescribeAvailablePatchesOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // An array of patches. Each entry in the array is a patch structure. + Patches []*Patch `type:"list"` +} + +// String returns the string representation +func (s DescribeAvailablePatchesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAvailablePatchesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAvailablePatchesOutput) SetNextToken(v string) *DescribeAvailablePatchesOutput { + s.NextToken = &v + return s +} + +// SetPatches sets the Patches field's value. +func (s *DescribeAvailablePatchesOutput) SetPatches(v []*Patch) *DescribeAvailablePatchesOutput { + s.Patches = v + return s +} + +type DescribeDocumentInput struct { + _ struct{} `type:"structure"` + + // The document version for which you want information. Can be a specific version + // or the default version. + DocumentVersion *string `type:"string"` + + // The name of the Systems Manager document. + // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeDocumentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDocumentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDocumentInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *DescribeDocumentInput) SetDocumentVersion(v string) *DescribeDocumentInput { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *DescribeDocumentInput) SetName(v string) *DescribeDocumentInput { + s.Name = &v + return s +} + +type DescribeDocumentOutput struct { + _ struct{} `type:"structure"` + + // Information about the Systems Manager document. + Document *DocumentDescription `type:"structure"` +} + +// String returns the string representation +func (s DescribeDocumentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDocumentOutput) GoString() string { + return s.String() +} + +// SetDocument sets the Document field's value. +func (s *DescribeDocumentOutput) SetDocument(v *DocumentDescription) *DescribeDocumentOutput { + s.Document = v + return s +} + +type DescribeDocumentPermissionInput struct { + _ struct{} `type:"structure"` + + // The name of the document for which you are the owner. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // The permission type for the document. The permission type can be Share. + // + // PermissionType is a required field + PermissionType *string `type:"string" required:"true" enum:"DocumentPermissionType"` +} + +// String returns the string representation +func (s DescribeDocumentPermissionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDocumentPermissionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDocumentPermissionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDocumentPermissionInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.PermissionType == nil { + invalidParams.Add(request.NewErrParamRequired("PermissionType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DescribeDocumentPermissionInput) SetName(v string) *DescribeDocumentPermissionInput { + s.Name = &v + return s +} + +// SetPermissionType sets the PermissionType field's value. +func (s *DescribeDocumentPermissionInput) SetPermissionType(v string) *DescribeDocumentPermissionInput { + s.PermissionType = &v + return s +} + +type DescribeDocumentPermissionOutput struct { + _ struct{} `type:"structure"` + + // The account IDs that have permission to use this document. The ID can be + // either an AWS account or All. + AccountIds []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeDocumentPermissionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDocumentPermissionOutput) GoString() string { + return s.String() +} + +// SetAccountIds sets the AccountIds field's value. 
+func (s *DescribeDocumentPermissionOutput) SetAccountIds(v []*string) *DescribeDocumentPermissionOutput { + s.AccountIds = v + return s +} + +type DescribeEffectiveInstanceAssociationsInput struct { + _ struct{} `type:"structure"` + + // The instance ID for which you want to view all associations. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEffectiveInstanceAssociationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEffectiveInstanceAssociationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEffectiveInstanceAssociationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEffectiveInstanceAssociationsInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeEffectiveInstanceAssociationsInput) SetInstanceId(v string) *DescribeEffectiveInstanceAssociationsInput { + s.InstanceId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeEffectiveInstanceAssociationsInput) SetMaxResults(v int64) *DescribeEffectiveInstanceAssociationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeEffectiveInstanceAssociationsInput) SetNextToken(v string) *DescribeEffectiveInstanceAssociationsInput { + s.NextToken = &v + return s +} + +type DescribeEffectiveInstanceAssociationsOutput struct { + _ struct{} `type:"structure"` + + // The associations for the requested instance. + Associations []*InstanceAssociation `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEffectiveInstanceAssociationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEffectiveInstanceAssociationsOutput) GoString() string { + return s.String() +} + +// SetAssociations sets the Associations field's value. +func (s *DescribeEffectiveInstanceAssociationsOutput) SetAssociations(v []*InstanceAssociation) *DescribeEffectiveInstanceAssociationsOutput { + s.Associations = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeEffectiveInstanceAssociationsOutput) SetNextToken(v string) *DescribeEffectiveInstanceAssociationsOutput { + s.NextToken = &v + return s +} + +type DescribeEffectivePatchesForPatchBaselineInput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline to retrieve the effective patches for. 
+ // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` + + // The maximum number of patches to return (per page). + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEffectivePatchesForPatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEffectivePatchesForPatchBaselineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEffectivePatchesForPatchBaselineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEffectivePatchesForPatchBaselineInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBaselineId sets the BaselineId field's value. +func (s *DescribeEffectivePatchesForPatchBaselineInput) SetBaselineId(v string) *DescribeEffectivePatchesForPatchBaselineInput { + s.BaselineId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeEffectivePatchesForPatchBaselineInput) SetMaxResults(v int64) *DescribeEffectivePatchesForPatchBaselineInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeEffectivePatchesForPatchBaselineInput) SetNextToken(v string) *DescribeEffectivePatchesForPatchBaselineInput { + s.NextToken = &v + return s +} + +type DescribeEffectivePatchesForPatchBaselineOutput struct { + _ struct{} `type:"structure"` + + // An array of patches and patch status. + EffectivePatches []*EffectivePatch `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEffectivePatchesForPatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEffectivePatchesForPatchBaselineOutput) GoString() string { + return s.String() +} + +// SetEffectivePatches sets the EffectivePatches field's value. +func (s *DescribeEffectivePatchesForPatchBaselineOutput) SetEffectivePatches(v []*EffectivePatch) *DescribeEffectivePatchesForPatchBaselineOutput { + s.EffectivePatches = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeEffectivePatchesForPatchBaselineOutput) SetNextToken(v string) *DescribeEffectivePatchesForPatchBaselineOutput { + s.NextToken = &v + return s +} + +type DescribeInstanceAssociationsStatusInput struct { + _ struct{} `type:"structure"` + + // The instance IDs for which you want association status information. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. 
+ MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceAssociationsStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceAssociationsStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInstanceAssociationsStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInstanceAssociationsStatusInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeInstanceAssociationsStatusInput) SetInstanceId(v string) *DescribeInstanceAssociationsStatusInput { + s.InstanceId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstanceAssociationsStatusInput) SetMaxResults(v int64) *DescribeInstanceAssociationsStatusInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceAssociationsStatusInput) SetNextToken(v string) *DescribeInstanceAssociationsStatusInput { + s.NextToken = &v + return s +} + +type DescribeInstanceAssociationsStatusOutput struct { + _ struct{} `type:"structure"` + + // Status information about the association. + InstanceAssociationStatusInfos []*InstanceAssociationStatusInfo `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceAssociationsStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceAssociationsStatusOutput) GoString() string { + return s.String() +} + +// SetInstanceAssociationStatusInfos sets the InstanceAssociationStatusInfos field's value. +func (s *DescribeInstanceAssociationsStatusOutput) SetInstanceAssociationStatusInfos(v []*InstanceAssociationStatusInfo) *DescribeInstanceAssociationsStatusOutput { + s.InstanceAssociationStatusInfos = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceAssociationsStatusOutput) SetNextToken(v string) *DescribeInstanceAssociationsStatusOutput { + s.NextToken = &v + return s +} + +type DescribeInstanceInformationInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of instances. + Filters []*InstanceInformationStringFilter `type:"list"` + + // One or more filters. Use a filter to return a more specific list of instances. + InstanceInformationFilterList []*InstanceInformationFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"5" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) 
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceInformationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceInformationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInstanceInformationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInstanceInformationInput"} + if s.MaxResults != nil && *s.MaxResults < 5 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 5)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + if s.InstanceInformationFilterList != nil { + for i, v := range s.InstanceInformationFilterList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InstanceInformationFilterList", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeInstanceInformationInput) SetFilters(v []*InstanceInformationStringFilter) *DescribeInstanceInformationInput { + s.Filters = v + return s +} + +// SetInstanceInformationFilterList sets the InstanceInformationFilterList field's value. +func (s *DescribeInstanceInformationInput) SetInstanceInformationFilterList(v []*InstanceInformationFilter) *DescribeInstanceInformationInput { + s.InstanceInformationFilterList = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstanceInformationInput) SetMaxResults(v int64) *DescribeInstanceInformationInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceInformationInput) SetNextToken(v string) *DescribeInstanceInformationInput { + s.NextToken = &v + return s +} + +type DescribeInstanceInformationOutput struct { + _ struct{} `type:"structure"` + + // The instance information list. + InstanceInformationList []*InstanceInformation `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceInformationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceInformationOutput) GoString() string { + return s.String() +} + +// SetInstanceInformationList sets the InstanceInformationList field's value. +func (s *DescribeInstanceInformationOutput) SetInstanceInformationList(v []*InstanceInformation) *DescribeInstanceInformationOutput { + s.InstanceInformationList = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeInstanceInformationOutput) SetNextToken(v string) *DescribeInstanceInformationOutput { + s.NextToken = &v + return s +} + +type DescribeInstancePatchStatesForPatchGroupInput struct { + _ struct{} `type:"structure"` + + // Each entry in the array is a structure containing: + // + // Key (string between 1 and 200 characters) + // + // Values (array containing a single string) + // + // Type (string "Equal", "NotEqual", "LessThan", "GreaterThan") + Filters []*InstancePatchStateFilter `type:"list"` + + // The maximum number of patches to return (per page). + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The name of the patch group for which the patch state information should + // be retrieved. + // + // PatchGroup is a required field + PatchGroup *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeInstancePatchStatesForPatchGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchStatesForPatchGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInstancePatchStatesForPatchGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInstancePatchStatesForPatchGroupInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.PatchGroup == nil { + invalidParams.Add(request.NewErrParamRequired("PatchGroup")) + } + if s.PatchGroup != nil && len(*s.PatchGroup) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PatchGroup", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeInstancePatchStatesForPatchGroupInput) SetFilters(v []*InstancePatchStateFilter) *DescribeInstancePatchStatesForPatchGroupInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstancePatchStatesForPatchGroupInput) SetMaxResults(v int64) *DescribeInstancePatchStatesForPatchGroupInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchStatesForPatchGroupInput) SetNextToken(v string) *DescribeInstancePatchStatesForPatchGroupInput { + s.NextToken = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *DescribeInstancePatchStatesForPatchGroupInput) SetPatchGroup(v string) *DescribeInstancePatchStatesForPatchGroupInput { + s.PatchGroup = &v + return s +} + +type DescribeInstancePatchStatesForPatchGroupOutput struct { + _ struct{} `type:"structure"` + + // The high-level patch state for the requested instances. + InstancePatchStates []*InstancePatchState `min:"1" type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. 
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstancePatchStatesForPatchGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchStatesForPatchGroupOutput) GoString() string { + return s.String() +} + +// SetInstancePatchStates sets the InstancePatchStates field's value. +func (s *DescribeInstancePatchStatesForPatchGroupOutput) SetInstancePatchStates(v []*InstancePatchState) *DescribeInstancePatchStatesForPatchGroupOutput { + s.InstancePatchStates = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchStatesForPatchGroupOutput) SetNextToken(v string) *DescribeInstancePatchStatesForPatchGroupOutput { + s.NextToken = &v + return s +} + +type DescribeInstancePatchStatesInput struct { + _ struct{} `type:"structure"` + + // The ID of the instance whose patch state information should be retrieved. + // + // InstanceIds is a required field + InstanceIds []*string `type:"list" required:"true"` + + // The maximum number of instances to return (per page). + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstancePatchStatesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchStatesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInstancePatchStatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInstancePatchStatesInput"} + if s.InstanceIds == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceIds")) + } + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DescribeInstancePatchStatesInput) SetInstanceIds(v []*string) *DescribeInstancePatchStatesInput { + s.InstanceIds = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstancePatchStatesInput) SetMaxResults(v int64) *DescribeInstancePatchStatesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchStatesInput) SetNextToken(v string) *DescribeInstancePatchStatesInput { + s.NextToken = &v + return s +} + +type DescribeInstancePatchStatesOutput struct { + _ struct{} `type:"structure"` + + // The high-level patch state for the requested instances. + InstancePatchStates []*InstancePatchState `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstancePatchStatesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchStatesOutput) GoString() string { + return s.String() +} + +// SetInstancePatchStates sets the InstancePatchStates field's value. 
+func (s *DescribeInstancePatchStatesOutput) SetInstancePatchStates(v []*InstancePatchState) *DescribeInstancePatchStatesOutput { + s.InstancePatchStates = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchStatesOutput) SetNextToken(v string) *DescribeInstancePatchStatesOutput { + s.NextToken = &v + return s +} + +type DescribeInstancePatchesInput struct { + _ struct{} `type:"structure"` + + // Each entry in the array is a structure containing: + // + // Key (string, between 1 and 128 characters) + // + // Values (array of strings, each string between 1 and 256 characters) + Filters []*PatchOrchestratorFilter `type:"list"` + + // The ID of the instance whose patch state information should be retrieved. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The maximum number of patches to return (per page). + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstancePatchesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInstancePatchesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInstancePatchesInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeInstancePatchesInput) SetFilters(v []*PatchOrchestratorFilter) *DescribeInstancePatchesInput { + s.Filters = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeInstancePatchesInput) SetInstanceId(v string) *DescribeInstancePatchesInput { + s.InstanceId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstancePatchesInput) SetMaxResults(v int64) *DescribeInstancePatchesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchesInput) SetNextToken(v string) *DescribeInstancePatchesInput { + s.NextToken = &v + return s +} + +type DescribeInstancePatchesOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. 
+ NextToken *string `type:"string"` + + // Each entry in the array is a structure containing: + // + // Title (string) + // + // KBId (string) + // + // Classification (string) + // + // Severity (string) + // + // State (string: "INSTALLED", "INSTALLED OTHER", "MISSING", "NOT APPLICABLE", + // "FAILED") + // + // InstalledTime (DateTime) + // + // InstalledBy (string) + Patches []*PatchComplianceData `type:"list"` +} + +// String returns the string representation +func (s DescribeInstancePatchesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchesOutput) SetNextToken(v string) *DescribeInstancePatchesOutput { + s.NextToken = &v + return s +} + +// SetPatches sets the Patches field's value. +func (s *DescribeInstancePatchesOutput) SetPatches(v []*PatchComplianceData) *DescribeInstancePatchesOutput { + s.Patches = v + return s +} + +type DescribeInventoryDeletionsInput struct { + _ struct{} `type:"structure"` + + // Specify the delete inventory ID for which you want information. This ID was + // returned by the DeleteInventory action. + DeletionId *string `type:"string"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInventoryDeletionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInventoryDeletionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInventoryDeletionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInventoryDeletionsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeletionId sets the DeletionId field's value. +func (s *DescribeInventoryDeletionsInput) SetDeletionId(v string) *DescribeInventoryDeletionsInput { + s.DeletionId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInventoryDeletionsInput) SetMaxResults(v int64) *DescribeInventoryDeletionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInventoryDeletionsInput) SetNextToken(v string) *DescribeInventoryDeletionsInput { + s.NextToken = &v + return s +} + +type DescribeInventoryDeletionsOutput struct { + _ struct{} `type:"structure"` + + // A list of status items for deleted inventory. + InventoryDeletions []*InventoryDeletionStatusItem `type:"list"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. 
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInventoryDeletionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInventoryDeletionsOutput) GoString() string { + return s.String() +} + +// SetInventoryDeletions sets the InventoryDeletions field's value. +func (s *DescribeInventoryDeletionsOutput) SetInventoryDeletions(v []*InventoryDeletionStatusItem) *DescribeInventoryDeletionsOutput { + s.InventoryDeletions = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInventoryDeletionsOutput) SetNextToken(v string) *DescribeInventoryDeletionsOutput { + s.NextToken = &v + return s +} + +type DescribeMaintenanceWindowExecutionTaskInvocationsInput struct { + _ struct{} `type:"structure"` + + // Optional filters used to scope down the returned task invocations. The supported + // filter key is STATUS with the corresponding values PENDING, IN_PROGRESS, + // SUCCESS, FAILED, TIMED_OUT, CANCELLING, and CANCELLED. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The ID of the specific task in the Maintenance Window task that should be + // retrieved. + // + // TaskId is a required field + TaskId *string `min:"36" type:"string" required:"true"` + + // The ID of the Maintenance Window execution the task is part of. + // + // WindowExecutionId is a required field + WindowExecutionId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowExecutionTaskInvocationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowExecutionTaskInvocationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowExecutionTaskInvocationsInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 36)) + } + if s.WindowExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowExecutionId")) + } + if s.WindowExecutionId != nil && len(*s.WindowExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowExecutionId", 36)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. 
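+//
+// Illustrative note (editorial addition, not generated): the Validate method above
+// is also run by the SDK's request handlers before a call is sent, so constraint
+// violations such as a too-short TaskId surface as client-side errors. A minimal
+// sketch, using only the setters and Validate defined for this type:
+//
+//    input := (&ssm.DescribeMaintenanceWindowExecutionTaskInvocationsInput{}).
+//        SetWindowExecutionId("12345678-1234-1234-1234-123456789012"). // placeholder, 36 chars
+//        SetTaskId("too-short")
+//    if err := input.Validate(); err != nil {
+//        // err is a request.ErrInvalidParams reporting TaskId's minimum length of 36
+//    }
+//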
+func (s *DescribeMaintenanceWindowExecutionTaskInvocationsInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowExecutionTaskInvocationsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsInput) SetMaxResults(v int64) *DescribeMaintenanceWindowExecutionTaskInvocationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsInput) SetNextToken(v string) *DescribeMaintenanceWindowExecutionTaskInvocationsInput { + s.NextToken = &v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsInput) SetTaskId(v string) *DescribeMaintenanceWindowExecutionTaskInvocationsInput { + s.TaskId = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsInput) SetWindowExecutionId(v string) *DescribeMaintenanceWindowExecutionTaskInvocationsInput { + s.WindowExecutionId = &v + return s +} + +type DescribeMaintenanceWindowExecutionTaskInvocationsOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Information about the task invocation results per invocation. + WindowExecutionTaskInvocationIdentities []*MaintenanceWindowExecutionTaskInvocationIdentity `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowExecutionTaskInvocationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowExecutionTaskInvocationsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsOutput) SetNextToken(v string) *DescribeMaintenanceWindowExecutionTaskInvocationsOutput { + s.NextToken = &v + return s +} + +// SetWindowExecutionTaskInvocationIdentities sets the WindowExecutionTaskInvocationIdentities field's value. +func (s *DescribeMaintenanceWindowExecutionTaskInvocationsOutput) SetWindowExecutionTaskInvocationIdentities(v []*MaintenanceWindowExecutionTaskInvocationIdentity) *DescribeMaintenanceWindowExecutionTaskInvocationsOutput { + s.WindowExecutionTaskInvocationIdentities = v + return s +} + +type DescribeMaintenanceWindowExecutionTasksInput struct { + _ struct{} `type:"structure"` + + // Optional filters used to scope down the returned tasks. The supported filter + // key is STATUS with the corresponding values PENDING, IN_PROGRESS, SUCCESS, + // FAILED, TIMED_OUT, CANCELLING, and CANCELLED. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The ID of the Maintenance Window execution whose task executions should be + // retrieved. 
+ // + // WindowExecutionId is a required field + WindowExecutionId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowExecutionTasksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowExecutionTasksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowExecutionTasksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowExecutionTasksInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.WindowExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowExecutionId")) + } + if s.WindowExecutionId != nil && len(*s.WindowExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowExecutionId", 36)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeMaintenanceWindowExecutionTasksInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowExecutionTasksInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowExecutionTasksInput) SetMaxResults(v int64) *DescribeMaintenanceWindowExecutionTasksInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowExecutionTasksInput) SetNextToken(v string) *DescribeMaintenanceWindowExecutionTasksInput { + s.NextToken = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *DescribeMaintenanceWindowExecutionTasksInput) SetWindowExecutionId(v string) *DescribeMaintenanceWindowExecutionTasksInput { + s.WindowExecutionId = &v + return s +} + +type DescribeMaintenanceWindowExecutionTasksOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Information about the task executions. + WindowExecutionTaskIdentities []*MaintenanceWindowExecutionTaskIdentity `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowExecutionTasksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowExecutionTasksOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowExecutionTasksOutput) SetNextToken(v string) *DescribeMaintenanceWindowExecutionTasksOutput { + s.NextToken = &v + return s +} + +// SetWindowExecutionTaskIdentities sets the WindowExecutionTaskIdentities field's value. 
+func (s *DescribeMaintenanceWindowExecutionTasksOutput) SetWindowExecutionTaskIdentities(v []*MaintenanceWindowExecutionTaskIdentity) *DescribeMaintenanceWindowExecutionTasksOutput { + s.WindowExecutionTaskIdentities = v + return s +} + +type DescribeMaintenanceWindowExecutionsInput struct { + _ struct{} `type:"structure"` + + // Each entry in the array is a structure containing: + // + // Key (string, between 1 and 128 characters) + // + // Values (array of strings, each string is between 1 and 256 characters) + // + // The supported Keys are ExecutedBefore and ExecutedAfter with the value being + // a date/time string such as 2016-11-04T05:00:00Z. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The ID of the Maintenance Window whose executions should be retrieved. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowExecutionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowExecutionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowExecutionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowExecutionsInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeMaintenanceWindowExecutionsInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowExecutionsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowExecutionsInput) SetMaxResults(v int64) *DescribeMaintenanceWindowExecutionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowExecutionsInput) SetNextToken(v string) *DescribeMaintenanceWindowExecutionsInput { + s.NextToken = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *DescribeMaintenanceWindowExecutionsInput) SetWindowId(v string) *DescribeMaintenanceWindowExecutionsInput { + s.WindowId = &v + return s +} + +type DescribeMaintenanceWindowExecutionsOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. 
+ NextToken *string `type:"string"` + + // Information about the Maintenance Windows execution. + WindowExecutions []*MaintenanceWindowExecution `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowExecutionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowExecutionsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowExecutionsOutput) SetNextToken(v string) *DescribeMaintenanceWindowExecutionsOutput { + s.NextToken = &v + return s +} + +// SetWindowExecutions sets the WindowExecutions field's value. +func (s *DescribeMaintenanceWindowExecutionsOutput) SetWindowExecutions(v []*MaintenanceWindowExecution) *DescribeMaintenanceWindowExecutionsOutput { + s.WindowExecutions = v + return s +} + +type DescribeMaintenanceWindowTargetsInput struct { + _ struct{} `type:"structure"` + + // Optional filters that can be used to narrow down the scope of the returned + // window targets. The supported filter keys are Type, WindowTargetId and OwnerInformation. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The ID of the Maintenance Window whose targets should be retrieved. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowTargetsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowTargetsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowTargetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowTargetsInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeMaintenanceWindowTargetsInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowTargetsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowTargetsInput) SetMaxResults(v int64) *DescribeMaintenanceWindowTargetsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
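+//
+// Illustrative note (editorial addition, not generated): because every setter in this
+// file returns its receiver, request inputs can be built fluently. A minimal sketch
+// for this type, assuming the usual generated DescribeMaintenanceWindowTargets client
+// method and an *ssm.SSM client named svc:
+//
+//    input := (&ssm.DescribeMaintenanceWindowTargetsInput{}).
+//        SetWindowId("mw-0123456789abcdef0"). // placeholder; real window IDs are at least 20 chars
+//        SetMaxResults(10)
+//    out, err := svc.DescribeMaintenanceWindowTargets(input)
+//    // ... on success, use out.Targets and out.NextToken as with the other Describe* calls ...
+//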
+func (s *DescribeMaintenanceWindowTargetsInput) SetNextToken(v string) *DescribeMaintenanceWindowTargetsInput { + s.NextToken = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *DescribeMaintenanceWindowTargetsInput) SetWindowId(v string) *DescribeMaintenanceWindowTargetsInput { + s.WindowId = &v + return s +} + +type DescribeMaintenanceWindowTargetsOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Information about the targets in the Maintenance Window. + Targets []*MaintenanceWindowTarget `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowTargetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowTargetsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowTargetsOutput) SetNextToken(v string) *DescribeMaintenanceWindowTargetsOutput { + s.NextToken = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *DescribeMaintenanceWindowTargetsOutput) SetTargets(v []*MaintenanceWindowTarget) *DescribeMaintenanceWindowTargetsOutput { + s.Targets = v + return s +} + +type DescribeMaintenanceWindowTasksInput struct { + _ struct{} `type:"structure"` + + // Optional filters used to narrow down the scope of the returned tasks. The + // supported filter keys are WindowTaskId, TaskArn, Priority, and TaskType. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The ID of the Maintenance Window whose tasks should be retrieved. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowTasksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowTasksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowTasksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowTasksInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. 
+func (s *DescribeMaintenanceWindowTasksInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowTasksInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetMaxResults(v int64) *DescribeMaintenanceWindowTasksInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetNextToken(v string) *DescribeMaintenanceWindowTasksInput { + s.NextToken = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetWindowId(v string) *DescribeMaintenanceWindowTasksInput { + s.WindowId = &v + return s +} + +type DescribeMaintenanceWindowTasksOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Information about the tasks in the Maintenance Window. + Tasks []*MaintenanceWindowTask `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowTasksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowTasksOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowTasksOutput) SetNextToken(v string) *DescribeMaintenanceWindowTasksOutput { + s.NextToken = &v + return s +} + +// SetTasks sets the Tasks field's value. +func (s *DescribeMaintenanceWindowTasksOutput) SetTasks(v []*MaintenanceWindowTask) *DescribeMaintenanceWindowTasksOutput { + s.Tasks = v + return s +} + +type DescribeMaintenanceWindowsInput struct { + _ struct{} `type:"structure"` + + // Optional filters used to narrow down the scope of the returned Maintenance + // Windows. Supported filter keys are Name and Enabled. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowsInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. 
+func (s *DescribeMaintenanceWindowsInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowsInput) SetMaxResults(v int64) *DescribeMaintenanceWindowsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowsInput) SetNextToken(v string) *DescribeMaintenanceWindowsInput { + s.NextToken = &v + return s +} + +type DescribeMaintenanceWindowsOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Information about the Maintenance Windows. + WindowIdentities []*MaintenanceWindowIdentity `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowsOutput) SetNextToken(v string) *DescribeMaintenanceWindowsOutput { + s.NextToken = &v + return s +} + +// SetWindowIdentities sets the WindowIdentities field's value. +func (s *DescribeMaintenanceWindowsOutput) SetWindowIdentities(v []*MaintenanceWindowIdentity) *DescribeMaintenanceWindowsOutput { + s.WindowIdentities = v + return s +} + +type DescribeParametersInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of results. + Filters []*ParametersFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // Filters to limit the request results. + ParameterFilters []*ParameterStringFilter `type:"list"` +} + +// String returns the string representation +func (s DescribeParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeParametersInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ParameterFilters != nil { + for i, v := range s.ParameterFilters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterFilters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. 
+func (s *DescribeParametersInput) SetFilters(v []*ParametersFilter) *DescribeParametersInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeParametersInput) SetMaxResults(v int64) *DescribeParametersInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeParametersInput) SetNextToken(v string) *DescribeParametersInput { + s.NextToken = &v + return s +} + +// SetParameterFilters sets the ParameterFilters field's value. +func (s *DescribeParametersInput) SetParameterFilters(v []*ParameterStringFilter) *DescribeParametersInput { + s.ParameterFilters = v + return s +} + +type DescribeParametersOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Parameters returned by the request. + Parameters []*ParameterMetadata `type:"list"` +} + +// String returns the string representation +func (s DescribeParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeParametersOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeParametersOutput) SetNextToken(v string) *DescribeParametersOutput { + s.NextToken = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *DescribeParametersOutput) SetParameters(v []*ParameterMetadata) *DescribeParametersOutput { + s.Parameters = v + return s +} + +type DescribePatchBaselinesInput struct { + _ struct{} `type:"structure"` + + // Each element in the array is a structure containing: + // + // Key: (string, "NAME_PREFIX" or "OWNER") + // + // Value: (array of strings, exactly 1 entry, between 1 and 255 characters) + Filters []*PatchOrchestratorFilter `type:"list"` + + // The maximum number of patch baselines to return (per page). + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribePatchBaselinesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePatchBaselinesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePatchBaselinesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePatchBaselinesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribePatchBaselinesInput) SetFilters(v []*PatchOrchestratorFilter) *DescribePatchBaselinesInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
+func (s *DescribePatchBaselinesInput) SetMaxResults(v int64) *DescribePatchBaselinesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribePatchBaselinesInput) SetNextToken(v string) *DescribePatchBaselinesInput { + s.NextToken = &v + return s +} + +type DescribePatchBaselinesOutput struct { + _ struct{} `type:"structure"` + + // An array of PatchBaselineIdentity elements. + BaselineIdentities []*PatchBaselineIdentity `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribePatchBaselinesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePatchBaselinesOutput) GoString() string { + return s.String() +} + +// SetBaselineIdentities sets the BaselineIdentities field's value. +func (s *DescribePatchBaselinesOutput) SetBaselineIdentities(v []*PatchBaselineIdentity) *DescribePatchBaselinesOutput { + s.BaselineIdentities = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribePatchBaselinesOutput) SetNextToken(v string) *DescribePatchBaselinesOutput { + s.NextToken = &v + return s +} + +type DescribePatchGroupStateInput struct { + _ struct{} `type:"structure"` + + // The name of the patch group whose patch snapshot should be retrieved. + // + // PatchGroup is a required field + PatchGroup *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribePatchGroupStateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePatchGroupStateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePatchGroupStateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePatchGroupStateInput"} + if s.PatchGroup == nil { + invalidParams.Add(request.NewErrParamRequired("PatchGroup")) + } + if s.PatchGroup != nil && len(*s.PatchGroup) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PatchGroup", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *DescribePatchGroupStateInput) SetPatchGroup(v string) *DescribePatchGroupStateInput { + s.PatchGroup = &v + return s +} + +type DescribePatchGroupStateOutput struct { + _ struct{} `type:"structure"` + + // The number of instances in the patch group. + Instances *int64 `type:"integer"` + + // The number of instances with patches from the patch baseline that failed + // to install. + InstancesWithFailedPatches *int64 `type:"integer"` + + // The number of instances with patches installed that aren't defined in the + // patch baseline. + InstancesWithInstalledOtherPatches *int64 `type:"integer"` + + // The number of instances with installed patches. + InstancesWithInstalledPatches *int64 `type:"integer"` + + // The number of instances with missing patches from the patch baseline. + InstancesWithMissingPatches *int64 `type:"integer"` + + // The number of instances with patches that aren't applicable. 
+ InstancesWithNotApplicablePatches *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribePatchGroupStateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePatchGroupStateOutput) GoString() string { + return s.String() +} + +// SetInstances sets the Instances field's value. +func (s *DescribePatchGroupStateOutput) SetInstances(v int64) *DescribePatchGroupStateOutput { + s.Instances = &v + return s +} + +// SetInstancesWithFailedPatches sets the InstancesWithFailedPatches field's value. +func (s *DescribePatchGroupStateOutput) SetInstancesWithFailedPatches(v int64) *DescribePatchGroupStateOutput { + s.InstancesWithFailedPatches = &v + return s +} + +// SetInstancesWithInstalledOtherPatches sets the InstancesWithInstalledOtherPatches field's value. +func (s *DescribePatchGroupStateOutput) SetInstancesWithInstalledOtherPatches(v int64) *DescribePatchGroupStateOutput { + s.InstancesWithInstalledOtherPatches = &v + return s +} + +// SetInstancesWithInstalledPatches sets the InstancesWithInstalledPatches field's value. +func (s *DescribePatchGroupStateOutput) SetInstancesWithInstalledPatches(v int64) *DescribePatchGroupStateOutput { + s.InstancesWithInstalledPatches = &v + return s +} + +// SetInstancesWithMissingPatches sets the InstancesWithMissingPatches field's value. +func (s *DescribePatchGroupStateOutput) SetInstancesWithMissingPatches(v int64) *DescribePatchGroupStateOutput { + s.InstancesWithMissingPatches = &v + return s +} + +// SetInstancesWithNotApplicablePatches sets the InstancesWithNotApplicablePatches field's value. +func (s *DescribePatchGroupStateOutput) SetInstancesWithNotApplicablePatches(v int64) *DescribePatchGroupStateOutput { + s.InstancesWithNotApplicablePatches = &v + return s +} + +type DescribePatchGroupsInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of results. + Filters []*PatchOrchestratorFilter `type:"list"` + + // The maximum number of patch groups to return (per page). + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribePatchGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePatchGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePatchGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePatchGroupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribePatchGroupsInput) SetFilters(v []*PatchOrchestratorFilter) *DescribePatchGroupsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
+func (s *DescribePatchGroupsInput) SetMaxResults(v int64) *DescribePatchGroupsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribePatchGroupsInput) SetNextToken(v string) *DescribePatchGroupsInput { + s.NextToken = &v + return s +} + +type DescribePatchGroupsOutput struct { + _ struct{} `type:"structure"` + + // Each entry in the array contains: + // + // PatchGroup: string (between 1 and 256 characters, Regex: ^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$) + // + // PatchBaselineIdentity: A PatchBaselineIdentity element. + Mappings []*PatchGroupPatchBaselineMapping `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribePatchGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePatchGroupsOutput) GoString() string { + return s.String() +} + +// SetMappings sets the Mappings field's value. +func (s *DescribePatchGroupsOutput) SetMappings(v []*PatchGroupPatchBaselineMapping) *DescribePatchGroupsOutput { + s.Mappings = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribePatchGroupsOutput) SetNextToken(v string) *DescribePatchGroupsOutput { + s.NextToken = &v + return s +} + +// A default version of a document. +type DocumentDefaultVersionDescription struct { + _ struct{} `type:"structure"` + + // The default version of the document. + DefaultVersion *string `type:"string"` + + // The name of the document. + Name *string `type:"string"` +} + +// String returns the string representation +func (s DocumentDefaultVersionDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentDefaultVersionDescription) GoString() string { + return s.String() +} + +// SetDefaultVersion sets the DefaultVersion field's value. +func (s *DocumentDefaultVersionDescription) SetDefaultVersion(v string) *DocumentDefaultVersionDescription { + s.DefaultVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *DocumentDefaultVersionDescription) SetName(v string) *DocumentDefaultVersionDescription { + s.Name = &v + return s +} + +// Describes a Systems Manager document. +type DocumentDescription struct { + _ struct{} `type:"structure"` + + // The date when the document was created. + CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The default version. + DefaultVersion *string `type:"string"` + + // A description of the document. + Description *string `type:"string"` + + // The document format, either JSON or YAML. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The type of document. + DocumentType *string `type:"string" enum:"DocumentType"` + + // The document version. + DocumentVersion *string `type:"string"` + + // The Sha256 or Sha1 hash created by the system when the document was created. + // + // Sha1 hashes have been deprecated. + Hash *string `type:"string"` + + // Sha256 or Sha1. + // + // Sha1 hashes have been deprecated. + HashType *string `type:"string" enum:"DocumentHashType"` + + // The latest version of the document. + LatestVersion *string `type:"string"` + + // The name of the Systems Manager document. + Name *string `type:"string"` + + // The AWS user account that created the document. 
+ Owner *string `type:"string"` + + // A description of the parameters for a document. + Parameters []*DocumentParameter `type:"list"` + + // The list of OS platforms compatible with this Systems Manager document. + PlatformTypes []*string `type:"list"` + + // The schema version. + SchemaVersion *string `type:"string"` + + // The SHA1 hash of the document, which you can use for verification. + Sha1 *string `type:"string"` + + // The status of the Systems Manager document. + Status *string `type:"string" enum:"DocumentStatus"` + + // The tags, or metadata, that have been applied to the document. + Tags []*Tag `type:"list"` + + // The target type which defines the kinds of resources the document can run + // on. For example, /AWS::EC2::Instance. For a list of valid resource types, + // see AWS Resource Types Reference (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide. + TargetType *string `type:"string"` +} + +// String returns the string representation +func (s DocumentDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentDescription) GoString() string { + return s.String() +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *DocumentDescription) SetCreatedDate(v time.Time) *DocumentDescription { + s.CreatedDate = &v + return s +} + +// SetDefaultVersion sets the DefaultVersion field's value. +func (s *DocumentDescription) SetDefaultVersion(v string) *DocumentDescription { + s.DefaultVersion = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *DocumentDescription) SetDescription(v string) *DocumentDescription { + s.Description = &v + return s +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *DocumentDescription) SetDocumentFormat(v string) *DocumentDescription { + s.DocumentFormat = &v + return s +} + +// SetDocumentType sets the DocumentType field's value. +func (s *DocumentDescription) SetDocumentType(v string) *DocumentDescription { + s.DocumentType = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *DocumentDescription) SetDocumentVersion(v string) *DocumentDescription { + s.DocumentVersion = &v + return s +} + +// SetHash sets the Hash field's value. +func (s *DocumentDescription) SetHash(v string) *DocumentDescription { + s.Hash = &v + return s +} + +// SetHashType sets the HashType field's value. +func (s *DocumentDescription) SetHashType(v string) *DocumentDescription { + s.HashType = &v + return s +} + +// SetLatestVersion sets the LatestVersion field's value. +func (s *DocumentDescription) SetLatestVersion(v string) *DocumentDescription { + s.LatestVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *DocumentDescription) SetName(v string) *DocumentDescription { + s.Name = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *DocumentDescription) SetOwner(v string) *DocumentDescription { + s.Owner = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *DocumentDescription) SetParameters(v []*DocumentParameter) *DocumentDescription { + s.Parameters = v + return s +} + +// SetPlatformTypes sets the PlatformTypes field's value. +func (s *DocumentDescription) SetPlatformTypes(v []*string) *DocumentDescription { + s.PlatformTypes = v + return s +} + +// SetSchemaVersion sets the SchemaVersion field's value. 
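// Illustrative sketch, not part of the vendored SDK source: reading the DocumentDescription
// fields defined above for a public document. It assumes the DescribeDocument operation and
// its input/output types, defined elsewhere in this package, return the description in a
// Document field; the document name is only an example, and the DocumentParameter fields
// read at the end are defined further below in this file.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.DescribeDocument(&ssm.DescribeDocumentInput{
		Name: aws.String("AWS-RunShellScript"),
	})
	if err != nil {
		log.Fatal(err)
	}

	doc := out.Document // *ssm.DocumentDescription
	fmt.Println("name:           ", aws.StringValue(doc.Name))
	fmt.Println("type:           ", aws.StringValue(doc.DocumentType))
	fmt.Println("default version:", aws.StringValue(doc.DefaultVersion))
	fmt.Println("platforms:      ", aws.StringValueSlice(doc.PlatformTypes))

	// Per the DocumentParameter comment below, parameters without a DefaultValue are required.
	for _, p := range doc.Parameters {
		if p.DefaultValue == nil {
			fmt.Println("required parameter:", aws.StringValue(p.Name))
		}
	}
}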
+func (s *DocumentDescription) SetSchemaVersion(v string) *DocumentDescription { + s.SchemaVersion = &v + return s +} + +// SetSha1 sets the Sha1 field's value. +func (s *DocumentDescription) SetSha1(v string) *DocumentDescription { + s.Sha1 = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DocumentDescription) SetStatus(v string) *DocumentDescription { + s.Status = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *DocumentDescription) SetTags(v []*Tag) *DocumentDescription { + s.Tags = v + return s +} + +// SetTargetType sets the TargetType field's value. +func (s *DocumentDescription) SetTargetType(v string) *DocumentDescription { + s.TargetType = &v + return s +} + +// Describes a filter. +type DocumentFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true" enum:"DocumentFilterKey"` + + // The value of the filter. + // + // Value is a required field + Value *string `locationName:"value" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DocumentFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DocumentFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DocumentFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *DocumentFilter) SetKey(v string) *DocumentFilter { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *DocumentFilter) SetValue(v string) *DocumentFilter { + s.Value = &v + return s +} + +// Describes the name of a Systems Manager document. +type DocumentIdentifier struct { + _ struct{} `type:"structure"` + + // The document format, either JSON or YAML. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The document type. + DocumentType *string `type:"string" enum:"DocumentType"` + + // The document version. + DocumentVersion *string `type:"string"` + + // The name of the Systems Manager document. + Name *string `type:"string"` + + // The AWS user account that created the document. + Owner *string `type:"string"` + + // The operating system platform. + PlatformTypes []*string `type:"list"` + + // The schema version. + SchemaVersion *string `type:"string"` + + // The tags, or metadata, that have been applied to the document. + Tags []*Tag `type:"list"` + + // The target type which defines the kinds of resources the document can run + // on. For example, /AWS::EC2::Instance. For a list of valid resource types, + // see AWS Resource Types Reference (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) + // in the AWS CloudFormation User Guide. 
+ TargetType *string `type:"string"` +} + +// String returns the string representation +func (s DocumentIdentifier) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentIdentifier) GoString() string { + return s.String() +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *DocumentIdentifier) SetDocumentFormat(v string) *DocumentIdentifier { + s.DocumentFormat = &v + return s +} + +// SetDocumentType sets the DocumentType field's value. +func (s *DocumentIdentifier) SetDocumentType(v string) *DocumentIdentifier { + s.DocumentType = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *DocumentIdentifier) SetDocumentVersion(v string) *DocumentIdentifier { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *DocumentIdentifier) SetName(v string) *DocumentIdentifier { + s.Name = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *DocumentIdentifier) SetOwner(v string) *DocumentIdentifier { + s.Owner = &v + return s +} + +// SetPlatformTypes sets the PlatformTypes field's value. +func (s *DocumentIdentifier) SetPlatformTypes(v []*string) *DocumentIdentifier { + s.PlatformTypes = v + return s +} + +// SetSchemaVersion sets the SchemaVersion field's value. +func (s *DocumentIdentifier) SetSchemaVersion(v string) *DocumentIdentifier { + s.SchemaVersion = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *DocumentIdentifier) SetTags(v []*Tag) *DocumentIdentifier { + s.Tags = v + return s +} + +// SetTargetType sets the TargetType field's value. +func (s *DocumentIdentifier) SetTargetType(v string) *DocumentIdentifier { + s.TargetType = &v + return s +} + +// One or more filters. Use a filter to return a more specific list of documents. +// +// For keys, you can specify one or more tags that have been applied to a document. +// +// Other valid values include Owner, Name, PlatformTypes, and DocumentType. +// +// Note that only one Owner can be specified in a request. For example: Key=Owner,Values=Self. +// +// If you use Name as a key, you can use a name prefix to return a list of documents. +// For example, in the AWS CLI, to return a list of all documents that begin +// with Te, run the following command: +// +// aws ssm list-documents --filters Key=Name,Values=Te +// +// If you specify more than two keys, only documents that are identified by +// all the tags are returned in the results. If you specify more than two values +// for a key, documents that are identified by any of the values are returned +// in the results. +// +// To specify a custom key and value pair, use the format Key=tag:[tagName],Values=[valueName]. +// +// For example, if you created a Key called region and are using the AWS CLI +// to call the list-documents command: +// +// aws ssm list-documents --filters Key=tag:region,Values=east,west Key=Owner,Values=Self +type DocumentKeyValuesFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter key. + Key *string `min:"1" type:"string"` + + // The value for the filter key. + Values []*string `type:"list"` +} + +// String returns the string representation +func (s DocumentKeyValuesFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentKeyValuesFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
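// Illustrative sketch, not part of the vendored SDK source: the Go analog of the CLI
// example in the DocumentKeyValuesFilter comment above,
//
//   aws ssm list-documents --filters Key=tag:region,Values=east,west Key=Owner,Values=Self
//
// It assumes the ListDocuments operation, the Filters field on ListDocumentsInput, and the
// DocumentIdentifiers output field, all defined elsewhere in this package; the tag key
// "region" is only an example.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.ListDocuments(&ssm.ListDocumentsInput{
		Filters: []*ssm.DocumentKeyValuesFilter{
			{
				Key:    aws.String("tag:region"),
				Values: aws.StringSlice([]string{"east", "west"}),
			},
			{
				Key:    aws.String("Owner"),
				Values: aws.StringSlice([]string{"Self"}),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range out.DocumentIdentifiers {
		fmt.Println(aws.StringValue(id.Name), aws.StringValue(id.DocumentType))
	}
}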
+func (s *DocumentKeyValuesFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DocumentKeyValuesFilter"} + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *DocumentKeyValuesFilter) SetKey(v string) *DocumentKeyValuesFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *DocumentKeyValuesFilter) SetValues(v []*string) *DocumentKeyValuesFilter { + s.Values = v + return s +} + +// Parameters specified in a System Manager document that execute on the server +// when the command is run. +type DocumentParameter struct { + _ struct{} `type:"structure"` + + // If specified, the default values for the parameters. Parameters without a + // default value are required. Parameters with a default value are optional. + DefaultValue *string `type:"string"` + + // A description of what the parameter does, how to use it, the default value, + // and whether or not the parameter is optional. + Description *string `type:"string"` + + // The name of the parameter. + Name *string `type:"string"` + + // The type of parameter. The type can be either String or StringList. + Type *string `type:"string" enum:"DocumentParameterType"` +} + +// String returns the string representation +func (s DocumentParameter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentParameter) GoString() string { + return s.String() +} + +// SetDefaultValue sets the DefaultValue field's value. +func (s *DocumentParameter) SetDefaultValue(v string) *DocumentParameter { + s.DefaultValue = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *DocumentParameter) SetDescription(v string) *DocumentParameter { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *DocumentParameter) SetName(v string) *DocumentParameter { + s.Name = &v + return s +} + +// SetType sets the Type field's value. +func (s *DocumentParameter) SetType(v string) *DocumentParameter { + s.Type = &v + return s +} + +// Version information about the document. +type DocumentVersionInfo struct { + _ struct{} `type:"structure"` + + // The date the document was created. + CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The document format, either JSON or YAML. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The document version. + DocumentVersion *string `type:"string"` + + // An identifier for the default version of the document. + IsDefaultVersion *bool `type:"boolean"` + + // The document name. + Name *string `type:"string"` +} + +// String returns the string representation +func (s DocumentVersionInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DocumentVersionInfo) GoString() string { + return s.String() +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *DocumentVersionInfo) SetCreatedDate(v time.Time) *DocumentVersionInfo { + s.CreatedDate = &v + return s +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *DocumentVersionInfo) SetDocumentFormat(v string) *DocumentVersionInfo { + s.DocumentFormat = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. 
+func (s *DocumentVersionInfo) SetDocumentVersion(v string) *DocumentVersionInfo { + s.DocumentVersion = &v + return s +} + +// SetIsDefaultVersion sets the IsDefaultVersion field's value. +func (s *DocumentVersionInfo) SetIsDefaultVersion(v bool) *DocumentVersionInfo { + s.IsDefaultVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *DocumentVersionInfo) SetName(v string) *DocumentVersionInfo { + s.Name = &v + return s +} + +// The EffectivePatch structure defines metadata about a patch along with the +// approval state of the patch in a particular patch baseline. The approval +// state includes information about whether the patch is currently approved, +// due to be approved by a rule, explicitly approved, or explicitly rejected +// and the date the patch was or will be approved. +type EffectivePatch struct { + _ struct{} `type:"structure"` + + // Provides metadata for a patch, including information such as the KB ID, severity, + // classification and a URL for where more information can be obtained about + // the patch. + Patch *Patch `type:"structure"` + + // The status of the patch in a patch baseline. This includes information about + // whether the patch is currently approved, due to be approved by a rule, explicitly + // approved, or explicitly rejected and the date the patch was or will be approved. + PatchStatus *PatchStatus `type:"structure"` +} + +// String returns the string representation +func (s EffectivePatch) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EffectivePatch) GoString() string { + return s.String() +} + +// SetPatch sets the Patch field's value. +func (s *EffectivePatch) SetPatch(v *Patch) *EffectivePatch { + s.Patch = v + return s +} + +// SetPatchStatus sets the PatchStatus field's value. +func (s *EffectivePatch) SetPatchStatus(v *PatchStatus) *EffectivePatch { + s.PatchStatus = v + return s +} + +// Describes a failed association. +type FailedCreateAssociation struct { + _ struct{} `type:"structure"` + + // The association. + Entry *CreateAssociationBatchRequestEntry `type:"structure"` + + // The source of the failure. + Fault *string `type:"string" enum:"Fault"` + + // A description of the failure. + Message *string `type:"string"` +} + +// String returns the string representation +func (s FailedCreateAssociation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailedCreateAssociation) GoString() string { + return s.String() +} + +// SetEntry sets the Entry field's value. +func (s *FailedCreateAssociation) SetEntry(v *CreateAssociationBatchRequestEntry) *FailedCreateAssociation { + s.Entry = v + return s +} + +// SetFault sets the Fault field's value. +func (s *FailedCreateAssociation) SetFault(v string) *FailedCreateAssociation { + s.Fault = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *FailedCreateAssociation) SetMessage(v string) *FailedCreateAssociation { + s.Message = &v + return s +} + +// Information about an Automation failure. +type FailureDetails struct { + _ struct{} `type:"structure"` + + // Detailed information about the Automation step failure. + Details map[string][]*string `min:"1" type:"map"` + + // The stage of the Automation execution when the failure occurred. The stages + // include the following: InputValidation, PreVerification, Invocation, PostVerification. + FailureStage *string `type:"string"` + + // The type of Automation failure. 
Failure types include the following: Action, + // Permission, Throttling, Verification, Internal. + FailureType *string `type:"string"` +} + +// String returns the string representation +func (s FailureDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailureDetails) GoString() string { + return s.String() +} + +// SetDetails sets the Details field's value. +func (s *FailureDetails) SetDetails(v map[string][]*string) *FailureDetails { + s.Details = v + return s +} + +// SetFailureStage sets the FailureStage field's value. +func (s *FailureDetails) SetFailureStage(v string) *FailureDetails { + s.FailureStage = &v + return s +} + +// SetFailureType sets the FailureType field's value. +func (s *FailureDetails) SetFailureType(v string) *FailureDetails { + s.FailureType = &v + return s +} + +type GetAutomationExecutionInput struct { + _ struct{} `type:"structure"` + + // The unique identifier for an existing automation execution to examine. The + // execution ID is returned by StartAutomationExecution when the execution of + // an Automation document is initiated. + // + // AutomationExecutionId is a required field + AutomationExecutionId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetAutomationExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAutomationExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetAutomationExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAutomationExecutionInput"} + if s.AutomationExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("AutomationExecutionId")) + } + if s.AutomationExecutionId != nil && len(*s.AutomationExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("AutomationExecutionId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutomationExecutionId sets the AutomationExecutionId field's value. +func (s *GetAutomationExecutionInput) SetAutomationExecutionId(v string) *GetAutomationExecutionInput { + s.AutomationExecutionId = &v + return s +} + +type GetAutomationExecutionOutput struct { + _ struct{} `type:"structure"` + + // Detailed information about the current state of an automation execution. + AutomationExecution *AutomationExecution `type:"structure"` +} + +// String returns the string representation +func (s GetAutomationExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAutomationExecutionOutput) GoString() string { + return s.String() +} + +// SetAutomationExecution sets the AutomationExecution field's value. +func (s *GetAutomationExecutionOutput) SetAutomationExecution(v *AutomationExecution) *GetAutomationExecutionOutput { + s.AutomationExecution = v + return s +} + +type GetCommandInvocationInput struct { + _ struct{} `type:"structure"` + + // (Required) The parent command ID of the invocation plugin. + // + // CommandId is a required field + CommandId *string `min:"36" type:"string" required:"true"` + + // (Required) The ID of the managed instance targeted by the command. A managed + // instance can be an Amazon EC2 instance or an instance in your hybrid environment + // that is configured for Systems Manager. 
+ // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // (Optional) The name of the plugin for which you want detailed results. If + // the document contains only one plugin, the name can be omitted and the details + // will be returned. + PluginName *string `min:"4" type:"string"` +} + +// String returns the string representation +func (s GetCommandInvocationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCommandInvocationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCommandInvocationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCommandInvocationInput"} + if s.CommandId == nil { + invalidParams.Add(request.NewErrParamRequired("CommandId")) + } + if s.CommandId != nil && len(*s.CommandId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("CommandId", 36)) + } + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.PluginName != nil && len(*s.PluginName) < 4 { + invalidParams.Add(request.NewErrParamMinLen("PluginName", 4)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCommandId sets the CommandId field's value. +func (s *GetCommandInvocationInput) SetCommandId(v string) *GetCommandInvocationInput { + s.CommandId = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *GetCommandInvocationInput) SetInstanceId(v string) *GetCommandInvocationInput { + s.InstanceId = &v + return s +} + +// SetPluginName sets the PluginName field's value. +func (s *GetCommandInvocationInput) SetPluginName(v string) *GetCommandInvocationInput { + s.PluginName = &v + return s +} + +type GetCommandInvocationOutput struct { + _ struct{} `type:"structure"` + + // The parent command ID of the invocation plugin. + CommandId *string `min:"36" type:"string"` + + // The comment text for the command. + Comment *string `type:"string"` + + // The name of the document that was executed. For example, AWS-RunShellScript. + DocumentName *string `type:"string"` + + // The SSM document version used in the request. + DocumentVersion *string `type:"string"` + + // Duration since ExecutionStartDateTime. + ExecutionElapsedTime *string `type:"string"` + + // The date and time the plugin was finished executing. Date and time are written + // in ISO 8601 format. For example, June 7, 2017 is represented as 2017-06-7. + // The following sample AWS CLI command uses the InvokedAfter filter. + // + // aws ssm list-commands --filters key=InvokedAfter,value=2017-06-07T00:00:00Z + // + // If the plugin has not started to execute, the string is empty. + ExecutionEndDateTime *string `type:"string"` + + // The date and time the plugin started executing. Date and time are written + // in ISO 8601 format. For example, June 7, 2017 is represented as 2017-06-7. + // The following sample AWS CLI command uses the InvokedBefore filter. + // + // aws ssm list-commands --filters key=InvokedBefore,value=2017-06-07T00:00:00Z + // + // If the plugin has not started to execute, the string is empty. + ExecutionStartDateTime *string `type:"string"` + + // The ID of the managed instance targeted by the command. A managed instance + // can be an Amazon EC2 instance or an instance in your hybrid environment that + // is configured for Systems Manager. 
+ InstanceId *string `type:"string"` + + // The name of the plugin for which you want detailed results. For example, + // aws:RunShellScript is a plugin. + PluginName *string `min:"4" type:"string"` + + // The error level response code for the plugin script. If the response code + // is -1, then the command has not started executing on the instance, or it + // was not received by the instance. + ResponseCode *int64 `type:"integer"` + + // The first 8,000 characters written by the plugin to stderr. If the command + // has not finished executing, then this string is empty. + StandardErrorContent *string `type:"string"` + + // The URL for the complete text written by the plugin to stderr. If the command + // has not finished executing, then this string is empty. + StandardErrorUrl *string `type:"string"` + + // The first 24,000 characters written by the plugin to stdout. If the command + // has not finished executing, if ExecutionStatus is neither Succeeded nor Failed, + // then this string is empty. + StandardOutputContent *string `type:"string"` + + // The URL for the complete text written by the plugin to stdout in Amazon S3. + // If an Amazon S3 bucket was not specified, then this string is empty. + StandardOutputUrl *string `type:"string"` + + // The status of this invocation plugin. This status can be different than StatusDetails. + Status *string `type:"string" enum:"CommandInvocationStatus"` + + // A detailed status of the command execution for an invocation. StatusDetails + // includes more information than Status because it includes states resulting + // from error and concurrency control parameters. StatusDetails can show different + // results than Status. For more information about these statuses, see Run Command + // Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). + // StatusDetails can be one of the following values: + // + // * Pending: The command has not been sent to the instance. + // + // * In Progress: The command has been sent to the instance but has not reached + // a terminal state. + // + // * Delayed: The system attempted to send the command to the target, but + // the target was not available. The instance might not be available because + // of network issues, the instance was stopped, etc. The system will try + // to deliver the command again. + // + // * Success: The command or plugin was executed successfully. This is a + // terminal state. + // + // * Delivery Timed Out: The command was not delivered to the instance before + // the delivery timeout expired. Delivery timeouts do not count against the + // parent command's MaxErrors limit, but they do contribute to whether the + // parent command status is Success or Incomplete. This is a terminal state. + // + // * Execution Timed Out: The command started to execute on the instance, + // but the execution was not complete before the timeout expired. Execution + // timeouts count against the MaxErrors limit of the parent command. This + // is a terminal state. + // + // * Failed: The command wasn't executed successfully on the instance. For + // a plugin, this indicates that the result code was not zero. For a command + // invocation, this indicates that the result code for one or more plugins + // was not zero. Invocation failures count against the MaxErrors limit of + // the parent command. This is a terminal state. + // + // * Canceled: The command was terminated before it was completed. This is + // a terminal state. 
+ // + // * Undeliverable: The command can't be delivered to the instance. The instance + // might not exist or might not be responding. Undeliverable invocations + // don't count against the parent command's MaxErrors limit and don't contribute + // to whether the parent command status is Success or Incomplete. This is + // a terminal state. + // + // * Terminated: The parent command exceeded its MaxErrors limit and subsequent + // command invocations were canceled by the system. This is a terminal state. + StatusDetails *string `type:"string"` +} + +// String returns the string representation +func (s GetCommandInvocationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCommandInvocationOutput) GoString() string { + return s.String() +} + +// SetCommandId sets the CommandId field's value. +func (s *GetCommandInvocationOutput) SetCommandId(v string) *GetCommandInvocationOutput { + s.CommandId = &v + return s +} + +// SetComment sets the Comment field's value. +func (s *GetCommandInvocationOutput) SetComment(v string) *GetCommandInvocationOutput { + s.Comment = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *GetCommandInvocationOutput) SetDocumentName(v string) *GetCommandInvocationOutput { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *GetCommandInvocationOutput) SetDocumentVersion(v string) *GetCommandInvocationOutput { + s.DocumentVersion = &v + return s +} + +// SetExecutionElapsedTime sets the ExecutionElapsedTime field's value. +func (s *GetCommandInvocationOutput) SetExecutionElapsedTime(v string) *GetCommandInvocationOutput { + s.ExecutionElapsedTime = &v + return s +} + +// SetExecutionEndDateTime sets the ExecutionEndDateTime field's value. +func (s *GetCommandInvocationOutput) SetExecutionEndDateTime(v string) *GetCommandInvocationOutput { + s.ExecutionEndDateTime = &v + return s +} + +// SetExecutionStartDateTime sets the ExecutionStartDateTime field's value. +func (s *GetCommandInvocationOutput) SetExecutionStartDateTime(v string) *GetCommandInvocationOutput { + s.ExecutionStartDateTime = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *GetCommandInvocationOutput) SetInstanceId(v string) *GetCommandInvocationOutput { + s.InstanceId = &v + return s +} + +// SetPluginName sets the PluginName field's value. +func (s *GetCommandInvocationOutput) SetPluginName(v string) *GetCommandInvocationOutput { + s.PluginName = &v + return s +} + +// SetResponseCode sets the ResponseCode field's value. +func (s *GetCommandInvocationOutput) SetResponseCode(v int64) *GetCommandInvocationOutput { + s.ResponseCode = &v + return s +} + +// SetStandardErrorContent sets the StandardErrorContent field's value. +func (s *GetCommandInvocationOutput) SetStandardErrorContent(v string) *GetCommandInvocationOutput { + s.StandardErrorContent = &v + return s +} + +// SetStandardErrorUrl sets the StandardErrorUrl field's value. +func (s *GetCommandInvocationOutput) SetStandardErrorUrl(v string) *GetCommandInvocationOutput { + s.StandardErrorUrl = &v + return s +} + +// SetStandardOutputContent sets the StandardOutputContent field's value. +func (s *GetCommandInvocationOutput) SetStandardOutputContent(v string) *GetCommandInvocationOutput { + s.StandardOutputContent = &v + return s +} + +// SetStandardOutputUrl sets the StandardOutputUrl field's value. 
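// Illustrative sketch, not part of the vendored SDK source: polling a single plugin
// invocation with GetCommandInvocation and the Status/StatusDetails values documented
// above. The command and instance IDs are placeholders; the strings treated as terminal
// mirror the StatusDetails list in the GetCommandInvocationOutput comment.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	in := &ssm.GetCommandInvocationInput{
		CommandId:  aws.String("11111111-2222-3333-4444-555555555555"), // placeholder
		InstanceId: aws.String("i-0123456789abcdef0"),                  // placeholder
	}

	terminal := map[string]bool{
		"Success": true, "Delivery Timed Out": true, "Execution Timed Out": true,
		"Failed": true, "Canceled": true, "Undeliverable": true, "Terminated": true,
	}

	for {
		out, err := svc.GetCommandInvocation(in)
		if err != nil {
			log.Fatal(err)
		}
		if terminal[aws.StringValue(out.StatusDetails)] {
			fmt.Println("status:       ", aws.StringValue(out.Status))
			fmt.Println("response code:", aws.Int64Value(out.ResponseCode))
			fmt.Println(aws.StringValue(out.StandardOutputContent))
			return
		}
		time.Sleep(5 * time.Second)
	}
}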
+func (s *GetCommandInvocationOutput) SetStandardOutputUrl(v string) *GetCommandInvocationOutput { + s.StandardOutputUrl = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetCommandInvocationOutput) SetStatus(v string) *GetCommandInvocationOutput { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *GetCommandInvocationOutput) SetStatusDetails(v string) *GetCommandInvocationOutput { + s.StatusDetails = &v + return s +} + +type GetDefaultPatchBaselineInput struct { + _ struct{} `type:"structure"` + + // Returns the default patch baseline for the specified operating system. + OperatingSystem *string `type:"string" enum:"OperatingSystem"` +} + +// String returns the string representation +func (s GetDefaultPatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDefaultPatchBaselineInput) GoString() string { + return s.String() +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *GetDefaultPatchBaselineInput) SetOperatingSystem(v string) *GetDefaultPatchBaselineInput { + s.OperatingSystem = &v + return s +} + +type GetDefaultPatchBaselineOutput struct { + _ struct{} `type:"structure"` + + // The ID of the default patch baseline. + BaselineId *string `min:"20" type:"string"` + + // The operating system for the returned patch baseline. + OperatingSystem *string `type:"string" enum:"OperatingSystem"` +} + +// String returns the string representation +func (s GetDefaultPatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDefaultPatchBaselineOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *GetDefaultPatchBaselineOutput) SetBaselineId(v string) *GetDefaultPatchBaselineOutput { + s.BaselineId = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *GetDefaultPatchBaselineOutput) SetOperatingSystem(v string) *GetDefaultPatchBaselineOutput { + s.OperatingSystem = &v + return s +} + +type GetDeployablePatchSnapshotForInstanceInput struct { + _ struct{} `type:"structure"` + + // The ID of the instance for which the appropriate patch snapshot should be + // retrieved. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The user-defined snapshot ID. + // + // SnapshotId is a required field + SnapshotId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDeployablePatchSnapshotForInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDeployablePatchSnapshotForInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
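// Illustrative sketch, not part of the vendored SDK source: looking up the default patch
// baseline for one operating system using the GetDefaultPatchBaseline types defined above.
// The OperatingSystemWindows enum constant is assumed to be defined with the other
// OperatingSystem values elsewhere in this package; any valid OperatingSystem string works.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.GetDefaultPatchBaseline(&ssm.GetDefaultPatchBaselineInput{
		OperatingSystem: aws.String(ssm.OperatingSystemWindows),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("default %s baseline: %s\n",
		aws.StringValue(out.OperatingSystem), aws.StringValue(out.BaselineId))
}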
+func (s *GetDeployablePatchSnapshotForInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDeployablePatchSnapshotForInstanceInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.SnapshotId == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotId")) + } + if s.SnapshotId != nil && len(*s.SnapshotId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("SnapshotId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceId sets the InstanceId field's value. +func (s *GetDeployablePatchSnapshotForInstanceInput) SetInstanceId(v string) *GetDeployablePatchSnapshotForInstanceInput { + s.InstanceId = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *GetDeployablePatchSnapshotForInstanceInput) SetSnapshotId(v string) *GetDeployablePatchSnapshotForInstanceInput { + s.SnapshotId = &v + return s +} + +type GetDeployablePatchSnapshotForInstanceOutput struct { + _ struct{} `type:"structure"` + + // The ID of the instance. + InstanceId *string `type:"string"` + + // Returns the specific operating system (for example Windows Server 2012 or + // Amazon Linux 2015.09) on the instance for the specified patch snapshot. + Product *string `type:"string"` + + // A pre-signed Amazon S3 URL that can be used to download the patch snapshot. + SnapshotDownloadUrl *string `type:"string"` + + // The user-defined snapshot ID. + SnapshotId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s GetDeployablePatchSnapshotForInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDeployablePatchSnapshotForInstanceOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *GetDeployablePatchSnapshotForInstanceOutput) SetInstanceId(v string) *GetDeployablePatchSnapshotForInstanceOutput { + s.InstanceId = &v + return s +} + +// SetProduct sets the Product field's value. +func (s *GetDeployablePatchSnapshotForInstanceOutput) SetProduct(v string) *GetDeployablePatchSnapshotForInstanceOutput { + s.Product = &v + return s +} + +// SetSnapshotDownloadUrl sets the SnapshotDownloadUrl field's value. +func (s *GetDeployablePatchSnapshotForInstanceOutput) SetSnapshotDownloadUrl(v string) *GetDeployablePatchSnapshotForInstanceOutput { + s.SnapshotDownloadUrl = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *GetDeployablePatchSnapshotForInstanceOutput) SetSnapshotId(v string) *GetDeployablePatchSnapshotForInstanceOutput { + s.SnapshotId = &v + return s +} + +type GetDocumentInput struct { + _ struct{} `type:"structure"` + + // Returns the document in the specified format. The document format can be + // either JSON or YAML. JSON is the default format. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The document version for which you want information. + DocumentVersion *string `type:"string"` + + // The name of the Systems Manager document. + // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDocumentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDocumentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
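// Illustrative sketch, not part of the vendored SDK source: fetching document content in
// YAML via the DocumentFormat field on GetDocumentInput above. "YAML" is one of the two
// DocumentFormat values named in the comments; the document name is only an example, and
// the Content output field is defined just below.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.GetDocument(&ssm.GetDocumentInput{
		Name:           aws.String("AWS-RunShellScript"),
		DocumentFormat: aws.String("YAML"),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Content holds the raw document body in the requested format.
	fmt.Println(aws.StringValue(out.Content))
}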
+func (s *GetDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDocumentInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *GetDocumentInput) SetDocumentFormat(v string) *GetDocumentInput { + s.DocumentFormat = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *GetDocumentInput) SetDocumentVersion(v string) *GetDocumentInput { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetDocumentInput) SetName(v string) *GetDocumentInput { + s.Name = &v + return s +} + +type GetDocumentOutput struct { + _ struct{} `type:"structure"` + + // The contents of the Systems Manager document. + Content *string `min:"1" type:"string"` + + // The document format, either JSON or YAML. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The document type. + DocumentType *string `type:"string" enum:"DocumentType"` + + // The document version. + DocumentVersion *string `type:"string"` + + // The name of the Systems Manager document. + Name *string `type:"string"` +} + +// String returns the string representation +func (s GetDocumentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDocumentOutput) GoString() string { + return s.String() +} + +// SetContent sets the Content field's value. +func (s *GetDocumentOutput) SetContent(v string) *GetDocumentOutput { + s.Content = &v + return s +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *GetDocumentOutput) SetDocumentFormat(v string) *GetDocumentOutput { + s.DocumentFormat = &v + return s +} + +// SetDocumentType sets the DocumentType field's value. +func (s *GetDocumentOutput) SetDocumentType(v string) *GetDocumentOutput { + s.DocumentType = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *GetDocumentOutput) SetDocumentVersion(v string) *GetDocumentOutput { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetDocumentOutput) SetName(v string) *GetDocumentOutput { + s.Name = &v + return s +} + +type GetInventoryInput struct { + _ struct{} `type:"structure"` + + // Returns counts of inventory types based on one or more expressions. For example, + // if you aggregate by using an expression that uses the AWS:InstanceInformation.PlatformType + // type, you can see a count of how many Windows and Linux instances exist in + // your inventoried fleet. + Aggregators []*InventoryAggregator `min:"1" type:"list"` + + // One or more filters. Use a filter to return a more specific list of results. + Filters []*InventoryFilter `min:"1" type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The list of inventory item types to return. 
+ ResultAttributes []*ResultAttribute `min:"1" type:"list"` +} + +// String returns the string representation +func (s GetInventoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInventoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInventoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInventoryInput"} + if s.Aggregators != nil && len(s.Aggregators) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Aggregators", 1)) + } + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ResultAttributes != nil && len(s.ResultAttributes) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResultAttributes", 1)) + } + if s.Aggregators != nil { + for i, v := range s.Aggregators { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Aggregators", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ResultAttributes != nil { + for i, v := range s.ResultAttributes { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ResultAttributes", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAggregators sets the Aggregators field's value. +func (s *GetInventoryInput) SetAggregators(v []*InventoryAggregator) *GetInventoryInput { + s.Aggregators = v + return s +} + +// SetFilters sets the Filters field's value. +func (s *GetInventoryInput) SetFilters(v []*InventoryFilter) *GetInventoryInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetInventoryInput) SetMaxResults(v int64) *GetInventoryInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetInventoryInput) SetNextToken(v string) *GetInventoryInput { + s.NextToken = &v + return s +} + +// SetResultAttributes sets the ResultAttributes field's value. +func (s *GetInventoryInput) SetResultAttributes(v []*ResultAttribute) *GetInventoryInput { + s.ResultAttributes = v + return s +} + +type GetInventoryOutput struct { + _ struct{} `type:"structure"` + + // Collection of inventory entities such as a collection of instance inventory. + Entities []*InventoryResultEntity `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s GetInventoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInventoryOutput) GoString() string { + return s.String() +} + +// SetEntities sets the Entities field's value. +func (s *GetInventoryOutput) SetEntities(v []*InventoryResultEntity) *GetInventoryOutput { + s.Entities = v + return s +} + +// SetNextToken sets the NextToken field's value. 
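// Illustrative sketch, not part of the vendored SDK source: counting instances by platform
// type, the aggregation use case described in the Aggregators comment above. The Expression
// field on InventoryAggregator is assumed from the rest of this package; only the Entities
// and NextToken output fields defined above are read from the result.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.GetInventory(&ssm.GetInventoryInput{
		Aggregators: []*ssm.InventoryAggregator{
			{Expression: aws.String("AWS:InstanceInformation.PlatformType")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("returned %d aggregated entities (NextToken=%q)\n",
		len(out.Entities), aws.StringValue(out.NextToken))
}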
+func (s *GetInventoryOutput) SetNextToken(v string) *GetInventoryOutput { + s.NextToken = &v + return s +} + +type GetInventorySchemaInput struct { + _ struct{} `type:"structure"` + + // Returns inventory schemas that support aggregation. For example, this call + // returns the AWS:InstanceInformation type, because it supports aggregation + // based on the PlatformName, PlatformType, and PlatformVersion attributes. + Aggregator *bool `type:"boolean"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"50" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // Returns the sub-type schema for a specified inventory type. + SubType *bool `type:"boolean"` + + // The type of inventory item to return. + TypeName *string `type:"string"` +} + +// String returns the string representation +func (s GetInventorySchemaInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInventorySchemaInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInventorySchemaInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInventorySchemaInput"} + if s.MaxResults != nil && *s.MaxResults < 50 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAggregator sets the Aggregator field's value. +func (s *GetInventorySchemaInput) SetAggregator(v bool) *GetInventorySchemaInput { + s.Aggregator = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetInventorySchemaInput) SetMaxResults(v int64) *GetInventorySchemaInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetInventorySchemaInput) SetNextToken(v string) *GetInventorySchemaInput { + s.NextToken = &v + return s +} + +// SetSubType sets the SubType field's value. +func (s *GetInventorySchemaInput) SetSubType(v bool) *GetInventorySchemaInput { + s.SubType = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *GetInventorySchemaInput) SetTypeName(v string) *GetInventorySchemaInput { + s.TypeName = &v + return s +} + +type GetInventorySchemaOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Inventory schemas returned by the request. + Schemas []*InventoryItemSchema `type:"list"` +} + +// String returns the string representation +func (s GetInventorySchemaOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInventorySchemaOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *GetInventorySchemaOutput) SetNextToken(v string) *GetInventorySchemaOutput { + s.NextToken = &v + return s +} + +// SetSchemas sets the Schemas field's value. 
+func (s *GetInventorySchemaOutput) SetSchemas(v []*InventoryItemSchema) *GetInventorySchemaOutput { + s.Schemas = v + return s +} + +type GetMaintenanceWindowExecutionInput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window execution that includes the task. + // + // WindowExecutionId is a required field + WindowExecutionId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetMaintenanceWindowExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetMaintenanceWindowExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetMaintenanceWindowExecutionInput"} + if s.WindowExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowExecutionId")) + } + if s.WindowExecutionId != nil && len(*s.WindowExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowExecutionId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *GetMaintenanceWindowExecutionInput) SetWindowExecutionId(v string) *GetMaintenanceWindowExecutionInput { + s.WindowExecutionId = &v + return s +} + +type GetMaintenanceWindowExecutionOutput struct { + _ struct{} `type:"structure"` + + // The time the Maintenance Window finished executing. + EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time the Maintenance Window started executing. + StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The status of the Maintenance Window execution. + Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` + + // The details explaining the Status. Only available for certain status values. + StatusDetails *string `type:"string"` + + // The ID of the task executions from the Maintenance Window execution. + TaskIds []*string `type:"list"` + + // The ID of the Maintenance Window execution. + WindowExecutionId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s GetMaintenanceWindowExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowExecutionOutput) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *GetMaintenanceWindowExecutionOutput) SetEndTime(v time.Time) *GetMaintenanceWindowExecutionOutput { + s.EndTime = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *GetMaintenanceWindowExecutionOutput) SetStartTime(v time.Time) *GetMaintenanceWindowExecutionOutput { + s.StartTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetMaintenanceWindowExecutionOutput) SetStatus(v string) *GetMaintenanceWindowExecutionOutput { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *GetMaintenanceWindowExecutionOutput) SetStatusDetails(v string) *GetMaintenanceWindowExecutionOutput { + s.StatusDetails = &v + return s +} + +// SetTaskIds sets the TaskIds field's value. 
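// Illustrative sketch, not part of the vendored SDK source: drilling from a Maintenance
// Window execution into its task executions using the TaskIds list defined above. The
// window execution ID is a placeholder; the GetMaintenanceWindowExecutionTask input and
// output types used here (including the task-level Status field) are defined further below
// in this file.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	windowExecutionID := aws.String("11111111-2222-3333-4444-555555555555") // placeholder

	exec, err := svc.GetMaintenanceWindowExecution(&ssm.GetMaintenanceWindowExecutionInput{
		WindowExecutionId: windowExecutionID,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("window execution status:", aws.StringValue(exec.Status))

	for _, taskID := range exec.TaskIds {
		task, err := svc.GetMaintenanceWindowExecutionTask(&ssm.GetMaintenanceWindowExecutionTaskInput{
			WindowExecutionId: windowExecutionID,
			TaskId:            taskID,
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("  task %s: %s\n", aws.StringValue(taskID), aws.StringValue(task.Status))
	}
}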
+func (s *GetMaintenanceWindowExecutionOutput) SetTaskIds(v []*string) *GetMaintenanceWindowExecutionOutput { + s.TaskIds = v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *GetMaintenanceWindowExecutionOutput) SetWindowExecutionId(v string) *GetMaintenanceWindowExecutionOutput { + s.WindowExecutionId = &v + return s +} + +type GetMaintenanceWindowExecutionTaskInput struct { + _ struct{} `type:"structure"` + + // The ID of the specific task execution in the Maintenance Window task that + // should be retrieved. + // + // TaskId is a required field + TaskId *string `min:"36" type:"string" required:"true"` + + // The ID of the Maintenance Window execution that includes the task. + // + // WindowExecutionId is a required field + WindowExecutionId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetMaintenanceWindowExecutionTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowExecutionTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetMaintenanceWindowExecutionTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetMaintenanceWindowExecutionTaskInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 36)) + } + if s.WindowExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowExecutionId")) + } + if s.WindowExecutionId != nil && len(*s.WindowExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowExecutionId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTaskId sets the TaskId field's value. +func (s *GetMaintenanceWindowExecutionTaskInput) SetTaskId(v string) *GetMaintenanceWindowExecutionTaskInput { + s.TaskId = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *GetMaintenanceWindowExecutionTaskInput) SetWindowExecutionId(v string) *GetMaintenanceWindowExecutionTaskInput { + s.WindowExecutionId = &v + return s +} + +type GetMaintenanceWindowExecutionTaskInvocationInput struct { + _ struct{} `type:"structure"` + + // The invocation ID to retrieve. + // + // InvocationId is a required field + InvocationId *string `min:"36" type:"string" required:"true"` + + // The ID of the specific task in the Maintenance Window task that should be + // retrieved. + // + // TaskId is a required field + TaskId *string `min:"36" type:"string" required:"true"` + + // The ID of the Maintenance Window execution for which the task is a part. + // + // WindowExecutionId is a required field + WindowExecutionId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetMaintenanceWindowExecutionTaskInvocationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowExecutionTaskInvocationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetMaintenanceWindowExecutionTaskInvocationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetMaintenanceWindowExecutionTaskInvocationInput"} + if s.InvocationId == nil { + invalidParams.Add(request.NewErrParamRequired("InvocationId")) + } + if s.InvocationId != nil && len(*s.InvocationId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("InvocationId", 36)) + } + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 36)) + } + if s.WindowExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowExecutionId")) + } + if s.WindowExecutionId != nil && len(*s.WindowExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowExecutionId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInvocationId sets the InvocationId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationInput) SetInvocationId(v string) *GetMaintenanceWindowExecutionTaskInvocationInput { + s.InvocationId = &v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationInput) SetTaskId(v string) *GetMaintenanceWindowExecutionTaskInvocationInput { + s.TaskId = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationInput) SetWindowExecutionId(v string) *GetMaintenanceWindowExecutionTaskInvocationInput { + s.WindowExecutionId = &v + return s +} + +type GetMaintenanceWindowExecutionTaskInvocationOutput struct { + _ struct{} `type:"structure"` + + // The time that the task finished executing on the target. + EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The execution ID. + ExecutionId *string `type:"string"` + + // The invocation ID. + InvocationId *string `min:"36" type:"string"` + + // User-provided value to be included in any CloudWatch events raised while + // running tasks for these targets in this Maintenance Window. + OwnerInformation *string `min:"1" type:"string"` + + // The parameters used at the time that the task executed. + Parameters *string `type:"string"` + + // The time that the task started executing on the target. + StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The task status for an invocation. + Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` + + // The details explaining the status. Details are only available for certain + // status values. + StatusDetails *string `type:"string"` + + // The task execution ID. + TaskExecutionId *string `min:"36" type:"string"` + + // Retrieves the task type for a Maintenance Window. Task types include the + // following: LAMBDA, STEP_FUNCTION, AUTOMATION, RUN_COMMAND. + TaskType *string `type:"string" enum:"MaintenanceWindowTaskType"` + + // The Maintenance Window execution ID. + WindowExecutionId *string `min:"36" type:"string"` + + // The Maintenance Window target ID. + WindowTargetId *string `type:"string"` +} + +// String returns the string representation +func (s GetMaintenanceWindowExecutionTaskInvocationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowExecutionTaskInvocationOutput) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. 
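// Illustrative sketch, not part of the vendored SDK source: fetching a single task
// invocation with the three IDs required by GetMaintenanceWindowExecutionTaskInvocationInput
// above. All three IDs are placeholders; Status, StatusDetails, and Parameters are fields of
// the output type defined above.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.GetMaintenanceWindowExecutionTaskInvocation(
		&ssm.GetMaintenanceWindowExecutionTaskInvocationInput{
			WindowExecutionId: aws.String("11111111-2222-3333-4444-555555555555"), // placeholder
			TaskId:            aws.String("22222222-3333-4444-5555-666666666666"), // placeholder
			InvocationId:      aws.String("33333333-4444-5555-6666-777777777777"), // placeholder
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:    ", aws.StringValue(out.Status))
	fmt.Println("details:   ", aws.StringValue(out.StatusDetails))
	fmt.Println("parameters:", aws.StringValue(out.Parameters))
}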
+func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetEndTime(v time.Time) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.EndTime = &v + return s +} + +// SetExecutionId sets the ExecutionId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetExecutionId(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.ExecutionId = &v + return s +} + +// SetInvocationId sets the InvocationId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetInvocationId(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.InvocationId = &v + return s +} + +// SetOwnerInformation sets the OwnerInformation field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetOwnerInformation(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.OwnerInformation = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetParameters(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.Parameters = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetStartTime(v time.Time) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.StartTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetStatus(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetStatusDetails(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.StatusDetails = &v + return s +} + +// SetTaskExecutionId sets the TaskExecutionId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetTaskExecutionId(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.TaskExecutionId = &v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetTaskType(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.TaskType = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetWindowExecutionId(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.WindowExecutionId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *GetMaintenanceWindowExecutionTaskInvocationOutput) SetWindowTargetId(v string) *GetMaintenanceWindowExecutionTaskInvocationOutput { + s.WindowTargetId = &v + return s +} + +type GetMaintenanceWindowExecutionTaskOutput struct { + _ struct{} `type:"structure"` + + // The time the task execution completed. + EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The defined maximum number of task executions that could be run in parallel. + MaxConcurrency *string `min:"1" type:"string"` + + // The defined maximum number of task execution errors allowed before scheduling + // of the task execution would have been stopped. + MaxErrors *string `min:"1" type:"string"` + + // The priority of the task. + Priority *int64 `type:"integer"` + + // The role that was assumed when executing the task. + ServiceRole *string `type:"string"` + + // The time the task execution started. 
+ StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The status of the task. + Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` + + // The details explaining the Status. Only available for certain status values. + StatusDetails *string `type:"string"` + + // The ARN of the executed task. + TaskArn *string `min:"1" type:"string"` + + // The ID of the specific task execution in the Maintenance Window task that + // was retrieved. + TaskExecutionId *string `min:"36" type:"string"` + + // The parameters passed to the task when it was executed. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // The map has the following format: + // + // Key: string, between 1 and 255 characters + // + // Value: an array of strings, each string is between 1 and 255 characters + TaskParameters []map[string]*MaintenanceWindowTaskParameterValueExpression `type:"list"` + + // The type of task executed. + Type *string `type:"string" enum:"MaintenanceWindowTaskType"` + + // The ID of the Maintenance Window execution that includes the task. + WindowExecutionId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s GetMaintenanceWindowExecutionTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowExecutionTaskOutput) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetEndTime(v time.Time) *GetMaintenanceWindowExecutionTaskOutput { + s.EndTime = &v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetMaxConcurrency(v string) *GetMaintenanceWindowExecutionTaskOutput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetMaxErrors(v string) *GetMaintenanceWindowExecutionTaskOutput { + s.MaxErrors = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetPriority(v int64) *GetMaintenanceWindowExecutionTaskOutput { + s.Priority = &v + return s +} + +// SetServiceRole sets the ServiceRole field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetServiceRole(v string) *GetMaintenanceWindowExecutionTaskOutput { + s.ServiceRole = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetStartTime(v time.Time) *GetMaintenanceWindowExecutionTaskOutput { + s.StartTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetStatus(v string) *GetMaintenanceWindowExecutionTaskOutput { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *GetMaintenanceWindowExecutionTaskOutput) SetStatusDetails(v string) *GetMaintenanceWindowExecutionTaskOutput { + s.StatusDetails = &v + return s +} + +// SetTaskArn sets the TaskArn field's value. 
+func (s *GetMaintenanceWindowExecutionTaskOutput) SetTaskArn(v string) *GetMaintenanceWindowExecutionTaskOutput {
+ s.TaskArn = &v
+ return s
+}
+
+// SetTaskExecutionId sets the TaskExecutionId field's value.
+func (s *GetMaintenanceWindowExecutionTaskOutput) SetTaskExecutionId(v string) *GetMaintenanceWindowExecutionTaskOutput {
+ s.TaskExecutionId = &v
+ return s
+}
+
+// SetTaskParameters sets the TaskParameters field's value.
+func (s *GetMaintenanceWindowExecutionTaskOutput) SetTaskParameters(v []map[string]*MaintenanceWindowTaskParameterValueExpression) *GetMaintenanceWindowExecutionTaskOutput {
+ s.TaskParameters = v
+ return s
+}
+
+// SetType sets the Type field's value.
+func (s *GetMaintenanceWindowExecutionTaskOutput) SetType(v string) *GetMaintenanceWindowExecutionTaskOutput {
+ s.Type = &v
+ return s
+}
+
+// SetWindowExecutionId sets the WindowExecutionId field's value.
+func (s *GetMaintenanceWindowExecutionTaskOutput) SetWindowExecutionId(v string) *GetMaintenanceWindowExecutionTaskOutput {
+ s.WindowExecutionId = &v
+ return s
+}
+
+type GetMaintenanceWindowInput struct {
+ _ struct{} `type:"structure"`
+
+ // The ID of the desired Maintenance Window.
+ //
+ // WindowId is a required field
+ WindowId *string `min:"20" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetMaintenanceWindowInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetMaintenanceWindowInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *GetMaintenanceWindowInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "GetMaintenanceWindowInput"}
+ if s.WindowId == nil {
+ invalidParams.Add(request.NewErrParamRequired("WindowId"))
+ }
+ if s.WindowId != nil && len(*s.WindowId) < 20 {
+ invalidParams.Add(request.NewErrParamMinLen("WindowId", 20))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetWindowId sets the WindowId field's value.
+func (s *GetMaintenanceWindowInput) SetWindowId(v string) *GetMaintenanceWindowInput {
+ s.WindowId = &v
+ return s
+}
+
+type GetMaintenanceWindowOutput struct {
+ _ struct{} `type:"structure"`
+
+ // Whether targets must be registered with the Maintenance Window before tasks
+ // can be defined for those targets.
+ AllowUnassociatedTargets *bool `type:"boolean"`
+
+ // The date the Maintenance Window was created.
+ CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // The number of hours before the end of the Maintenance Window that Systems
+ // Manager stops scheduling new tasks for execution.
+ Cutoff *int64 `type:"integer"`
+
+ // The description of the Maintenance Window.
+ Description *string `min:"1" type:"string"`
+
+ // The duration of the Maintenance Window in hours.
+ Duration *int64 `min:"1" type:"integer"`
+
+ // Whether the Maintenance Window is enabled.
+ Enabled *bool `type:"boolean"`
+
+ // The date the Maintenance Window was last modified.
+ ModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // The name of the Maintenance Window.
+ Name *string `min:"3" type:"string"`
+
+ // The schedule of the Maintenance Window in the form of a cron or rate expression.
+ Schedule *string `min:"1" type:"string"`
+
+ // The ID of the created Maintenance Window.
+ WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s GetMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetAllowUnassociatedTargets sets the AllowUnassociatedTargets field's value. +func (s *GetMaintenanceWindowOutput) SetAllowUnassociatedTargets(v bool) *GetMaintenanceWindowOutput { + s.AllowUnassociatedTargets = &v + return s +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *GetMaintenanceWindowOutput) SetCreatedDate(v time.Time) *GetMaintenanceWindowOutput { + s.CreatedDate = &v + return s +} + +// SetCutoff sets the Cutoff field's value. +func (s *GetMaintenanceWindowOutput) SetCutoff(v int64) *GetMaintenanceWindowOutput { + s.Cutoff = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *GetMaintenanceWindowOutput) SetDescription(v string) *GetMaintenanceWindowOutput { + s.Description = &v + return s +} + +// SetDuration sets the Duration field's value. +func (s *GetMaintenanceWindowOutput) SetDuration(v int64) *GetMaintenanceWindowOutput { + s.Duration = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *GetMaintenanceWindowOutput) SetEnabled(v bool) *GetMaintenanceWindowOutput { + s.Enabled = &v + return s +} + +// SetModifiedDate sets the ModifiedDate field's value. +func (s *GetMaintenanceWindowOutput) SetModifiedDate(v time.Time) *GetMaintenanceWindowOutput { + s.ModifiedDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetMaintenanceWindowOutput) SetName(v string) *GetMaintenanceWindowOutput { + s.Name = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *GetMaintenanceWindowOutput) SetSchedule(v string) *GetMaintenanceWindowOutput { + s.Schedule = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *GetMaintenanceWindowOutput) SetWindowId(v string) *GetMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +type GetMaintenanceWindowTaskInput struct { + _ struct{} `type:"structure"` + + // The Maintenance Window ID that includes the task to retrieve. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` + + // The Maintenance Window task ID to retrieve. + // + // WindowTaskId is a required field + WindowTaskId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetMaintenanceWindowTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetMaintenanceWindowTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetMaintenanceWindowTaskInput"} + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.WindowTaskId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowTaskId")) + } + if s.WindowTaskId != nil && len(*s.WindowTaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowTaskId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWindowId sets the WindowId field's value. +func (s *GetMaintenanceWindowTaskInput) SetWindowId(v string) *GetMaintenanceWindowTaskInput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *GetMaintenanceWindowTaskInput) SetWindowTaskId(v string) *GetMaintenanceWindowTaskInput { + s.WindowTaskId = &v + return s +} + +type GetMaintenanceWindowTaskOutput struct { + _ struct{} `type:"structure"` + + // The retrieved task description. + Description *string `min:"1" type:"string"` + + // The location in Amazon S3 where the task results are logged. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + LoggingInfo *LoggingInfo `type:"structure"` + + // The maximum number of targets allowed to run this task in parallel. + MaxConcurrency *string `min:"1" type:"string"` + + // The maximum number of errors allowed before the task stops being scheduled. + MaxErrors *string `min:"1" type:"string"` + + // The retrieved task name. + Name *string `min:"3" type:"string"` + + // The priority of the task when it executes. The lower the number, the higher + // the priority. Tasks that have the same priority are scheduled in parallel. + Priority *int64 `type:"integer"` + + // The IAM service role to assume during task execution. + ServiceRoleArn *string `type:"string"` + + // The targets where the task should execute. + Targets []*Target `type:"list"` + + // The resource that the task used during execution. For RUN_COMMAND and AUTOMATION + // task types, the TaskArn is the Systems Manager Document name/ARN. For LAMBDA + // tasks, the value is the function name/ARN. For STEP_FUNCTION tasks, the value + // is the state machine ARN. + TaskArn *string `min:"1" type:"string"` + + // The parameters to pass to the task when it executes. + TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` + + // The parameters to pass to the task when it executes. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` + + // The type of task to execute. + TaskType *string `type:"string" enum:"MaintenanceWindowTaskType"` + + // The retrieved Maintenance Window ID. 
+ WindowId *string `min:"20" type:"string"` + + // The retrieved Maintenance Window task ID. + WindowTaskId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s GetMaintenanceWindowTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMaintenanceWindowTaskOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *GetMaintenanceWindowTaskOutput) SetDescription(v string) *GetMaintenanceWindowTaskOutput { + s.Description = &v + return s +} + +// SetLoggingInfo sets the LoggingInfo field's value. +func (s *GetMaintenanceWindowTaskOutput) SetLoggingInfo(v *LoggingInfo) *GetMaintenanceWindowTaskOutput { + s.LoggingInfo = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *GetMaintenanceWindowTaskOutput) SetMaxConcurrency(v string) *GetMaintenanceWindowTaskOutput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *GetMaintenanceWindowTaskOutput) SetMaxErrors(v string) *GetMaintenanceWindowTaskOutput { + s.MaxErrors = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetMaintenanceWindowTaskOutput) SetName(v string) *GetMaintenanceWindowTaskOutput { + s.Name = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *GetMaintenanceWindowTaskOutput) SetPriority(v int64) *GetMaintenanceWindowTaskOutput { + s.Priority = &v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. +func (s *GetMaintenanceWindowTaskOutput) SetServiceRoleArn(v string) *GetMaintenanceWindowTaskOutput { + s.ServiceRoleArn = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *GetMaintenanceWindowTaskOutput) SetTargets(v []*Target) *GetMaintenanceWindowTaskOutput { + s.Targets = v + return s +} + +// SetTaskArn sets the TaskArn field's value. +func (s *GetMaintenanceWindowTaskOutput) SetTaskArn(v string) *GetMaintenanceWindowTaskOutput { + s.TaskArn = &v + return s +} + +// SetTaskInvocationParameters sets the TaskInvocationParameters field's value. +func (s *GetMaintenanceWindowTaskOutput) SetTaskInvocationParameters(v *MaintenanceWindowTaskInvocationParameters) *GetMaintenanceWindowTaskOutput { + s.TaskInvocationParameters = v + return s +} + +// SetTaskParameters sets the TaskParameters field's value. +func (s *GetMaintenanceWindowTaskOutput) SetTaskParameters(v map[string]*MaintenanceWindowTaskParameterValueExpression) *GetMaintenanceWindowTaskOutput { + s.TaskParameters = v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *GetMaintenanceWindowTaskOutput) SetTaskType(v string) *GetMaintenanceWindowTaskOutput { + s.TaskType = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *GetMaintenanceWindowTaskOutput) SetWindowId(v string) *GetMaintenanceWindowTaskOutput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *GetMaintenanceWindowTaskOutput) SetWindowTaskId(v string) *GetMaintenanceWindowTaskOutput { + s.WindowTaskId = &v + return s +} + +type GetParameterHistoryInput struct { + _ struct{} `type:"structure"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The name of a parameter you want to query. 
+ // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // Return decrypted values for secure string parameters. This flag is ignored + // for String and StringList parameter types. + WithDecryption *bool `type:"boolean"` +} + +// String returns the string representation +func (s GetParameterHistoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetParameterHistoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetParameterHistoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetParameterHistoryInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetParameterHistoryInput) SetMaxResults(v int64) *GetParameterHistoryInput { + s.MaxResults = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetParameterHistoryInput) SetName(v string) *GetParameterHistoryInput { + s.Name = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetParameterHistoryInput) SetNextToken(v string) *GetParameterHistoryInput { + s.NextToken = &v + return s +} + +// SetWithDecryption sets the WithDecryption field's value. +func (s *GetParameterHistoryInput) SetWithDecryption(v bool) *GetParameterHistoryInput { + s.WithDecryption = &v + return s +} + +type GetParameterHistoryOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // A list of parameters returned by the request. + Parameters []*ParameterHistory `type:"list"` +} + +// String returns the string representation +func (s GetParameterHistoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetParameterHistoryOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *GetParameterHistoryOutput) SetNextToken(v string) *GetParameterHistoryOutput { + s.NextToken = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *GetParameterHistoryOutput) SetParameters(v []*ParameterHistory) *GetParameterHistoryOutput { + s.Parameters = v + return s +} + +type GetParameterInput struct { + _ struct{} `type:"structure"` + + // The name of the parameter you want to query. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // Return decrypted values for secure string parameters. This flag is ignored + // for String and StringList parameter types. 
+ WithDecryption *bool `type:"boolean"` +} + +// String returns the string representation +func (s GetParameterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetParameterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetParameterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetParameterInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *GetParameterInput) SetName(v string) *GetParameterInput { + s.Name = &v + return s +} + +// SetWithDecryption sets the WithDecryption field's value. +func (s *GetParameterInput) SetWithDecryption(v bool) *GetParameterInput { + s.WithDecryption = &v + return s +} + +type GetParameterOutput struct { + _ struct{} `type:"structure"` + + // Information about a parameter. + Parameter *Parameter `type:"structure"` +} + +// String returns the string representation +func (s GetParameterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetParameterOutput) GoString() string { + return s.String() +} + +// SetParameter sets the Parameter field's value. +func (s *GetParameterOutput) SetParameter(v *Parameter) *GetParameterOutput { + s.Parameter = v + return s +} + +type GetParametersByPathInput struct { + _ struct{} `type:"structure"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` + + // Filters to limit the request results. + // + // You can't filter using the parameter name. + ParameterFilters []*ParameterStringFilter `type:"list"` + + // The hierarchy for the parameter. Hierarchies start with a forward slash (/) + // and end with the parameter name. A hierarchy can have a maximum of 15 levels. + // Here is an example of a hierarchy: /Finance/Prod/IAD/WinServ2016/license33 + // + // Path is a required field + Path *string `min:"1" type:"string" required:"true"` + + // Retrieve all parameters within a hierarchy. + // + // If a user has access to a path, then the user can access all levels of that + // path. For example, if a user has permission to access path /a, then the user + // can also access /a/b. Even if a user has explicitly been denied access in + // IAM for parameter /a, they can still call the GetParametersByPath API action + // recursively and view /a/b. + Recursive *bool `type:"boolean"` + + // Retrieve all parameters in a hierarchy with their value decrypted. + WithDecryption *bool `type:"boolean"` +} + +// String returns the string representation +func (s GetParametersByPathInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetParametersByPathInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetParametersByPathInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetParametersByPathInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Path == nil { + invalidParams.Add(request.NewErrParamRequired("Path")) + } + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + if s.ParameterFilters != nil { + for i, v := range s.ParameterFilters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterFilters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetParametersByPathInput) SetMaxResults(v int64) *GetParametersByPathInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetParametersByPathInput) SetNextToken(v string) *GetParametersByPathInput { + s.NextToken = &v + return s +} + +// SetParameterFilters sets the ParameterFilters field's value. +func (s *GetParametersByPathInput) SetParameterFilters(v []*ParameterStringFilter) *GetParametersByPathInput { + s.ParameterFilters = v + return s +} + +// SetPath sets the Path field's value. +func (s *GetParametersByPathInput) SetPath(v string) *GetParametersByPathInput { + s.Path = &v + return s +} + +// SetRecursive sets the Recursive field's value. +func (s *GetParametersByPathInput) SetRecursive(v bool) *GetParametersByPathInput { + s.Recursive = &v + return s +} + +// SetWithDecryption sets the WithDecryption field's value. +func (s *GetParametersByPathInput) SetWithDecryption(v bool) *GetParametersByPathInput { + s.WithDecryption = &v + return s +} + +type GetParametersByPathOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` + + // A list of parameters found in the specified hierarchy. + Parameters []*Parameter `type:"list"` +} + +// String returns the string representation +func (s GetParametersByPathOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetParametersByPathOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *GetParametersByPathOutput) SetNextToken(v string) *GetParametersByPathOutput { + s.NextToken = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *GetParametersByPathOutput) SetParameters(v []*Parameter) *GetParametersByPathOutput { + s.Parameters = v + return s +} + +type GetParametersInput struct { + _ struct{} `type:"structure"` + + // Names of the parameters for which you want to query information. + // + // Names is a required field + Names []*string `min:"1" type:"list" required:"true"` + + // Return decrypted secure string value. Return decrypted values for secure + // string parameters. This flag is ignored for String and StringList parameter + // types. 
+ WithDecryption *bool `type:"boolean"`
+}
+
+// String returns the string representation
+func (s GetParametersInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetParametersInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *GetParametersInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "GetParametersInput"}
+ if s.Names == nil {
+ invalidParams.Add(request.NewErrParamRequired("Names"))
+ }
+ if s.Names != nil && len(s.Names) < 1 {
+ invalidParams.Add(request.NewErrParamMinLen("Names", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetNames sets the Names field's value.
+func (s *GetParametersInput) SetNames(v []*string) *GetParametersInput {
+ s.Names = v
+ return s
+}
+
+// SetWithDecryption sets the WithDecryption field's value.
+func (s *GetParametersInput) SetWithDecryption(v bool) *GetParametersInput {
+ s.WithDecryption = &v
+ return s
+}
+
+type GetParametersOutput struct {
+ _ struct{} `type:"structure"`
+
+ // A list of parameters that are not formatted correctly or do not run when
+ // executed.
+ InvalidParameters []*string `min:"1" type:"list"`
+
+ // A list of details for a parameter.
+ Parameters []*Parameter `type:"list"`
+}
+
+// String returns the string representation
+func (s GetParametersOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetParametersOutput) GoString() string {
+ return s.String()
+}
+
+// SetInvalidParameters sets the InvalidParameters field's value.
+func (s *GetParametersOutput) SetInvalidParameters(v []*string) *GetParametersOutput {
+ s.InvalidParameters = v
+ return s
+}
+
+// SetParameters sets the Parameters field's value.
+func (s *GetParametersOutput) SetParameters(v []*Parameter) *GetParametersOutput {
+ s.Parameters = v
+ return s
+}
+
+type GetPatchBaselineForPatchGroupInput struct {
+ _ struct{} `type:"structure"`
+
+ // Returns the operating system rule specified for patch groups using the patch
+ // baseline.
+ OperatingSystem *string `type:"string" enum:"OperatingSystem"`
+
+ // The name of the patch group whose patch baseline should be retrieved.
+ //
+ // PatchGroup is a required field
+ PatchGroup *string `min:"1" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetPatchBaselineForPatchGroupInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetPatchBaselineForPatchGroupInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *GetPatchBaselineForPatchGroupInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "GetPatchBaselineForPatchGroupInput"}
+ if s.PatchGroup == nil {
+ invalidParams.Add(request.NewErrParamRequired("PatchGroup"))
+ }
+ if s.PatchGroup != nil && len(*s.PatchGroup) < 1 {
+ invalidParams.Add(request.NewErrParamMinLen("PatchGroup", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetOperatingSystem sets the OperatingSystem field's value.
+func (s *GetPatchBaselineForPatchGroupInput) SetOperatingSystem(v string) *GetPatchBaselineForPatchGroupInput {
+ s.OperatingSystem = &v
+ return s
+}
+
+// SetPatchGroup sets the PatchGroup field's value.
+func (s *GetPatchBaselineForPatchGroupInput) SetPatchGroup(v string) *GetPatchBaselineForPatchGroupInput { + s.PatchGroup = &v + return s +} + +type GetPatchBaselineForPatchGroupOutput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline that should be used for the patch group. + BaselineId *string `min:"20" type:"string"` + + // The operating system rule specified for patch groups using the patch baseline. + OperatingSystem *string `type:"string" enum:"OperatingSystem"` + + // The name of the patch group. + PatchGroup *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetPatchBaselineForPatchGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPatchBaselineForPatchGroupOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *GetPatchBaselineForPatchGroupOutput) SetBaselineId(v string) *GetPatchBaselineForPatchGroupOutput { + s.BaselineId = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *GetPatchBaselineForPatchGroupOutput) SetOperatingSystem(v string) *GetPatchBaselineForPatchGroupOutput { + s.OperatingSystem = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *GetPatchBaselineForPatchGroupOutput) SetPatchGroup(v string) *GetPatchBaselineForPatchGroupOutput { + s.PatchGroup = &v + return s +} + +type GetPatchBaselineInput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline to retrieve. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetPatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPatchBaselineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetPatchBaselineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPatchBaselineInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBaselineId sets the BaselineId field's value. +func (s *GetPatchBaselineInput) SetBaselineId(v string) *GetPatchBaselineInput { + s.BaselineId = &v + return s +} + +type GetPatchBaselineOutput struct { + _ struct{} `type:"structure"` + + // A set of rules used to include patches in the baseline. + ApprovalRules *PatchRuleGroup `type:"structure"` + + // A list of explicitly approved patches for the baseline. + ApprovedPatches []*string `type:"list"` + + // Returns the specified compliance severity level for approved patches in the + // patch baseline. + ApprovedPatchesComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` + + // Indicates whether the list of approved patches includes non-security updates + // that should be applied to the instances. The default value is 'false'. Applies + // to Linux instances only. + ApprovedPatchesEnableNonSecurity *bool `type:"boolean"` + + // The ID of the retrieved patch baseline. + BaselineId *string `min:"20" type:"string"` + + // The date the patch baseline was created. 
+ CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // A description of the patch baseline. + Description *string `min:"1" type:"string"` + + // A set of global filters used to exclude patches from the baseline. + GlobalFilters *PatchFilterGroup `type:"structure"` + + // The date the patch baseline was last modified. + ModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The name of the patch baseline. + Name *string `min:"3" type:"string"` + + // Returns the operating system specified for the patch baseline. + OperatingSystem *string `type:"string" enum:"OperatingSystem"` + + // Patch groups included in the patch baseline. + PatchGroups []*string `type:"list"` + + // A list of explicitly rejected patches for the baseline. + RejectedPatches []*string `type:"list"` + + // Information about the patches to use to update the instances, including target + // operating systems and source repositories. Applies to Linux instances only. + Sources []*PatchSource `type:"list"` +} + +// String returns the string representation +func (s GetPatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPatchBaselineOutput) GoString() string { + return s.String() +} + +// SetApprovalRules sets the ApprovalRules field's value. +func (s *GetPatchBaselineOutput) SetApprovalRules(v *PatchRuleGroup) *GetPatchBaselineOutput { + s.ApprovalRules = v + return s +} + +// SetApprovedPatches sets the ApprovedPatches field's value. +func (s *GetPatchBaselineOutput) SetApprovedPatches(v []*string) *GetPatchBaselineOutput { + s.ApprovedPatches = v + return s +} + +// SetApprovedPatchesComplianceLevel sets the ApprovedPatchesComplianceLevel field's value. +func (s *GetPatchBaselineOutput) SetApprovedPatchesComplianceLevel(v string) *GetPatchBaselineOutput { + s.ApprovedPatchesComplianceLevel = &v + return s +} + +// SetApprovedPatchesEnableNonSecurity sets the ApprovedPatchesEnableNonSecurity field's value. +func (s *GetPatchBaselineOutput) SetApprovedPatchesEnableNonSecurity(v bool) *GetPatchBaselineOutput { + s.ApprovedPatchesEnableNonSecurity = &v + return s +} + +// SetBaselineId sets the BaselineId field's value. +func (s *GetPatchBaselineOutput) SetBaselineId(v string) *GetPatchBaselineOutput { + s.BaselineId = &v + return s +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *GetPatchBaselineOutput) SetCreatedDate(v time.Time) *GetPatchBaselineOutput { + s.CreatedDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *GetPatchBaselineOutput) SetDescription(v string) *GetPatchBaselineOutput { + s.Description = &v + return s +} + +// SetGlobalFilters sets the GlobalFilters field's value. +func (s *GetPatchBaselineOutput) SetGlobalFilters(v *PatchFilterGroup) *GetPatchBaselineOutput { + s.GlobalFilters = v + return s +} + +// SetModifiedDate sets the ModifiedDate field's value. +func (s *GetPatchBaselineOutput) SetModifiedDate(v time.Time) *GetPatchBaselineOutput { + s.ModifiedDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetPatchBaselineOutput) SetName(v string) *GetPatchBaselineOutput { + s.Name = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *GetPatchBaselineOutput) SetOperatingSystem(v string) *GetPatchBaselineOutput { + s.OperatingSystem = &v + return s +} + +// SetPatchGroups sets the PatchGroups field's value. 
+func (s *GetPatchBaselineOutput) SetPatchGroups(v []*string) *GetPatchBaselineOutput { + s.PatchGroups = v + return s +} + +// SetRejectedPatches sets the RejectedPatches field's value. +func (s *GetPatchBaselineOutput) SetRejectedPatches(v []*string) *GetPatchBaselineOutput { + s.RejectedPatches = v + return s +} + +// SetSources sets the Sources field's value. +func (s *GetPatchBaselineOutput) SetSources(v []*PatchSource) *GetPatchBaselineOutput { + s.Sources = v + return s +} + +// Status information about the aggregated associations. +type InstanceAggregatedAssociationOverview struct { + _ struct{} `type:"structure"` + + // Detailed status information about the aggregated associations. + DetailedStatus *string `type:"string"` + + // The number of associations for the instance(s). + InstanceAssociationStatusAggregatedCount map[string]*int64 `type:"map"` +} + +// String returns the string representation +func (s InstanceAggregatedAssociationOverview) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceAggregatedAssociationOverview) GoString() string { + return s.String() +} + +// SetDetailedStatus sets the DetailedStatus field's value. +func (s *InstanceAggregatedAssociationOverview) SetDetailedStatus(v string) *InstanceAggregatedAssociationOverview { + s.DetailedStatus = &v + return s +} + +// SetInstanceAssociationStatusAggregatedCount sets the InstanceAssociationStatusAggregatedCount field's value. +func (s *InstanceAggregatedAssociationOverview) SetInstanceAssociationStatusAggregatedCount(v map[string]*int64) *InstanceAggregatedAssociationOverview { + s.InstanceAssociationStatusAggregatedCount = v + return s +} + +// One or more association documents on the instance. +type InstanceAssociation struct { + _ struct{} `type:"structure"` + + // The association ID. + AssociationId *string `type:"string"` + + // Version information for the association on the instance. + AssociationVersion *string `type:"string"` + + // The content of the association document for the instance(s). + Content *string `min:"1" type:"string"` + + // The instance ID. + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s InstanceAssociation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceAssociation) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *InstanceAssociation) SetAssociationId(v string) *InstanceAssociation { + s.AssociationId = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *InstanceAssociation) SetAssociationVersion(v string) *InstanceAssociation { + s.AssociationVersion = &v + return s +} + +// SetContent sets the Content field's value. +func (s *InstanceAssociation) SetContent(v string) *InstanceAssociation { + s.Content = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceAssociation) SetInstanceId(v string) *InstanceAssociation { + s.InstanceId = &v + return s +} + +// An Amazon S3 bucket where you want to store the results of this request. +type InstanceAssociationOutputLocation struct { + _ struct{} `type:"structure"` + + // An Amazon S3 bucket where you want to store the results of this request. 
+ S3Location *S3OutputLocation `type:"structure"`
+}
+
+// String returns the string representation
+func (s InstanceAssociationOutputLocation) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InstanceAssociationOutputLocation) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *InstanceAssociationOutputLocation) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "InstanceAssociationOutputLocation"}
+ if s.S3Location != nil {
+ if err := s.S3Location.Validate(); err != nil {
+ invalidParams.AddNested("S3Location", err.(request.ErrInvalidParams))
+ }
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetS3Location sets the S3Location field's value.
+func (s *InstanceAssociationOutputLocation) SetS3Location(v *S3OutputLocation) *InstanceAssociationOutputLocation {
+ s.S3Location = v
+ return s
+}
+
+// The URL of the Amazon S3 bucket where you want to store the results of this request.
+type InstanceAssociationOutputUrl struct {
+ _ struct{} `type:"structure"`
+
+ // The URL of the Amazon S3 bucket where you want to store the results of this request.
+ S3OutputUrl *S3OutputUrl `type:"structure"`
+}
+
+// String returns the string representation
+func (s InstanceAssociationOutputUrl) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InstanceAssociationOutputUrl) GoString() string {
+ return s.String()
+}
+
+// SetS3OutputUrl sets the S3OutputUrl field's value.
+func (s *InstanceAssociationOutputUrl) SetS3OutputUrl(v *S3OutputUrl) *InstanceAssociationOutputUrl {
+ s.S3OutputUrl = v
+ return s
+}
+
+// Status information about the instance association.
+type InstanceAssociationStatusInfo struct {
+ _ struct{} `type:"structure"`
+
+ // The association ID.
+ AssociationId *string `type:"string"`
+
+ // The name of the association applied to the instance.
+ AssociationName *string `type:"string"`
+
+ // The version of the association applied to the instance.
+ AssociationVersion *string `type:"string"`
+
+ // Detailed status information about the instance association.
+ DetailedStatus *string `type:"string"`
+
+ // The association document version.
+ DocumentVersion *string `type:"string"`
+
+ // An error code returned by the request to create the association.
+ ErrorCode *string `type:"string"`
+
+ // The date the instance association executed.
+ ExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // Summary information about association execution.
+ ExecutionSummary *string `min:"1" type:"string"`
+
+ // The instance ID where the association was created.
+ InstanceId *string `type:"string"`
+
+ // The name of the association.
+ Name *string `type:"string"`
+
+ // A URL for an Amazon S3 bucket where you want to store the results of this
+ // request.
+ OutputUrl *InstanceAssociationOutputUrl `type:"structure"`
+
+ // Status information about the instance association.
+ Status *string `type:"string"`
+}
+
+// String returns the string representation
+func (s InstanceAssociationStatusInfo) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InstanceAssociationStatusInfo) GoString() string {
+ return s.String()
+}
+
+// SetAssociationId sets the AssociationId field's value.
+func (s *InstanceAssociationStatusInfo) SetAssociationId(v string) *InstanceAssociationStatusInfo { + s.AssociationId = &v + return s +} + +// SetAssociationName sets the AssociationName field's value. +func (s *InstanceAssociationStatusInfo) SetAssociationName(v string) *InstanceAssociationStatusInfo { + s.AssociationName = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *InstanceAssociationStatusInfo) SetAssociationVersion(v string) *InstanceAssociationStatusInfo { + s.AssociationVersion = &v + return s +} + +// SetDetailedStatus sets the DetailedStatus field's value. +func (s *InstanceAssociationStatusInfo) SetDetailedStatus(v string) *InstanceAssociationStatusInfo { + s.DetailedStatus = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *InstanceAssociationStatusInfo) SetDocumentVersion(v string) *InstanceAssociationStatusInfo { + s.DocumentVersion = &v + return s +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *InstanceAssociationStatusInfo) SetErrorCode(v string) *InstanceAssociationStatusInfo { + s.ErrorCode = &v + return s +} + +// SetExecutionDate sets the ExecutionDate field's value. +func (s *InstanceAssociationStatusInfo) SetExecutionDate(v time.Time) *InstanceAssociationStatusInfo { + s.ExecutionDate = &v + return s +} + +// SetExecutionSummary sets the ExecutionSummary field's value. +func (s *InstanceAssociationStatusInfo) SetExecutionSummary(v string) *InstanceAssociationStatusInfo { + s.ExecutionSummary = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceAssociationStatusInfo) SetInstanceId(v string) *InstanceAssociationStatusInfo { + s.InstanceId = &v + return s +} + +// SetName sets the Name field's value. +func (s *InstanceAssociationStatusInfo) SetName(v string) *InstanceAssociationStatusInfo { + s.Name = &v + return s +} + +// SetOutputUrl sets the OutputUrl field's value. +func (s *InstanceAssociationStatusInfo) SetOutputUrl(v *InstanceAssociationOutputUrl) *InstanceAssociationStatusInfo { + s.OutputUrl = v + return s +} + +// SetStatus sets the Status field's value. +func (s *InstanceAssociationStatusInfo) SetStatus(v string) *InstanceAssociationStatusInfo { + s.Status = &v + return s +} + +// Describes a filter for a specific list of instances. +type InstanceInformation struct { + _ struct{} `type:"structure"` + + // The activation ID created by Systems Manager when the server or VM was registered. + ActivationId *string `type:"string"` + + // The version of the SSM Agent running on your Linux instance. + AgentVersion *string `type:"string"` + + // Information about the association. + AssociationOverview *InstanceAggregatedAssociationOverview `type:"structure"` + + // The status of the association. + AssociationStatus *string `type:"string"` + + // The fully qualified host name of the managed instance. + ComputerName *string `min:"1" type:"string"` + + // The IP address of the managed instance. + IPAddress *string `min:"1" type:"string"` + + // The Amazon Identity and Access Management (IAM) role assigned to EC2 instances + // or managed instances. + IamRole *string `type:"string"` + + // The instance ID. + InstanceId *string `type:"string"` + + // Indicates whether latest version of the SSM Agent is running on your instance. + // Some older versions of Windows Server use the EC2Config service to process + // SSM requests. 
For this reason, this field does not indicate whether or not + // the latest version is installed on Windows managed instances. + IsLatestVersion *bool `type:"boolean"` + + // The date the association was last executed. + LastAssociationExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The date and time when agent last pinged Systems Manager service. + LastPingDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The last date the association was successfully run. + LastSuccessfulAssociationExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The name of the managed instance. + Name *string `type:"string"` + + // Connection status of the SSM Agent. + PingStatus *string `type:"string" enum:"PingStatus"` + + // The name of the operating system platform running on your instance. + PlatformName *string `type:"string"` + + // The operating system platform type. + PlatformType *string `type:"string" enum:"PlatformType"` + + // The version of the OS platform running on your instance. + PlatformVersion *string `type:"string"` + + // The date the server or VM was registered with AWS as a managed instance. + RegistrationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The type of instance. Instances are either EC2 instances or managed instances. + ResourceType *string `type:"string" enum:"ResourceType"` +} + +// String returns the string representation +func (s InstanceInformation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceInformation) GoString() string { + return s.String() +} + +// SetActivationId sets the ActivationId field's value. +func (s *InstanceInformation) SetActivationId(v string) *InstanceInformation { + s.ActivationId = &v + return s +} + +// SetAgentVersion sets the AgentVersion field's value. +func (s *InstanceInformation) SetAgentVersion(v string) *InstanceInformation { + s.AgentVersion = &v + return s +} + +// SetAssociationOverview sets the AssociationOverview field's value. +func (s *InstanceInformation) SetAssociationOverview(v *InstanceAggregatedAssociationOverview) *InstanceInformation { + s.AssociationOverview = v + return s +} + +// SetAssociationStatus sets the AssociationStatus field's value. +func (s *InstanceInformation) SetAssociationStatus(v string) *InstanceInformation { + s.AssociationStatus = &v + return s +} + +// SetComputerName sets the ComputerName field's value. +func (s *InstanceInformation) SetComputerName(v string) *InstanceInformation { + s.ComputerName = &v + return s +} + +// SetIPAddress sets the IPAddress field's value. +func (s *InstanceInformation) SetIPAddress(v string) *InstanceInformation { + s.IPAddress = &v + return s +} + +// SetIamRole sets the IamRole field's value. +func (s *InstanceInformation) SetIamRole(v string) *InstanceInformation { + s.IamRole = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceInformation) SetInstanceId(v string) *InstanceInformation { + s.InstanceId = &v + return s +} + +// SetIsLatestVersion sets the IsLatestVersion field's value. +func (s *InstanceInformation) SetIsLatestVersion(v bool) *InstanceInformation { + s.IsLatestVersion = &v + return s +} + +// SetLastAssociationExecutionDate sets the LastAssociationExecutionDate field's value. 
+func (s *InstanceInformation) SetLastAssociationExecutionDate(v time.Time) *InstanceInformation { + s.LastAssociationExecutionDate = &v + return s +} + +// SetLastPingDateTime sets the LastPingDateTime field's value. +func (s *InstanceInformation) SetLastPingDateTime(v time.Time) *InstanceInformation { + s.LastPingDateTime = &v + return s +} + +// SetLastSuccessfulAssociationExecutionDate sets the LastSuccessfulAssociationExecutionDate field's value. +func (s *InstanceInformation) SetLastSuccessfulAssociationExecutionDate(v time.Time) *InstanceInformation { + s.LastSuccessfulAssociationExecutionDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *InstanceInformation) SetName(v string) *InstanceInformation { + s.Name = &v + return s +} + +// SetPingStatus sets the PingStatus field's value. +func (s *InstanceInformation) SetPingStatus(v string) *InstanceInformation { + s.PingStatus = &v + return s +} + +// SetPlatformName sets the PlatformName field's value. +func (s *InstanceInformation) SetPlatformName(v string) *InstanceInformation { + s.PlatformName = &v + return s +} + +// SetPlatformType sets the PlatformType field's value. +func (s *InstanceInformation) SetPlatformType(v string) *InstanceInformation { + s.PlatformType = &v + return s +} + +// SetPlatformVersion sets the PlatformVersion field's value. +func (s *InstanceInformation) SetPlatformVersion(v string) *InstanceInformation { + s.PlatformVersion = &v + return s +} + +// SetRegistrationDate sets the RegistrationDate field's value. +func (s *InstanceInformation) SetRegistrationDate(v time.Time) *InstanceInformation { + s.RegistrationDate = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *InstanceInformation) SetResourceType(v string) *InstanceInformation { + s.ResourceType = &v + return s +} + +// Describes a filter for a specific list of instances. +type InstanceInformationFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true" enum:"InstanceInformationFilterKey"` + + // The filter values. + // + // ValueSet is a required field + ValueSet []*string `locationName:"valueSet" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s InstanceInformationFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceInformationFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstanceInformationFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstanceInformationFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.ValueSet == nil { + invalidParams.Add(request.NewErrParamRequired("ValueSet")) + } + if s.ValueSet != nil && len(s.ValueSet) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ValueSet", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *InstanceInformationFilter) SetKey(v string) *InstanceInformationFilter { + s.Key = &v + return s +} + +// SetValueSet sets the ValueSet field's value. +func (s *InstanceInformationFilter) SetValueSet(v []*string) *InstanceInformationFilter { + s.ValueSet = v + return s +} + +// The filters to describe or get information about your managed instances. 
+type InstanceInformationStringFilter struct { + _ struct{} `type:"structure"` + + // The filter key name to describe your instances. For example: + // + // "InstanceIds"|"AgentVersion"|"PingStatus"|"PlatformTypes"|"ActivationIds"|"IamRole"|"ResourceType"|"AssociationStatus"|"Tag + // Key" + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The filter values. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s InstanceInformationStringFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceInformationStringFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstanceInformationStringFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstanceInformationStringFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *InstanceInformationStringFilter) SetKey(v string) *InstanceInformationStringFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *InstanceInformationStringFilter) SetValues(v []*string) *InstanceInformationStringFilter { + s.Values = v + return s +} + +// Defines the high-level patch compliance state for a managed instance, providing +// information about the number of installed, missing, not applicable, and failed +// patches along with metadata about the operation when this information was +// gathered for the instance. +type InstancePatchState struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline used to patch the instance. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` + + // The number of patches from the patch baseline that were attempted to be installed + // during the last patching operation, but failed to install. + FailedCount *int64 `type:"integer"` + + // The number of patches from the patch baseline that are installed on the instance. + InstalledCount *int64 `type:"integer"` + + // The number of patches not specified in the patch baseline that are installed + // on the instance. + InstalledOtherCount *int64 `type:"integer"` + + // The ID of the managed instance the high-level patch compliance information + // was collected for. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The number of patches from the patch baseline that are applicable for the + // instance but aren't currently installed. + MissingCount *int64 `type:"integer"` + + // The number of patches from the patch baseline that aren't applicable for + // the instance and hence aren't installed on the instance. + NotApplicableCount *int64 `type:"integer"` + + // The type of patching operation that was performed: SCAN (assess patch compliance + // state) or INSTALL (install missing patches). 
+ // + // Operation is a required field + Operation *string `type:"string" required:"true" enum:"PatchOperationType"` + + // The time the most recent patching operation completed on the instance. + // + // OperationEndTime is a required field + OperationEndTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // The time the most recent patching operation was started on the instance. + // + // OperationStartTime is a required field + OperationStartTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // Placeholder information. This field will always be empty in the current release + // of the service. + OwnerInformation *string `min:"1" type:"string"` + + // The name of the patch group the managed instance belongs to. + // + // PatchGroup is a required field + PatchGroup *string `min:"1" type:"string" required:"true"` + + // The ID of the patch baseline snapshot used during the patching operation + // when this compliance data was collected. + SnapshotId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s InstancePatchState) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstancePatchState) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *InstancePatchState) SetBaselineId(v string) *InstancePatchState { + s.BaselineId = &v + return s +} + +// SetFailedCount sets the FailedCount field's value. +func (s *InstancePatchState) SetFailedCount(v int64) *InstancePatchState { + s.FailedCount = &v + return s +} + +// SetInstalledCount sets the InstalledCount field's value. +func (s *InstancePatchState) SetInstalledCount(v int64) *InstancePatchState { + s.InstalledCount = &v + return s +} + +// SetInstalledOtherCount sets the InstalledOtherCount field's value. +func (s *InstancePatchState) SetInstalledOtherCount(v int64) *InstancePatchState { + s.InstalledOtherCount = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstancePatchState) SetInstanceId(v string) *InstancePatchState { + s.InstanceId = &v + return s +} + +// SetMissingCount sets the MissingCount field's value. +func (s *InstancePatchState) SetMissingCount(v int64) *InstancePatchState { + s.MissingCount = &v + return s +} + +// SetNotApplicableCount sets the NotApplicableCount field's value. +func (s *InstancePatchState) SetNotApplicableCount(v int64) *InstancePatchState { + s.NotApplicableCount = &v + return s +} + +// SetOperation sets the Operation field's value. +func (s *InstancePatchState) SetOperation(v string) *InstancePatchState { + s.Operation = &v + return s +} + +// SetOperationEndTime sets the OperationEndTime field's value. +func (s *InstancePatchState) SetOperationEndTime(v time.Time) *InstancePatchState { + s.OperationEndTime = &v + return s +} + +// SetOperationStartTime sets the OperationStartTime field's value. +func (s *InstancePatchState) SetOperationStartTime(v time.Time) *InstancePatchState { + s.OperationStartTime = &v + return s +} + +// SetOwnerInformation sets the OwnerInformation field's value. +func (s *InstancePatchState) SetOwnerInformation(v string) *InstancePatchState { + s.OwnerInformation = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *InstancePatchState) SetPatchGroup(v string) *InstancePatchState { + s.PatchGroup = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. 
+func (s *InstancePatchState) SetSnapshotId(v string) *InstancePatchState { + s.SnapshotId = &v + return s +} + +// Defines a filter used in DescribeInstancePatchStatesForPatchGroup used to +// scope down the information returned by the API. +type InstancePatchStateFilter struct { + _ struct{} `type:"structure"` + + // The key for the filter. Supported values are FailedCount, InstalledCount, + // InstalledOtherCount, MissingCount and NotApplicableCount. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The type of comparison that should be performed for the value: Equal, NotEqual, + // LessThan or GreaterThan. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"InstancePatchStateOperatorType"` + + // The value for the filter, must be an integer greater than or equal to 0. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s InstancePatchStateFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstancePatchStateFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstancePatchStateFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstancePatchStateFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *InstancePatchStateFilter) SetKey(v string) *InstancePatchStateFilter { + s.Key = &v + return s +} + +// SetType sets the Type field's value. +func (s *InstancePatchStateFilter) SetType(v string) *InstancePatchStateFilter { + s.Type = &v + return s +} + +// SetValues sets the Values field's value. +func (s *InstancePatchStateFilter) SetValues(v []*string) *InstancePatchStateFilter { + s.Values = v + return s +} + +// Specifies the inventory type and attribute for the aggregation execution. +type InventoryAggregator struct { + _ struct{} `type:"structure"` + + // Nested aggregators to further refine aggregation for an inventory type. + Aggregators []*InventoryAggregator `min:"1" type:"list"` + + // The inventory type and attribute name for aggregation. + Expression *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InventoryAggregator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryAggregator) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
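+//
+// For illustration only (a hypothetical sketch, not generated documentation):
+// an aggregator nested one level deep that satisfies the minimum-length checks
+// performed below, assuming the aws helper package for pointer construction:
+//
+//	agg := &InventoryAggregator{
+//		Expression: aws.String("AWS:InstanceInformation.PlatformType"),
+//		Aggregators: []*InventoryAggregator{{
+//			Expression: aws.String("AWS:InstanceInformation.PlatformName"),
+//		}},
+//	}
+//	// agg.Validate() returns nil for this input.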
+func (s *InventoryAggregator) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryAggregator"} + if s.Aggregators != nil && len(s.Aggregators) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Aggregators", 1)) + } + if s.Expression != nil && len(*s.Expression) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Expression", 1)) + } + if s.Aggregators != nil { + for i, v := range s.Aggregators { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Aggregators", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAggregators sets the Aggregators field's value. +func (s *InventoryAggregator) SetAggregators(v []*InventoryAggregator) *InventoryAggregator { + s.Aggregators = v + return s +} + +// SetExpression sets the Expression field's value. +func (s *InventoryAggregator) SetExpression(v string) *InventoryAggregator { + s.Expression = &v + return s +} + +// Status information returned by the DeleteInventory action. +type InventoryDeletionStatusItem struct { + _ struct{} `type:"structure"` + + // The deletion ID returned by the DeleteInventory action. + DeletionId *string `type:"string"` + + // The UTC timestamp when the delete operation started. + DeletionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Information about the delete operation. For more information about this summary, + // see Understanding the Delete Inventory Summary (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-delete.html#sysman-inventory-delete-summary). + DeletionSummary *InventoryDeletionSummary `type:"structure"` + + // The status of the operation. Possible values are InProgress and Complete. + LastStatus *string `type:"string" enum:"InventoryDeletionStatus"` + + // Information about the status. + LastStatusMessage *string `type:"string"` + + // The UTC timestamp of when the last status report. + LastStatusUpdateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The name of the inventory data type. + TypeName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InventoryDeletionStatusItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryDeletionStatusItem) GoString() string { + return s.String() +} + +// SetDeletionId sets the DeletionId field's value. +func (s *InventoryDeletionStatusItem) SetDeletionId(v string) *InventoryDeletionStatusItem { + s.DeletionId = &v + return s +} + +// SetDeletionStartTime sets the DeletionStartTime field's value. +func (s *InventoryDeletionStatusItem) SetDeletionStartTime(v time.Time) *InventoryDeletionStatusItem { + s.DeletionStartTime = &v + return s +} + +// SetDeletionSummary sets the DeletionSummary field's value. +func (s *InventoryDeletionStatusItem) SetDeletionSummary(v *InventoryDeletionSummary) *InventoryDeletionStatusItem { + s.DeletionSummary = v + return s +} + +// SetLastStatus sets the LastStatus field's value. +func (s *InventoryDeletionStatusItem) SetLastStatus(v string) *InventoryDeletionStatusItem { + s.LastStatus = &v + return s +} + +// SetLastStatusMessage sets the LastStatusMessage field's value. +func (s *InventoryDeletionStatusItem) SetLastStatusMessage(v string) *InventoryDeletionStatusItem { + s.LastStatusMessage = &v + return s +} + +// SetLastStatusUpdateTime sets the LastStatusUpdateTime field's value. 
+func (s *InventoryDeletionStatusItem) SetLastStatusUpdateTime(v time.Time) *InventoryDeletionStatusItem { + s.LastStatusUpdateTime = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *InventoryDeletionStatusItem) SetTypeName(v string) *InventoryDeletionStatusItem { + s.TypeName = &v + return s +} + +// Information about the delete operation. +type InventoryDeletionSummary struct { + _ struct{} `type:"structure"` + + // Remaining number of items to delete. + RemainingCount *int64 `type:"integer"` + + // A list of counts and versions for deleted items. + SummaryItems []*InventoryDeletionSummaryItem `type:"list"` + + // The total number of items to delete. This count does not change during the + // delete operation. + TotalCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s InventoryDeletionSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryDeletionSummary) GoString() string { + return s.String() +} + +// SetRemainingCount sets the RemainingCount field's value. +func (s *InventoryDeletionSummary) SetRemainingCount(v int64) *InventoryDeletionSummary { + s.RemainingCount = &v + return s +} + +// SetSummaryItems sets the SummaryItems field's value. +func (s *InventoryDeletionSummary) SetSummaryItems(v []*InventoryDeletionSummaryItem) *InventoryDeletionSummary { + s.SummaryItems = v + return s +} + +// SetTotalCount sets the TotalCount field's value. +func (s *InventoryDeletionSummary) SetTotalCount(v int64) *InventoryDeletionSummary { + s.TotalCount = &v + return s +} + +// Either a count, remaining count, or a version number in a delete inventory +// summary. +type InventoryDeletionSummaryItem struct { + _ struct{} `type:"structure"` + + // A count of the number of deleted items. + Count *int64 `type:"integer"` + + // The remaining number of items to delete. + RemainingCount *int64 `type:"integer"` + + // The inventory type version. + Version *string `type:"string"` +} + +// String returns the string representation +func (s InventoryDeletionSummaryItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryDeletionSummaryItem) GoString() string { + return s.String() +} + +// SetCount sets the Count field's value. +func (s *InventoryDeletionSummaryItem) SetCount(v int64) *InventoryDeletionSummaryItem { + s.Count = &v + return s +} + +// SetRemainingCount sets the RemainingCount field's value. +func (s *InventoryDeletionSummaryItem) SetRemainingCount(v int64) *InventoryDeletionSummaryItem { + s.RemainingCount = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *InventoryDeletionSummaryItem) SetVersion(v string) *InventoryDeletionSummaryItem { + s.Version = &v + return s +} + +// One or more filters. Use a filter to return a more specific list of results. +type InventoryFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter key. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The type of filter. Valid values include the following: "Equal"|"NotEqual"|"BeginWith"|"LessThan"|"GreaterThan" + Type *string `type:"string" enum:"InventoryQueryOperatorType"` + + // Inventory filter values. 
Example: inventory filter where instance IDs are + // specified as values Key=AWS:InstanceInformation.InstanceId,Values= i-a12b3c4d5e6g, + // i-1a2b3c4d5e6,Type=Equal + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s InventoryFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InventoryFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *InventoryFilter) SetKey(v string) *InventoryFilter { + s.Key = &v + return s +} + +// SetType sets the Type field's value. +func (s *InventoryFilter) SetType(v string) *InventoryFilter { + s.Type = &v + return s +} + +// SetValues sets the Values field's value. +func (s *InventoryFilter) SetValues(v []*string) *InventoryFilter { + s.Values = v + return s +} + +// Information collected from managed instances based on your inventory policy +// document +type InventoryItem struct { + _ struct{} `type:"structure"` + + // The time the inventory information was collected. + // + // CaptureTime is a required field + CaptureTime *string `type:"string" required:"true"` + + // The inventory data of the inventory type. + Content []map[string]*string `type:"list"` + + // MD5 hash of the inventory item type contents. The content hash is used to + // determine whether to update inventory information. The PutInventory API does + // not update the inventory item type contents if the MD5 hash has not changed + // since last update. + ContentHash *string `type:"string"` + + // A map of associated properties for a specified inventory type. For example, + // with this attribute, you can specify the ExecutionId, ExecutionType, ComplianceType + // properties of the AWS:ComplianceItem type. + Context map[string]*string `type:"map"` + + // The schema version for the inventory item. + // + // SchemaVersion is a required field + SchemaVersion *string `type:"string" required:"true"` + + // The name of the inventory type. Default inventory item type names start with + // AWS. Custom inventory type names will start with Custom. Default inventory + // item types include the following: AWS:AWSComponent, AWS:Application, AWS:InstanceInformation, + // AWS:Network, and AWS:WindowsUpdate. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s InventoryItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryItem) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *InventoryItem) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryItem"} + if s.CaptureTime == nil { + invalidParams.Add(request.NewErrParamRequired("CaptureTime")) + } + if s.SchemaVersion == nil { + invalidParams.Add(request.NewErrParamRequired("SchemaVersion")) + } + if s.TypeName == nil { + invalidParams.Add(request.NewErrParamRequired("TypeName")) + } + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCaptureTime sets the CaptureTime field's value. +func (s *InventoryItem) SetCaptureTime(v string) *InventoryItem { + s.CaptureTime = &v + return s +} + +// SetContent sets the Content field's value. +func (s *InventoryItem) SetContent(v []map[string]*string) *InventoryItem { + s.Content = v + return s +} + +// SetContentHash sets the ContentHash field's value. +func (s *InventoryItem) SetContentHash(v string) *InventoryItem { + s.ContentHash = &v + return s +} + +// SetContext sets the Context field's value. +func (s *InventoryItem) SetContext(v map[string]*string) *InventoryItem { + s.Context = v + return s +} + +// SetSchemaVersion sets the SchemaVersion field's value. +func (s *InventoryItem) SetSchemaVersion(v string) *InventoryItem { + s.SchemaVersion = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *InventoryItem) SetTypeName(v string) *InventoryItem { + s.TypeName = &v + return s +} + +// Attributes are the entries within the inventory item content. It contains +// name and value. +type InventoryItemAttribute struct { + _ struct{} `type:"structure"` + + // The data type of the inventory item attribute. + // + // DataType is a required field + DataType *string `type:"string" required:"true" enum:"InventoryAttributeDataType"` + + // Name of the inventory item attribute. + // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s InventoryItemAttribute) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryItemAttribute) GoString() string { + return s.String() +} + +// SetDataType sets the DataType field's value. +func (s *InventoryItemAttribute) SetDataType(v string) *InventoryItemAttribute { + s.DataType = &v + return s +} + +// SetName sets the Name field's value. +func (s *InventoryItemAttribute) SetName(v string) *InventoryItemAttribute { + s.Name = &v + return s +} + +// The inventory item schema definition. Users can use this to compose inventory +// query filters. +type InventoryItemSchema struct { + _ struct{} `type:"structure"` + + // The schema attributes for inventory. This contains data type and attribute + // name. + // + // Attributes is a required field + Attributes []*InventoryItemAttribute `min:"1" type:"list" required:"true"` + + // The alias name of the inventory type. The alias name is used for display + // purposes. + DisplayName *string `type:"string"` + + // The name of the inventory type. Default inventory item type names start with + // AWS. Custom inventory type names will start with Custom. Default inventory + // item types include the following: AWS:AWSComponent, AWS:Application, AWS:InstanceInformation, + // AWS:Network, and AWS:WindowsUpdate. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` + + // The schema version for the inventory item. 
+ Version *string `type:"string"` +} + +// String returns the string representation +func (s InventoryItemSchema) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryItemSchema) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *InventoryItemSchema) SetAttributes(v []*InventoryItemAttribute) *InventoryItemSchema { + s.Attributes = v + return s +} + +// SetDisplayName sets the DisplayName field's value. +func (s *InventoryItemSchema) SetDisplayName(v string) *InventoryItemSchema { + s.DisplayName = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *InventoryItemSchema) SetTypeName(v string) *InventoryItemSchema { + s.TypeName = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *InventoryItemSchema) SetVersion(v string) *InventoryItemSchema { + s.Version = &v + return s +} + +// Inventory query results. +type InventoryResultEntity struct { + _ struct{} `type:"structure"` + + // The data section in the inventory result entity JSON. + Data map[string]*InventoryResultItem `type:"map"` + + // ID of the inventory result entity. For example, for managed instance inventory + // the result will be the managed instance ID. For EC2 instance inventory, the + // result will be the instance ID. + Id *string `type:"string"` +} + +// String returns the string representation +func (s InventoryResultEntity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryResultEntity) GoString() string { + return s.String() +} + +// SetData sets the Data field's value. +func (s *InventoryResultEntity) SetData(v map[string]*InventoryResultItem) *InventoryResultEntity { + s.Data = v + return s +} + +// SetId sets the Id field's value. +func (s *InventoryResultEntity) SetId(v string) *InventoryResultEntity { + s.Id = &v + return s +} + +// The inventory result item. +type InventoryResultItem struct { + _ struct{} `type:"structure"` + + // The time inventory item data was captured. + CaptureTime *string `type:"string"` + + // Contains all the inventory data of the item type. Results include attribute + // names and values. + // + // Content is a required field + Content []map[string]*string `type:"list" required:"true"` + + // MD5 hash of the inventory item type contents. The content hash is used to + // determine whether to update inventory information. The PutInventory API does + // not update the inventory item type contents if the MD5 hash has not changed + // since last update. + ContentHash *string `type:"string"` + + // The schema version for the inventory result item/ + // + // SchemaVersion is a required field + SchemaVersion *string `type:"string" required:"true"` + + // The name of the inventory result item type. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s InventoryResultItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryResultItem) GoString() string { + return s.String() +} + +// SetCaptureTime sets the CaptureTime field's value. +func (s *InventoryResultItem) SetCaptureTime(v string) *InventoryResultItem { + s.CaptureTime = &v + return s +} + +// SetContent sets the Content field's value. 
+func (s *InventoryResultItem) SetContent(v []map[string]*string) *InventoryResultItem { + s.Content = v + return s +} + +// SetContentHash sets the ContentHash field's value. +func (s *InventoryResultItem) SetContentHash(v string) *InventoryResultItem { + s.ContentHash = &v + return s +} + +// SetSchemaVersion sets the SchemaVersion field's value. +func (s *InventoryResultItem) SetSchemaVersion(v string) *InventoryResultItem { + s.SchemaVersion = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *InventoryResultItem) SetTypeName(v string) *InventoryResultItem { + s.TypeName = &v + return s +} + +type ListAssociationVersionsInput struct { + _ struct{} `type:"structure"` + + // The association ID for which you want to view all versions. + // + // AssociationId is a required field + AssociationId *string `type:"string" required:"true"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListAssociationVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAssociationVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListAssociationVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAssociationVersionsInput"} + if s.AssociationId == nil { + invalidParams.Add(request.NewErrParamRequired("AssociationId")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationId sets the AssociationId field's value. +func (s *ListAssociationVersionsInput) SetAssociationId(v string) *ListAssociationVersionsInput { + s.AssociationId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListAssociationVersionsInput) SetMaxResults(v int64) *ListAssociationVersionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListAssociationVersionsInput) SetNextToken(v string) *ListAssociationVersionsInput { + s.NextToken = &v + return s +} + +type ListAssociationVersionsOutput struct { + _ struct{} `type:"structure"` + + // Information about all versions of the association for the specified association + // ID. + AssociationVersions []*AssociationVersionInfo `min:"1" type:"list"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListAssociationVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAssociationVersionsOutput) GoString() string { + return s.String() +} + +// SetAssociationVersions sets the AssociationVersions field's value. +func (s *ListAssociationVersionsOutput) SetAssociationVersions(v []*AssociationVersionInfo) *ListAssociationVersionsOutput { + s.AssociationVersions = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *ListAssociationVersionsOutput) SetNextToken(v string) *ListAssociationVersionsOutput { + s.NextToken = &v + return s +} + +type ListAssociationsInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of results. + AssociationFilterList []*AssociationFilter `min:"1" type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListAssociationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAssociationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListAssociationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAssociationsInput"} + if s.AssociationFilterList != nil && len(s.AssociationFilterList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AssociationFilterList", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.AssociationFilterList != nil { + for i, v := range s.AssociationFilterList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AssociationFilterList", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationFilterList sets the AssociationFilterList field's value. +func (s *ListAssociationsInput) SetAssociationFilterList(v []*AssociationFilter) *ListAssociationsInput { + s.AssociationFilterList = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListAssociationsInput) SetMaxResults(v int64) *ListAssociationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListAssociationsInput) SetNextToken(v string) *ListAssociationsInput { + s.NextToken = &v + return s +} + +type ListAssociationsOutput struct { + _ struct{} `type:"structure"` + + // The associations. + Associations []*Association `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListAssociationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAssociationsOutput) GoString() string { + return s.String() +} + +// SetAssociations sets the Associations field's value. +func (s *ListAssociationsOutput) SetAssociations(v []*Association) *ListAssociationsOutput { + s.Associations = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListAssociationsOutput) SetNextToken(v string) *ListAssociationsOutput { + s.NextToken = &v + return s +} + +type ListCommandInvocationsInput struct { + _ struct{} `type:"structure"` + + // (Optional) The invocations for a specific command ID. 
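+	//
+	// For illustration only (a hypothetical sketch, assuming an initialized
+	// *SSM client named svc and a 36-character command ID in cmdID):
+	//
+	//	out, err := svc.ListCommandInvocations(&ListCommandInvocationsInput{
+	//		CommandId: aws.String(cmdID),
+	//		Details:   aws.Bool(true),
+	//	})
+	//	if err != nil {
+	//		// handle the error
+	//	}
+	//	_ = out // out.CommandInvocations holds the per-instance results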
+ CommandId *string `min:"36" type:"string"` + + // (Optional) If set this returns the response of the command executions and + // any command output. By default this is set to False. + Details *bool `type:"boolean"` + + // (Optional) One or more filters. Use a filter to return a more specific list + // of results. + Filters []*CommandFilter `min:"1" type:"list"` + + // (Optional) The command execution details for a specific instance ID. + InstanceId *string `type:"string"` + + // (Optional) The maximum number of items to return for this call. The call + // also returns a token that you can specify in a subsequent call to get the + // next set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // (Optional) The token for the next set of items to return. (You received this + // token from a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListCommandInvocationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListCommandInvocationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListCommandInvocationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCommandInvocationsInput"} + if s.CommandId != nil && len(*s.CommandId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("CommandId", 36)) + } + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCommandId sets the CommandId field's value. +func (s *ListCommandInvocationsInput) SetCommandId(v string) *ListCommandInvocationsInput { + s.CommandId = &v + return s +} + +// SetDetails sets the Details field's value. +func (s *ListCommandInvocationsInput) SetDetails(v bool) *ListCommandInvocationsInput { + s.Details = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *ListCommandInvocationsInput) SetFilters(v []*CommandFilter) *ListCommandInvocationsInput { + s.Filters = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ListCommandInvocationsInput) SetInstanceId(v string) *ListCommandInvocationsInput { + s.InstanceId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListCommandInvocationsInput) SetMaxResults(v int64) *ListCommandInvocationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListCommandInvocationsInput) SetNextToken(v string) *ListCommandInvocationsInput { + s.NextToken = &v + return s +} + +type ListCommandInvocationsOutput struct { + _ struct{} `type:"structure"` + + // (Optional) A list of all invocations. + CommandInvocations []*CommandInvocation `type:"list"` + + // (Optional) The token for the next set of items to return. (You received this + // token from a previous call.) 
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListCommandInvocationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListCommandInvocationsOutput) GoString() string { + return s.String() +} + +// SetCommandInvocations sets the CommandInvocations field's value. +func (s *ListCommandInvocationsOutput) SetCommandInvocations(v []*CommandInvocation) *ListCommandInvocationsOutput { + s.CommandInvocations = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListCommandInvocationsOutput) SetNextToken(v string) *ListCommandInvocationsOutput { + s.NextToken = &v + return s +} + +type ListCommandsInput struct { + _ struct{} `type:"structure"` + + // (Optional) If provided, lists only the specified command. + CommandId *string `min:"36" type:"string"` + + // (Optional) One or more filters. Use a filter to return a more specific list + // of results. + Filters []*CommandFilter `min:"1" type:"list"` + + // (Optional) Lists commands issued against this instance ID. + InstanceId *string `type:"string"` + + // (Optional) The maximum number of items to return for this call. The call + // also returns a token that you can specify in a subsequent call to get the + // next set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // (Optional) The token for the next set of items to return. (You received this + // token from a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListCommandsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListCommandsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListCommandsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCommandsInput"} + if s.CommandId != nil && len(*s.CommandId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("CommandId", 36)) + } + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCommandId sets the CommandId field's value. +func (s *ListCommandsInput) SetCommandId(v string) *ListCommandsInput { + s.CommandId = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *ListCommandsInput) SetFilters(v []*CommandFilter) *ListCommandsInput { + s.Filters = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ListCommandsInput) SetInstanceId(v string) *ListCommandsInput { + s.InstanceId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListCommandsInput) SetMaxResults(v int64) *ListCommandsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *ListCommandsInput) SetNextToken(v string) *ListCommandsInput { + s.NextToken = &v + return s +} + +type ListCommandsOutput struct { + _ struct{} `type:"structure"` + + // (Optional) The list of commands requested by the user. + Commands []*Command `type:"list"` + + // (Optional) The token for the next set of items to return. (You received this + // token from a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListCommandsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListCommandsOutput) GoString() string { + return s.String() +} + +// SetCommands sets the Commands field's value. +func (s *ListCommandsOutput) SetCommands(v []*Command) *ListCommandsOutput { + s.Commands = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListCommandsOutput) SetNextToken(v string) *ListCommandsOutput { + s.NextToken = &v + return s +} + +type ListComplianceItemsInput struct { + _ struct{} `type:"structure"` + + // One or more compliance filters. Use a filter to return a more specific list + // of results. + Filters []*ComplianceStringFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` + + // The ID for the resources from which to get compliance information. Currently, + // you can only specify one resource ID. + ResourceIds []*string `min:"1" type:"list"` + + // The type of resource from which to get compliance information. Currently, + // the only supported resource type is ManagedInstance. + ResourceTypes []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s ListComplianceItemsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListComplianceItemsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListComplianceItemsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListComplianceItemsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ResourceIds != nil && len(s.ResourceIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceIds", 1)) + } + if s.ResourceTypes != nil && len(s.ResourceTypes) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceTypes", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListComplianceItemsInput) SetFilters(v []*ComplianceStringFilter) *ListComplianceItemsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListComplianceItemsInput) SetMaxResults(v int64) *ListComplianceItemsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *ListComplianceItemsInput) SetNextToken(v string) *ListComplianceItemsInput { + s.NextToken = &v + return s +} + +// SetResourceIds sets the ResourceIds field's value. +func (s *ListComplianceItemsInput) SetResourceIds(v []*string) *ListComplianceItemsInput { + s.ResourceIds = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *ListComplianceItemsInput) SetResourceTypes(v []*string) *ListComplianceItemsInput { + s.ResourceTypes = v + return s +} + +type ListComplianceItemsOutput struct { + _ struct{} `type:"structure"` + + // A list of compliance information for the specified resource ID. + ComplianceItems []*ComplianceItem `type:"list"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListComplianceItemsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListComplianceItemsOutput) GoString() string { + return s.String() +} + +// SetComplianceItems sets the ComplianceItems field's value. +func (s *ListComplianceItemsOutput) SetComplianceItems(v []*ComplianceItem) *ListComplianceItemsOutput { + s.ComplianceItems = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListComplianceItemsOutput) SetNextToken(v string) *ListComplianceItemsOutput { + s.NextToken = &v + return s +} + +type ListComplianceSummariesInput struct { + _ struct{} `type:"structure"` + + // One or more compliance or inventory filters. Use a filter to return a more + // specific list of results. + Filters []*ComplianceStringFilter `type:"list"` + + // The maximum number of items to return for this call. Currently, you can specify + // null or 50. The call also returns a token that you can specify in a subsequent + // call to get the next set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListComplianceSummariesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListComplianceSummariesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListComplianceSummariesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListComplianceSummariesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListComplianceSummariesInput) SetFilters(v []*ComplianceStringFilter) *ListComplianceSummariesInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListComplianceSummariesInput) SetMaxResults(v int64) *ListComplianceSummariesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *ListComplianceSummariesInput) SetNextToken(v string) *ListComplianceSummariesInput { + s.NextToken = &v + return s +} + +type ListComplianceSummariesOutput struct { + _ struct{} `type:"structure"` + + // A list of compliant and non-compliant summary counts based on compliance + // types. For example, this call returns State Manager associations, patches, + // or custom compliance types according to the filter criteria that you specified. + ComplianceSummaryItems []*ComplianceSummaryItem `type:"list"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListComplianceSummariesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListComplianceSummariesOutput) GoString() string { + return s.String() +} + +// SetComplianceSummaryItems sets the ComplianceSummaryItems field's value. +func (s *ListComplianceSummariesOutput) SetComplianceSummaryItems(v []*ComplianceSummaryItem) *ListComplianceSummariesOutput { + s.ComplianceSummaryItems = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListComplianceSummariesOutput) SetNextToken(v string) *ListComplianceSummariesOutput { + s.NextToken = &v + return s +} + +type ListDocumentVersionsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The name of the document about which you want version information. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListDocumentVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListDocumentVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListDocumentVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListDocumentVersionsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListDocumentVersionsInput) SetMaxResults(v int64) *ListDocumentVersionsInput { + s.MaxResults = &v + return s +} + +// SetName sets the Name field's value. +func (s *ListDocumentVersionsInput) SetName(v string) *ListDocumentVersionsInput { + s.Name = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListDocumentVersionsInput) SetNextToken(v string) *ListDocumentVersionsInput { + s.NextToken = &v + return s +} + +type ListDocumentVersionsOutput struct { + _ struct{} `type:"structure"` + + // The document versions. + DocumentVersions []*DocumentVersionInfo `min:"1" type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. 
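+	//
+	// For illustration only (a hypothetical pagination sketch, assuming an
+	// initialized *SSM client named svc; "MyDocument" is a placeholder name):
+	//
+	//	in := &ListDocumentVersionsInput{Name: aws.String("MyDocument")}
+	//	for {
+	//		out, err := svc.ListDocumentVersions(in)
+	//		if err != nil {
+	//			break // handle the error
+	//		}
+	//		// ... use out.DocumentVersions ...
+	//		if aws.StringValue(out.NextToken) == "" {
+	//			break
+	//		}
+	//		in.NextToken = out.NextToken
+	//	}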
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListDocumentVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListDocumentVersionsOutput) GoString() string { + return s.String() +} + +// SetDocumentVersions sets the DocumentVersions field's value. +func (s *ListDocumentVersionsOutput) SetDocumentVersions(v []*DocumentVersionInfo) *ListDocumentVersionsOutput { + s.DocumentVersions = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListDocumentVersionsOutput) SetNextToken(v string) *ListDocumentVersionsOutput { + s.NextToken = &v + return s +} + +type ListDocumentsInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of results. + DocumentFilterList []*DocumentFilter `min:"1" type:"list"` + + // One or more filters. Use a filter to return a more specific list of results. + Filters []*DocumentKeyValuesFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListDocumentsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListDocumentsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListDocumentsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListDocumentsInput"} + if s.DocumentFilterList != nil && len(s.DocumentFilterList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DocumentFilterList", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.DocumentFilterList != nil { + for i, v := range s.DocumentFilterList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "DocumentFilterList", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDocumentFilterList sets the DocumentFilterList field's value. +func (s *ListDocumentsInput) SetDocumentFilterList(v []*DocumentFilter) *ListDocumentsInput { + s.DocumentFilterList = v + return s +} + +// SetFilters sets the Filters field's value. +func (s *ListDocumentsInput) SetFilters(v []*DocumentKeyValuesFilter) *ListDocumentsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListDocumentsInput) SetMaxResults(v int64) *ListDocumentsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListDocumentsInput) SetNextToken(v string) *ListDocumentsInput { + s.NextToken = &v + return s +} + +type ListDocumentsOutput struct { + _ struct{} `type:"structure"` + + // The names of the Systems Manager documents. 
+ DocumentIdentifiers []*DocumentIdentifier `type:"list"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListDocumentsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListDocumentsOutput) GoString() string { + return s.String() +} + +// SetDocumentIdentifiers sets the DocumentIdentifiers field's value. +func (s *ListDocumentsOutput) SetDocumentIdentifiers(v []*DocumentIdentifier) *ListDocumentsOutput { + s.DocumentIdentifiers = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListDocumentsOutput) SetNextToken(v string) *ListDocumentsOutput { + s.NextToken = &v + return s +} + +type ListInventoryEntriesInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of results. + Filters []*InventoryFilter `min:"1" type:"list"` + + // The instance ID for which you want inventory information. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The type of inventory item for which you want information. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListInventoryEntriesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInventoryEntriesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListInventoryEntriesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInventoryEntriesInput"} + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.TypeName == nil { + invalidParams.Add(request.NewErrParamRequired("TypeName")) + } + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListInventoryEntriesInput) SetFilters(v []*InventoryFilter) *ListInventoryEntriesInput { + s.Filters = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ListInventoryEntriesInput) SetInstanceId(v string) *ListInventoryEntriesInput { + s.InstanceId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
+func (s *ListInventoryEntriesInput) SetMaxResults(v int64) *ListInventoryEntriesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListInventoryEntriesInput) SetNextToken(v string) *ListInventoryEntriesInput { + s.NextToken = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *ListInventoryEntriesInput) SetTypeName(v string) *ListInventoryEntriesInput { + s.TypeName = &v + return s +} + +type ListInventoryEntriesOutput struct { + _ struct{} `type:"structure"` + + // The time that inventory information was collected for the instance(s). + CaptureTime *string `type:"string"` + + // A list of inventory items on the instance(s). + Entries []map[string]*string `type:"list"` + + // The instance ID targeted by the request to query inventory information. + InstanceId *string `type:"string"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // The inventory schema version used by the instance(s). + SchemaVersion *string `type:"string"` + + // The type of inventory item returned by the request. + TypeName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListInventoryEntriesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInventoryEntriesOutput) GoString() string { + return s.String() +} + +// SetCaptureTime sets the CaptureTime field's value. +func (s *ListInventoryEntriesOutput) SetCaptureTime(v string) *ListInventoryEntriesOutput { + s.CaptureTime = &v + return s +} + +// SetEntries sets the Entries field's value. +func (s *ListInventoryEntriesOutput) SetEntries(v []map[string]*string) *ListInventoryEntriesOutput { + s.Entries = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ListInventoryEntriesOutput) SetInstanceId(v string) *ListInventoryEntriesOutput { + s.InstanceId = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListInventoryEntriesOutput) SetNextToken(v string) *ListInventoryEntriesOutput { + s.NextToken = &v + return s +} + +// SetSchemaVersion sets the SchemaVersion field's value. +func (s *ListInventoryEntriesOutput) SetSchemaVersion(v string) *ListInventoryEntriesOutput { + s.SchemaVersion = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *ListInventoryEntriesOutput) SetTypeName(v string) *ListInventoryEntriesOutput { + s.TypeName = &v + return s +} + +type ListResourceComplianceSummariesInput struct { + _ struct{} `type:"structure"` + + // One or more filters. Use a filter to return a more specific list of results. + Filters []*ComplianceStringFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListResourceComplianceSummariesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListResourceComplianceSummariesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListResourceComplianceSummariesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListResourceComplianceSummariesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListResourceComplianceSummariesInput) SetFilters(v []*ComplianceStringFilter) *ListResourceComplianceSummariesInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListResourceComplianceSummariesInput) SetMaxResults(v int64) *ListResourceComplianceSummariesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListResourceComplianceSummariesInput) SetNextToken(v string) *ListResourceComplianceSummariesInput { + s.NextToken = &v + return s +} + +type ListResourceComplianceSummariesOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` + + // A summary count for specified or targeted managed instances. Summary count + // includes information about compliant and non-compliant State Manager associations, + // patch status, or custom items according to the filter criteria that you specify. + ResourceComplianceSummaryItems []*ResourceComplianceSummaryItem `type:"list"` +} + +// String returns the string representation +func (s ListResourceComplianceSummariesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListResourceComplianceSummariesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListResourceComplianceSummariesOutput) SetNextToken(v string) *ListResourceComplianceSummariesOutput { + s.NextToken = &v + return s +} + +// SetResourceComplianceSummaryItems sets the ResourceComplianceSummaryItems field's value. +func (s *ListResourceComplianceSummariesOutput) SetResourceComplianceSummaryItems(v []*ResourceComplianceSummaryItem) *ListResourceComplianceSummariesOutput { + s.ResourceComplianceSummaryItems = v + return s +} + +type ListResourceDataSyncInput struct { + _ struct{} `type:"structure"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListResourceDataSyncInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListResourceDataSyncInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListResourceDataSyncInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListResourceDataSyncInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListResourceDataSyncInput) SetMaxResults(v int64) *ListResourceDataSyncInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListResourceDataSyncInput) SetNextToken(v string) *ListResourceDataSyncInput { + s.NextToken = &v + return s +} + +type ListResourceDataSyncOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` + + // A list of your current Resource Data Sync configurations and their statuses. + ResourceDataSyncItems []*ResourceDataSyncItem `type:"list"` +} + +// String returns the string representation +func (s ListResourceDataSyncOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListResourceDataSyncOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListResourceDataSyncOutput) SetNextToken(v string) *ListResourceDataSyncOutput { + s.NextToken = &v + return s +} + +// SetResourceDataSyncItems sets the ResourceDataSyncItems field's value. +func (s *ListResourceDataSyncOutput) SetResourceDataSyncItems(v []*ResourceDataSyncItem) *ListResourceDataSyncOutput { + s.ResourceDataSyncItems = v + return s +} + +type ListTagsForResourceInput struct { + _ struct{} `type:"structure"` + + // The resource ID for which you want to see a list of tags. + // + // ResourceId is a required field + ResourceId *string `type:"string" required:"true"` + + // Returns a list of tags for a specific resource type. + // + // ResourceType is a required field + ResourceType *string `type:"string" required:"true" enum:"ResourceTypeForTagging"` +} + +// String returns the string representation +func (s ListTagsForResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *ListTagsForResourceInput) SetResourceId(v string) *ListTagsForResourceInput { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ListTagsForResourceInput) SetResourceType(v string) *ListTagsForResourceInput { + s.ResourceType = &v + return s +} + +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure"` + + // A list of tags. 
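+ //
+ // Example (illustrative sketch, not generated from the API model): reading the
+ // tags on a Parameter Store parameter, assuming an *ssm.SSM client named svc
+ // and a placeholder parameter name:
+ //
+ //   out, err := svc.ListTagsForResource(&ssm.ListTagsForResourceInput{
+ //       ResourceId:   aws.String("example-parameter"),
+ //       ResourceType: aws.String("Parameter"),
+ //   })
+ //   if err != nil {
+ //       return err
+ //   }
+ //   for _, tag := range out.TagList {
+ //       fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
+ //   }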
+ TagList []*Tag `type:"list"` +} + +// String returns the string representation +func (s ListTagsForResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceOutput) GoString() string { + return s.String() +} + +// SetTagList sets the TagList field's value. +func (s *ListTagsForResourceOutput) SetTagList(v []*Tag) *ListTagsForResourceOutput { + s.TagList = v + return s +} + +// Information about an Amazon S3 bucket to write instance-level logs to. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +type LoggingInfo struct { + _ struct{} `type:"structure"` + + // The name of an Amazon S3 bucket where execution logs are stored . + // + // S3BucketName is a required field + S3BucketName *string `min:"3" type:"string" required:"true"` + + // (Optional) The Amazon S3 bucket subfolder. + S3KeyPrefix *string `type:"string"` + + // The region where the Amazon S3 bucket is located. + // + // S3Region is a required field + S3Region *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s LoggingInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoggingInfo) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LoggingInfo) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LoggingInfo"} + if s.S3BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("S3BucketName")) + } + if s.S3BucketName != nil && len(*s.S3BucketName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("S3BucketName", 3)) + } + if s.S3Region == nil { + invalidParams.Add(request.NewErrParamRequired("S3Region")) + } + if s.S3Region != nil && len(*s.S3Region) < 3 { + invalidParams.Add(request.NewErrParamMinLen("S3Region", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3BucketName sets the S3BucketName field's value. +func (s *LoggingInfo) SetS3BucketName(v string) *LoggingInfo { + s.S3BucketName = &v + return s +} + +// SetS3KeyPrefix sets the S3KeyPrefix field's value. +func (s *LoggingInfo) SetS3KeyPrefix(v string) *LoggingInfo { + s.S3KeyPrefix = &v + return s +} + +// SetS3Region sets the S3Region field's value. +func (s *LoggingInfo) SetS3Region(v string) *LoggingInfo { + s.S3Region = &v + return s +} + +// The parameters for an AUTOMATION task type. +type MaintenanceWindowAutomationParameters struct { + _ struct{} `type:"structure"` + + // The version of an Automation document to use during task execution. + DocumentVersion *string `type:"string"` + + // The parameters for the AUTOMATION task. + // + // For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow + // and UpdateMaintenanceWindowTask. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. 
For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // For AUTOMATION task types, Systems Manager ignores any values specified for + // these parameters. + Parameters map[string][]*string `min:"1" type:"map"` +} + +// String returns the string representation +func (s MaintenanceWindowAutomationParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowAutomationParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MaintenanceWindowAutomationParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MaintenanceWindowAutomationParameters"} + if s.Parameters != nil && len(s.Parameters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Parameters", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *MaintenanceWindowAutomationParameters) SetDocumentVersion(v string) *MaintenanceWindowAutomationParameters { + s.DocumentVersion = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *MaintenanceWindowAutomationParameters) SetParameters(v map[string][]*string) *MaintenanceWindowAutomationParameters { + s.Parameters = v + return s +} + +// Describes the information about an execution of a Maintenance Window. +type MaintenanceWindowExecution struct { + _ struct{} `type:"structure"` + + // The time the execution finished. + EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time the execution started. + StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The status of the execution. + Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` + + // The details explaining the Status. Only available for certain status values. + StatusDetails *string `type:"string"` + + // The ID of the Maintenance Window execution. + WindowExecutionId *string `min:"36" type:"string"` + + // The ID of the Maintenance Window. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowExecution) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowExecution) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *MaintenanceWindowExecution) SetEndTime(v time.Time) *MaintenanceWindowExecution { + s.EndTime = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *MaintenanceWindowExecution) SetStartTime(v time.Time) *MaintenanceWindowExecution { + s.StartTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *MaintenanceWindowExecution) SetStatus(v string) *MaintenanceWindowExecution { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. 
+func (s *MaintenanceWindowExecution) SetStatusDetails(v string) *MaintenanceWindowExecution { + s.StatusDetails = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *MaintenanceWindowExecution) SetWindowExecutionId(v string) *MaintenanceWindowExecution { + s.WindowExecutionId = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *MaintenanceWindowExecution) SetWindowId(v string) *MaintenanceWindowExecution { + s.WindowId = &v + return s +} + +// Information about a task execution performed as part of a Maintenance Window +// execution. +type MaintenanceWindowExecutionTaskIdentity struct { + _ struct{} `type:"structure"` + + // The time the task execution finished. + EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time the task execution started. + StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The status of the task execution. + Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` + + // The details explaining the status of the task execution. Only available for + // certain status values. + StatusDetails *string `type:"string"` + + // The ARN of the executed task. + TaskArn *string `min:"1" type:"string"` + + // The ID of the specific task execution in the Maintenance Window execution. + TaskExecutionId *string `min:"36" type:"string"` + + // The type of executed task. + TaskType *string `type:"string" enum:"MaintenanceWindowTaskType"` + + // The ID of the Maintenance Window execution that ran the task. + WindowExecutionId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowExecutionTaskIdentity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowExecutionTaskIdentity) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetEndTime(v time.Time) *MaintenanceWindowExecutionTaskIdentity { + s.EndTime = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetStartTime(v time.Time) *MaintenanceWindowExecutionTaskIdentity { + s.StartTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetStatus(v string) *MaintenanceWindowExecutionTaskIdentity { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetStatusDetails(v string) *MaintenanceWindowExecutionTaskIdentity { + s.StatusDetails = &v + return s +} + +// SetTaskArn sets the TaskArn field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetTaskArn(v string) *MaintenanceWindowExecutionTaskIdentity { + s.TaskArn = &v + return s +} + +// SetTaskExecutionId sets the TaskExecutionId field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetTaskExecutionId(v string) *MaintenanceWindowExecutionTaskIdentity { + s.TaskExecutionId = &v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *MaintenanceWindowExecutionTaskIdentity) SetTaskType(v string) *MaintenanceWindowExecutionTaskIdentity { + s.TaskType = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. 
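+//
+// Example (illustrative sketch, not generated from the API model): listing the
+// most recent executions of a Maintenance Window, assuming an *ssm.SSM client
+// named svc and a placeholder window ID:
+//
+//   out, err := svc.DescribeMaintenanceWindowExecutions(&ssm.DescribeMaintenanceWindowExecutionsInput{
+//       WindowId:   aws.String("mw-0123456789abcdef0"),
+//       MaxResults: aws.Int64(10),
+//   })
+//   if err != nil {
+//       return err
+//   }
+//   for _, exec := range out.WindowExecutions {
+//       fmt.Println(aws.StringValue(exec.WindowExecutionId), aws.StringValue(exec.Status))
+//   }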
+func (s *MaintenanceWindowExecutionTaskIdentity) SetWindowExecutionId(v string) *MaintenanceWindowExecutionTaskIdentity { + s.WindowExecutionId = &v + return s +} + +// Describes the information about a task invocation for a particular target +// as part of a task execution performed as part of a Maintenance Window execution. +type MaintenanceWindowExecutionTaskInvocationIdentity struct { + _ struct{} `type:"structure"` + + // The time the invocation finished. + EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The ID of the action performed in the service that actually handled the task + // invocation. If the task type is RUN_COMMAND, this value is the command ID. + ExecutionId *string `type:"string"` + + // The ID of the task invocation. + InvocationId *string `min:"36" type:"string"` + + // User-provided value that was specified when the target was registered with + // the Maintenance Window. This was also included in any CloudWatch events raised + // during the task invocation. + OwnerInformation *string `min:"1" type:"string"` + + // The parameters that were provided for the invocation when it was executed. + Parameters *string `type:"string"` + + // The time the invocation started. + StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The status of the task invocation. + Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` + + // The details explaining the status of the task invocation. Only available + // for certain Status values. + StatusDetails *string `type:"string"` + + // The ID of the specific task execution in the Maintenance Window execution. + TaskExecutionId *string `min:"36" type:"string"` + + // The task type. + TaskType *string `type:"string" enum:"MaintenanceWindowTaskType"` + + // The ID of the Maintenance Window execution that ran the task. + WindowExecutionId *string `min:"36" type:"string"` + + // The ID of the target definition in this Maintenance Window the invocation + // was performed for. + WindowTargetId *string `type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowExecutionTaskInvocationIdentity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowExecutionTaskInvocationIdentity) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetEndTime(v time.Time) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.EndTime = &v + return s +} + +// SetExecutionId sets the ExecutionId field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetExecutionId(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.ExecutionId = &v + return s +} + +// SetInvocationId sets the InvocationId field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetInvocationId(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.InvocationId = &v + return s +} + +// SetOwnerInformation sets the OwnerInformation field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetOwnerInformation(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.OwnerInformation = &v + return s +} + +// SetParameters sets the Parameters field's value. 
+func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetParameters(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.Parameters = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetStartTime(v time.Time) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.StartTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetStatus(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.Status = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetStatusDetails(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.StatusDetails = &v + return s +} + +// SetTaskExecutionId sets the TaskExecutionId field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetTaskExecutionId(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.TaskExecutionId = &v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetTaskType(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.TaskType = &v + return s +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetWindowExecutionId(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.WindowExecutionId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetWindowTargetId(v string) *MaintenanceWindowExecutionTaskInvocationIdentity { + s.WindowTargetId = &v + return s +} + +// Filter used in the request. +type MaintenanceWindowFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + Key *string `min:"1" type:"string"` + + // The filter values. + Values []*string `type:"list"` +} + +// String returns the string representation +func (s MaintenanceWindowFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MaintenanceWindowFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MaintenanceWindowFilter"} + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *MaintenanceWindowFilter) SetKey(v string) *MaintenanceWindowFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *MaintenanceWindowFilter) SetValues(v []*string) *MaintenanceWindowFilter { + s.Values = v + return s +} + +// Information about the Maintenance Window. +type MaintenanceWindowIdentity struct { + _ struct{} `type:"structure"` + + // The number of hours before the end of the Maintenance Window that Systems + // Manager stops scheduling new tasks for execution. + Cutoff *int64 `type:"integer"` + + // A description of the Maintenance Window. + Description *string `min:"1" type:"string"` + + // The duration of the Maintenance Window in hours. + Duration *int64 `min:"1" type:"integer"` + + // Whether the Maintenance Window is enabled. 
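+ //
+ // Example (illustrative sketch, not generated from the API model): listing only
+ // enabled Maintenance Windows with a MaintenanceWindowFilter, assuming an
+ // *ssm.SSM client named svc:
+ //
+ //   out, err := svc.DescribeMaintenanceWindows(&ssm.DescribeMaintenanceWindowsInput{
+ //       Filters: []*ssm.MaintenanceWindowFilter{{
+ //           Key:    aws.String("Enabled"),
+ //           Values: []*string{aws.String("true")},
+ //       }},
+ //   })
+ //   if err != nil {
+ //       return err
+ //   }
+ //   for _, w := range out.WindowIdentities {
+ //       fmt.Println(aws.StringValue(w.WindowId), aws.StringValue(w.Name))
+ //   }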
+ Enabled *bool `type:"boolean"` + + // The name of the Maintenance Window. + Name *string `min:"3" type:"string"` + + // The ID of the Maintenance Window. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowIdentity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowIdentity) GoString() string { + return s.String() +} + +// SetCutoff sets the Cutoff field's value. +func (s *MaintenanceWindowIdentity) SetCutoff(v int64) *MaintenanceWindowIdentity { + s.Cutoff = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *MaintenanceWindowIdentity) SetDescription(v string) *MaintenanceWindowIdentity { + s.Description = &v + return s +} + +// SetDuration sets the Duration field's value. +func (s *MaintenanceWindowIdentity) SetDuration(v int64) *MaintenanceWindowIdentity { + s.Duration = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *MaintenanceWindowIdentity) SetEnabled(v bool) *MaintenanceWindowIdentity { + s.Enabled = &v + return s +} + +// SetName sets the Name field's value. +func (s *MaintenanceWindowIdentity) SetName(v string) *MaintenanceWindowIdentity { + s.Name = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *MaintenanceWindowIdentity) SetWindowId(v string) *MaintenanceWindowIdentity { + s.WindowId = &v + return s +} + +// The parameters for a LAMBDA task type. +// +// For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow +// and UpdateMaintenanceWindowTask. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// TaskParameters has been deprecated. To specify parameters to pass to a task +// when it runs, instead use the Parameters option in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// For Lambda tasks, Systems Manager ignores any values specified for TaskParameters +// and LoggingInfo. +type MaintenanceWindowLambdaParameters struct { + _ struct{} `type:"structure"` + + // Pass client-specific information to the Lambda function that you are invoking. + // You can then process the client information in your Lambda function as you + // choose through the context variable. + ClientContext *string `min:"1" type:"string"` + + // JSON to provide to your Lambda function as input. + // + // Payload is automatically base64 encoded/decoded by the SDK. + Payload []byte `type:"blob"` + + // (Optional) Specify a Lambda function version or alias name. If you specify + // a function version, the action uses the qualified function ARN to invoke + // a specific Lambda function. If you specify an alias name, the action uses + // the alias ARN to invoke the Lambda function version to which the alias points. 
+ Qualifier *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowLambdaParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowLambdaParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MaintenanceWindowLambdaParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MaintenanceWindowLambdaParameters"} + if s.ClientContext != nil && len(*s.ClientContext) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientContext", 1)) + } + if s.Qualifier != nil && len(*s.Qualifier) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Qualifier", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientContext sets the ClientContext field's value. +func (s *MaintenanceWindowLambdaParameters) SetClientContext(v string) *MaintenanceWindowLambdaParameters { + s.ClientContext = &v + return s +} + +// SetPayload sets the Payload field's value. +func (s *MaintenanceWindowLambdaParameters) SetPayload(v []byte) *MaintenanceWindowLambdaParameters { + s.Payload = v + return s +} + +// SetQualifier sets the Qualifier field's value. +func (s *MaintenanceWindowLambdaParameters) SetQualifier(v string) *MaintenanceWindowLambdaParameters { + s.Qualifier = &v + return s +} + +// The parameters for a RUN_COMMAND task type. +// +// For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow +// and UpdateMaintenanceWindowTask. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// TaskParameters has been deprecated. To specify parameters to pass to a task +// when it runs, instead use the Parameters option in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// For Run Command tasks, Systems Manager uses specified values for TaskParameters +// and LoggingInfo only if no values are specified for TaskInvocationParameters. +type MaintenanceWindowRunCommandParameters struct { + _ struct{} `type:"structure"` + + // Information about the command(s) to execute. + Comment *string `type:"string"` + + // The SHA-256 or SHA-1 hash created by the system when the document was created. + // SHA-1 hashes have been deprecated. + DocumentHash *string `type:"string"` + + // SHA-256 or SHA-1. SHA-1 hashes have been deprecated. + DocumentHashType *string `type:"string" enum:"DocumentHashType"` + + // Configurations for sending notifications about command status changes on + // a per-instance basis. + NotificationConfig *NotificationConfig `type:"structure"` + + // The name of the Amazon S3 bucket. + OutputS3BucketName *string `min:"3" type:"string"` + + // The Amazon S3 bucket subfolder. + OutputS3KeyPrefix *string `type:"string"` + + // The parameters for the RUN_COMMAND task execution. + Parameters map[string][]*string `type:"map"` + + // The IAM service role to assume during task execution. 
+ ServiceRoleArn *string `type:"string"`
+
+ // If this time is reached and the command has not already started executing,
+ // it does not execute.
+ TimeoutSeconds *int64 `min:"30" type:"integer"`
+}
+
+// String returns the string representation
+func (s MaintenanceWindowRunCommandParameters) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s MaintenanceWindowRunCommandParameters) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *MaintenanceWindowRunCommandParameters) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "MaintenanceWindowRunCommandParameters"}
+ if s.OutputS3BucketName != nil && len(*s.OutputS3BucketName) < 3 {
+ invalidParams.Add(request.NewErrParamMinLen("OutputS3BucketName", 3))
+ }
+ if s.TimeoutSeconds != nil && *s.TimeoutSeconds < 30 {
+ invalidParams.Add(request.NewErrParamMinValue("TimeoutSeconds", 30))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetComment sets the Comment field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetComment(v string) *MaintenanceWindowRunCommandParameters {
+ s.Comment = &v
+ return s
+}
+
+// SetDocumentHash sets the DocumentHash field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetDocumentHash(v string) *MaintenanceWindowRunCommandParameters {
+ s.DocumentHash = &v
+ return s
+}
+
+// SetDocumentHashType sets the DocumentHashType field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetDocumentHashType(v string) *MaintenanceWindowRunCommandParameters {
+ s.DocumentHashType = &v
+ return s
+}
+
+// SetNotificationConfig sets the NotificationConfig field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetNotificationConfig(v *NotificationConfig) *MaintenanceWindowRunCommandParameters {
+ s.NotificationConfig = v
+ return s
+}
+
+// SetOutputS3BucketName sets the OutputS3BucketName field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetOutputS3BucketName(v string) *MaintenanceWindowRunCommandParameters {
+ s.OutputS3BucketName = &v
+ return s
+}
+
+// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetOutputS3KeyPrefix(v string) *MaintenanceWindowRunCommandParameters {
+ s.OutputS3KeyPrefix = &v
+ return s
+}
+
+// SetParameters sets the Parameters field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetParameters(v map[string][]*string) *MaintenanceWindowRunCommandParameters {
+ s.Parameters = v
+ return s
+}
+
+// SetServiceRoleArn sets the ServiceRoleArn field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetServiceRoleArn(v string) *MaintenanceWindowRunCommandParameters {
+ s.ServiceRoleArn = &v
+ return s
+}
+
+// SetTimeoutSeconds sets the TimeoutSeconds field's value.
+func (s *MaintenanceWindowRunCommandParameters) SetTimeoutSeconds(v int64) *MaintenanceWindowRunCommandParameters {
+ s.TimeoutSeconds = &v
+ return s
+}
+
+// The parameters for a STEP_FUNCTION task.
+//
+// For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow
+// and UpdateMaintenanceWindowTask.
+//
+// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs,
+// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters
+// structure.
+// For information about how Systems Manager handles these options
+// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters.
+//
+// TaskParameters has been deprecated. To specify parameters to pass to a task
+// when it runs, instead use the Parameters option in the TaskInvocationParameters
+// structure. For information about how Systems Manager handles these options
+// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters.
+//
+// For Step Functions tasks, Systems Manager ignores any values specified for
+// TaskParameters and LoggingInfo.
+type MaintenanceWindowStepFunctionsParameters struct {
+ _ struct{} `type:"structure"`
+
+ // The inputs for the STEP_FUNCTION task.
+ Input *string `type:"string"`
+
+ // The name of the STEP_FUNCTION task.
+ Name *string `min:"1" type:"string"`
+}
+
+// String returns the string representation
+func (s MaintenanceWindowStepFunctionsParameters) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s MaintenanceWindowStepFunctionsParameters) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *MaintenanceWindowStepFunctionsParameters) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "MaintenanceWindowStepFunctionsParameters"}
+ if s.Name != nil && len(*s.Name) < 1 {
+ invalidParams.Add(request.NewErrParamMinLen("Name", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetInput sets the Input field's value.
+func (s *MaintenanceWindowStepFunctionsParameters) SetInput(v string) *MaintenanceWindowStepFunctionsParameters {
+ s.Input = &v
+ return s
+}
+
+// SetName sets the Name field's value.
+func (s *MaintenanceWindowStepFunctionsParameters) SetName(v string) *MaintenanceWindowStepFunctionsParameters {
+ s.Name = &v
+ return s
+}
+
+// The target registered with the Maintenance Window.
+type MaintenanceWindowTarget struct {
+ _ struct{} `type:"structure"`
+
+ // A description of the target.
+ Description *string `min:"1" type:"string"`
+
+ // The target name.
+ Name *string `min:"3" type:"string"`
+
+ // User-provided value that will be included in any CloudWatch events raised
+ // while running tasks for these targets in this Maintenance Window.
+ OwnerInformation *string `min:"1" type:"string"`
+
+ // The type of target.
+ ResourceType *string `type:"string" enum:"MaintenanceWindowResourceType"`
+
+ // The targets (either instances or tags). Instances are specified using Key=instanceids,Values=<instanceid1>,<instanceid2>.
+ // Tags are specified using Key=<tag name>,Values=<tag value>.
+ Targets []*Target `type:"list"`
+
+ // The Maintenance Window ID where the target is registered.
+ WindowId *string `min:"20" type:"string"`
+
+ // The ID of the target.
+ WindowTargetId *string `min:"36" type:"string"`
+}
+
+// String returns the string representation
+func (s MaintenanceWindowTarget) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s MaintenanceWindowTarget) GoString() string {
+ return s.String()
+}
+
+// SetDescription sets the Description field's value.
+func (s *MaintenanceWindowTarget) SetDescription(v string) *MaintenanceWindowTarget {
+ s.Description = &v
+ return s
+}
+
+// SetName sets the Name field's value.
+func (s *MaintenanceWindowTarget) SetName(v string) *MaintenanceWindowTarget {
+ s.Name = &v
+ return s
+}
+
+// SetOwnerInformation sets the OwnerInformation field's value.
+func (s *MaintenanceWindowTarget) SetOwnerInformation(v string) *MaintenanceWindowTarget {
+ s.OwnerInformation = &v
+ return s
+}
+
+// SetResourceType sets the ResourceType field's value.
+func (s *MaintenanceWindowTarget) SetResourceType(v string) *MaintenanceWindowTarget {
+ s.ResourceType = &v
+ return s
+}
+
+// SetTargets sets the Targets field's value.
+func (s *MaintenanceWindowTarget) SetTargets(v []*Target) *MaintenanceWindowTarget {
+ s.Targets = v
+ return s
+}
+
+// SetWindowId sets the WindowId field's value.
+func (s *MaintenanceWindowTarget) SetWindowId(v string) *MaintenanceWindowTarget {
+ s.WindowId = &v
+ return s
+}
+
+// SetWindowTargetId sets the WindowTargetId field's value.
+func (s *MaintenanceWindowTarget) SetWindowTargetId(v string) *MaintenanceWindowTarget {
+ s.WindowTargetId = &v
+ return s
+}
+
+// Information about a task defined for a Maintenance Window.
+type MaintenanceWindowTask struct {
+ _ struct{} `type:"structure"`
+
+ // A description of the task.
+ Description *string `min:"1" type:"string"`
+
+ // Information about an Amazon S3 bucket to write task-level logs to.
+ //
+ // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs,
+ // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters
+ // structure. For information about how Systems Manager handles these options
+ // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters.
+ LoggingInfo *LoggingInfo `type:"structure"`
+
+ // The maximum number of targets this task can be run for in parallel.
+ MaxConcurrency *string `min:"1" type:"string"`
+
+ // The maximum number of errors allowed before this task stops being scheduled.
+ MaxErrors *string `min:"1" type:"string"`
+
+ // The task name.
+ Name *string `min:"3" type:"string"`
+
+ // The priority of the task in the Maintenance Window. The lower the number,
+ // the higher the priority. Tasks that have the same priority are scheduled
+ // in parallel.
+ Priority *int64 `type:"integer"`
+
+ // The role that should be assumed when executing the task.
+ ServiceRoleArn *string `type:"string"`
+
+ // The targets (either instances or tags). Instances are specified using Key=instanceids,Values=<instanceid1>,<instanceid2>.
+ // Tags are specified using Key=<tag name>,Values=<tag value>.
+ Targets []*Target `type:"list"`
+
+ // The resource that the task uses during execution. For RUN_COMMAND and AUTOMATION
+ // task types, TaskArn is the Systems Manager document name or ARN. For LAMBDA
+ // tasks, it's the function name or ARN. For STEP_FUNCTION tasks, it's the state
+ // machine ARN.
+ TaskArn *string `min:"1" type:"string"`
+
+ // The parameters that should be passed to the task when it is executed.
+ //
+ // TaskParameters has been deprecated. To specify parameters to pass to a task
+ // when it runs, instead use the Parameters option in the TaskInvocationParameters
+ // structure. For information about how Systems Manager handles these options
+ // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters.
+ TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"`
+
+ // The type of task. The type can be one of the following: RUN_COMMAND, AUTOMATION,
+ // LAMBDA, or STEP_FUNCTION.
+ Type *string `type:"string" enum:"MaintenanceWindowTaskType"` + + // The Maintenance Window ID where the task is registered. + WindowId *string `min:"20" type:"string"` + + // The task ID. + WindowTaskId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowTask) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowTask) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *MaintenanceWindowTask) SetDescription(v string) *MaintenanceWindowTask { + s.Description = &v + return s +} + +// SetLoggingInfo sets the LoggingInfo field's value. +func (s *MaintenanceWindowTask) SetLoggingInfo(v *LoggingInfo) *MaintenanceWindowTask { + s.LoggingInfo = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *MaintenanceWindowTask) SetMaxConcurrency(v string) *MaintenanceWindowTask { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *MaintenanceWindowTask) SetMaxErrors(v string) *MaintenanceWindowTask { + s.MaxErrors = &v + return s +} + +// SetName sets the Name field's value. +func (s *MaintenanceWindowTask) SetName(v string) *MaintenanceWindowTask { + s.Name = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *MaintenanceWindowTask) SetPriority(v int64) *MaintenanceWindowTask { + s.Priority = &v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. +func (s *MaintenanceWindowTask) SetServiceRoleArn(v string) *MaintenanceWindowTask { + s.ServiceRoleArn = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *MaintenanceWindowTask) SetTargets(v []*Target) *MaintenanceWindowTask { + s.Targets = v + return s +} + +// SetTaskArn sets the TaskArn field's value. +func (s *MaintenanceWindowTask) SetTaskArn(v string) *MaintenanceWindowTask { + s.TaskArn = &v + return s +} + +// SetTaskParameters sets the TaskParameters field's value. +func (s *MaintenanceWindowTask) SetTaskParameters(v map[string]*MaintenanceWindowTaskParameterValueExpression) *MaintenanceWindowTask { + s.TaskParameters = v + return s +} + +// SetType sets the Type field's value. +func (s *MaintenanceWindowTask) SetType(v string) *MaintenanceWindowTask { + s.Type = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *MaintenanceWindowTask) SetWindowId(v string) *MaintenanceWindowTask { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *MaintenanceWindowTask) SetWindowTaskId(v string) *MaintenanceWindowTask { + s.WindowTaskId = &v + return s +} + +// The parameters for task execution. +type MaintenanceWindowTaskInvocationParameters struct { + _ struct{} `type:"structure"` + + // The parameters for an AUTOMATION task type. + Automation *MaintenanceWindowAutomationParameters `type:"structure"` + + // The parameters for a LAMBDA task type. + Lambda *MaintenanceWindowLambdaParameters `type:"structure"` + + // The parameters for a RUN_COMMAND task type. + RunCommand *MaintenanceWindowRunCommandParameters `type:"structure"` + + // The parameters for a STEP_FUNCTION task type. 
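+ //
+ // Example (illustrative sketch, not generated from the API model): a
+ // MaintenanceWindowTaskInvocationParameters value is typically supplied to
+ // RegisterTaskWithMaintenanceWindow. For a RUN_COMMAND task, assuming an
+ // *ssm.SSM client named svc and placeholder window, target, and role identifiers:
+ //
+ //   _, err := svc.RegisterTaskWithMaintenanceWindow(&ssm.RegisterTaskWithMaintenanceWindowInput{
+ //       WindowId:       aws.String("mw-0123456789abcdef0"),
+ //       TaskArn:        aws.String("AWS-RunShellScript"),
+ //       TaskType:       aws.String("RUN_COMMAND"),
+ //       ServiceRoleArn: aws.String("arn:aws:iam::111122223333:role/MaintenanceWindowRole"),
+ //       MaxConcurrency: aws.String("2"),
+ //       MaxErrors:      aws.String("1"),
+ //       Targets: []*ssm.Target{{
+ //           Key:    aws.String("WindowTargetIds"),
+ //           Values: []*string{aws.String("window-target-id")},
+ //       }},
+ //       TaskInvocationParameters: &ssm.MaintenanceWindowTaskInvocationParameters{
+ //           RunCommand: &ssm.MaintenanceWindowRunCommandParameters{
+ //               Parameters: map[string][]*string{
+ //                   "commands": {aws.String("uptime")},
+ //               },
+ //               TimeoutSeconds: aws.Int64(600),
+ //           },
+ //       },
+ //   })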
+ StepFunctions *MaintenanceWindowStepFunctionsParameters `type:"structure"` +} + +// String returns the string representation +func (s MaintenanceWindowTaskInvocationParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowTaskInvocationParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MaintenanceWindowTaskInvocationParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MaintenanceWindowTaskInvocationParameters"} + if s.Automation != nil { + if err := s.Automation.Validate(); err != nil { + invalidParams.AddNested("Automation", err.(request.ErrInvalidParams)) + } + } + if s.Lambda != nil { + if err := s.Lambda.Validate(); err != nil { + invalidParams.AddNested("Lambda", err.(request.ErrInvalidParams)) + } + } + if s.RunCommand != nil { + if err := s.RunCommand.Validate(); err != nil { + invalidParams.AddNested("RunCommand", err.(request.ErrInvalidParams)) + } + } + if s.StepFunctions != nil { + if err := s.StepFunctions.Validate(); err != nil { + invalidParams.AddNested("StepFunctions", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutomation sets the Automation field's value. +func (s *MaintenanceWindowTaskInvocationParameters) SetAutomation(v *MaintenanceWindowAutomationParameters) *MaintenanceWindowTaskInvocationParameters { + s.Automation = v + return s +} + +// SetLambda sets the Lambda field's value. +func (s *MaintenanceWindowTaskInvocationParameters) SetLambda(v *MaintenanceWindowLambdaParameters) *MaintenanceWindowTaskInvocationParameters { + s.Lambda = v + return s +} + +// SetRunCommand sets the RunCommand field's value. +func (s *MaintenanceWindowTaskInvocationParameters) SetRunCommand(v *MaintenanceWindowRunCommandParameters) *MaintenanceWindowTaskInvocationParameters { + s.RunCommand = v + return s +} + +// SetStepFunctions sets the StepFunctions field's value. +func (s *MaintenanceWindowTaskInvocationParameters) SetStepFunctions(v *MaintenanceWindowStepFunctionsParameters) *MaintenanceWindowTaskInvocationParameters { + s.StepFunctions = v + return s +} + +// Defines the values for a task parameter. +type MaintenanceWindowTaskParameterValueExpression struct { + _ struct{} `type:"structure"` + + // This field contains an array of 0 or more strings, each 1 to 255 characters + // in length. + Values []*string `type:"list"` +} + +// String returns the string representation +func (s MaintenanceWindowTaskParameterValueExpression) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowTaskParameterValueExpression) GoString() string { + return s.String() +} + +// SetValues sets the Values field's value. +func (s *MaintenanceWindowTaskParameterValueExpression) SetValues(v []*string) *MaintenanceWindowTaskParameterValueExpression { + s.Values = v + return s +} + +type ModifyDocumentPermissionInput struct { + _ struct{} `type:"structure"` + + // The AWS user accounts that should have access to the document. The account + // IDs can either be a group of account IDs or All. + AccountIdsToAdd []*string `type:"list"` + + // The AWS user accounts that should no longer have access to the document. + // The AWS user account can either be a group of account IDs or All. This action + // has a higher priority than AccountIdsToAdd. 
If you specify an account ID + // to add and the same ID to remove, the system removes access to the document. + AccountIdsToRemove []*string `type:"list"` + + // The name of the document that you want to share. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // The permission type for the document. The permission type can be Share. + // + // PermissionType is a required field + PermissionType *string `type:"string" required:"true" enum:"DocumentPermissionType"` +} + +// String returns the string representation +func (s ModifyDocumentPermissionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDocumentPermissionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyDocumentPermissionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDocumentPermissionInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.PermissionType == nil { + invalidParams.Add(request.NewErrParamRequired("PermissionType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountIdsToAdd sets the AccountIdsToAdd field's value. +func (s *ModifyDocumentPermissionInput) SetAccountIdsToAdd(v []*string) *ModifyDocumentPermissionInput { + s.AccountIdsToAdd = v + return s +} + +// SetAccountIdsToRemove sets the AccountIdsToRemove field's value. +func (s *ModifyDocumentPermissionInput) SetAccountIdsToRemove(v []*string) *ModifyDocumentPermissionInput { + s.AccountIdsToRemove = v + return s +} + +// SetName sets the Name field's value. +func (s *ModifyDocumentPermissionInput) SetName(v string) *ModifyDocumentPermissionInput { + s.Name = &v + return s +} + +// SetPermissionType sets the PermissionType field's value. +func (s *ModifyDocumentPermissionInput) SetPermissionType(v string) *ModifyDocumentPermissionInput { + s.PermissionType = &v + return s +} + +type ModifyDocumentPermissionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ModifyDocumentPermissionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDocumentPermissionOutput) GoString() string { + return s.String() +} + +// A summary of resources that are not compliant. The summary is organized according +// to resource type. +type NonCompliantSummary struct { + _ struct{} `type:"structure"` + + // The total number of compliance items that are not compliant. + NonCompliantCount *int64 `type:"integer"` + + // A summary of the non-compliance severity by compliance type + SeveritySummary *SeveritySummary `type:"structure"` +} + +// String returns the string representation +func (s NonCompliantSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NonCompliantSummary) GoString() string { + return s.String() +} + +// SetNonCompliantCount sets the NonCompliantCount field's value. +func (s *NonCompliantSummary) SetNonCompliantCount(v int64) *NonCompliantSummary { + s.NonCompliantCount = &v + return s +} + +// SetSeveritySummary sets the SeveritySummary field's value. +func (s *NonCompliantSummary) SetSeveritySummary(v *SeveritySummary) *NonCompliantSummary { + s.SeveritySummary = v + return s +} + +// Configurations for sending notifications. 
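+//
+// Example (illustrative sketch, not generated from the API model): attaching a
+// NotificationConfig to a Run Command invocation so that status changes are
+// published to an SNS topic, assuming an *ssm.SSM client named svc and
+// placeholder instance, topic, and role identifiers:
+//
+//   _, err := svc.SendCommand(&ssm.SendCommandInput{
+//       DocumentName: aws.String("AWS-RunShellScript"),
+//       InstanceIds:  []*string{aws.String("i-0123456789abcdef0")},
+//       Parameters: map[string][]*string{
+//           "commands": {aws.String("uptime")},
+//       },
+//       ServiceRoleArn: aws.String("arn:aws:iam::111122223333:role/SNSNotificationRole"),
+//       NotificationConfig: &ssm.NotificationConfig{
+//           NotificationArn:    aws.String("arn:aws:sns:us-east-1:111122223333:command-status"),
+//           NotificationEvents: []*string{aws.String("All")},
+//           NotificationType:   aws.String("Command"),
+//       },
+//   })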
+type NotificationConfig struct { + _ struct{} `type:"structure"` + + // An Amazon Resource Name (ARN) for a Simple Notification Service (SNS) topic. + // Run Command pushes notifications about command status changes to this topic. + NotificationArn *string `type:"string"` + + // The different events for which you can receive notifications. These events + // include the following: All (events), InProgress, Success, TimedOut, Cancelled, + // Failed. To learn more about these events, see Setting Up Events and Notifications + // (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) + // in the AWS Systems Manager User Guide. + NotificationEvents []*string `type:"list"` + + // Command: Receive notification when the status of a command changes. Invocation: + // For commands sent to multiple instances, receive notification on a per-instance + // basis when the status of a command changes. + NotificationType *string `type:"string" enum:"NotificationType"` +} + +// String returns the string representation +func (s NotificationConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotificationConfig) GoString() string { + return s.String() +} + +// SetNotificationArn sets the NotificationArn field's value. +func (s *NotificationConfig) SetNotificationArn(v string) *NotificationConfig { + s.NotificationArn = &v + return s +} + +// SetNotificationEvents sets the NotificationEvents field's value. +func (s *NotificationConfig) SetNotificationEvents(v []*string) *NotificationConfig { + s.NotificationEvents = v + return s +} + +// SetNotificationType sets the NotificationType field's value. +func (s *NotificationConfig) SetNotificationType(v string) *NotificationConfig { + s.NotificationType = &v + return s +} + +// An Amazon EC2 Systems Manager parameter in Parameter Store. +type Parameter struct { + _ struct{} `type:"structure"` + + // The name of the parameter. + Name *string `min:"1" type:"string"` + + // The type of parameter. Valid values include the following: String, String + // list, Secure string. + Type *string `type:"string" enum:"ParameterType"` + + // The parameter value. + Value *string `min:"1" type:"string"` + + // The parameter version. + Version *int64 `type:"long"` +} + +// String returns the string representation +func (s Parameter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Parameter) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *Parameter) SetName(v string) *Parameter { + s.Name = &v + return s +} + +// SetType sets the Type field's value. +func (s *Parameter) SetType(v string) *Parameter { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Parameter) SetValue(v string) *Parameter { + s.Value = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *Parameter) SetVersion(v int64) *Parameter { + s.Version = &v + return s +} + +// Information about parameter usage. +type ParameterHistory struct { + _ struct{} `type:"structure"` + + // Parameter names can include the following letters and symbols. + // + // a-zA-Z0-9_.- + AllowedPattern *string `type:"string"` + + // Information about the parameter. + Description *string `type:"string"` + + // The ID of the query key used for this parameter. + KeyId *string `min:"1" type:"string"` + + // Date the parameter was last changed or updated. 
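+ //
+ // Example (illustrative sketch, not generated from the API model): reading the
+ // change history of a parameter, assuming an *ssm.SSM client named svc and a
+ // placeholder parameter name:
+ //
+ //   out, err := svc.GetParameterHistory(&ssm.GetParameterHistoryInput{
+ //       Name:           aws.String("/example/db-password"),
+ //       WithDecryption: aws.Bool(true),
+ //   })
+ //   if err != nil {
+ //       return err
+ //   }
+ //   for _, h := range out.Parameters {
+ //       fmt.Println(aws.Int64Value(h.Version), aws.StringValue(h.LastModifiedUser))
+ //   }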
+ LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Amazon Resource Name (ARN) of the AWS user who last changed the parameter. + LastModifiedUser *string `type:"string"` + + // The name of the parameter. + Name *string `min:"1" type:"string"` + + // The type of parameter used. + Type *string `type:"string" enum:"ParameterType"` + + // The parameter value. + Value *string `min:"1" type:"string"` + + // The parameter version. + Version *int64 `type:"long"` +} + +// String returns the string representation +func (s ParameterHistory) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterHistory) GoString() string { + return s.String() +} + +// SetAllowedPattern sets the AllowedPattern field's value. +func (s *ParameterHistory) SetAllowedPattern(v string) *ParameterHistory { + s.AllowedPattern = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ParameterHistory) SetDescription(v string) *ParameterHistory { + s.Description = &v + return s +} + +// SetKeyId sets the KeyId field's value. +func (s *ParameterHistory) SetKeyId(v string) *ParameterHistory { + s.KeyId = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *ParameterHistory) SetLastModifiedDate(v time.Time) *ParameterHistory { + s.LastModifiedDate = &v + return s +} + +// SetLastModifiedUser sets the LastModifiedUser field's value. +func (s *ParameterHistory) SetLastModifiedUser(v string) *ParameterHistory { + s.LastModifiedUser = &v + return s +} + +// SetName sets the Name field's value. +func (s *ParameterHistory) SetName(v string) *ParameterHistory { + s.Name = &v + return s +} + +// SetType sets the Type field's value. +func (s *ParameterHistory) SetType(v string) *ParameterHistory { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. +func (s *ParameterHistory) SetValue(v string) *ParameterHistory { + s.Value = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *ParameterHistory) SetVersion(v int64) *ParameterHistory { + s.Version = &v + return s +} + +// Metadata includes information like the ARN of the last user and the date/time +// the parameter was last used. +type ParameterMetadata struct { + _ struct{} `type:"structure"` + + // A parameter name can include only the following letters and symbols. + // + // a-zA-Z0-9_.- + AllowedPattern *string `type:"string"` + + // Description of the parameter actions. + Description *string `type:"string"` + + // The ID of the query key used for this parameter. + KeyId *string `min:"1" type:"string"` + + // Date the parameter was last changed or updated. + LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Amazon Resource Name (ARN) of the AWS user who last changed the parameter. + LastModifiedUser *string `type:"string"` + + // The parameter name. + Name *string `min:"1" type:"string"` + + // The type of parameter. Valid parameter types include the following: String, + // String list, Secure string. + Type *string `type:"string" enum:"ParameterType"` + + // The parameter version. + Version *int64 `type:"long"` +} + +// String returns the string representation +func (s ParameterMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterMetadata) GoString() string { + return s.String() +} + +// SetAllowedPattern sets the AllowedPattern field's value. 
+func (s *ParameterMetadata) SetAllowedPattern(v string) *ParameterMetadata { + s.AllowedPattern = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ParameterMetadata) SetDescription(v string) *ParameterMetadata { + s.Description = &v + return s +} + +// SetKeyId sets the KeyId field's value. +func (s *ParameterMetadata) SetKeyId(v string) *ParameterMetadata { + s.KeyId = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *ParameterMetadata) SetLastModifiedDate(v time.Time) *ParameterMetadata { + s.LastModifiedDate = &v + return s +} + +// SetLastModifiedUser sets the LastModifiedUser field's value. +func (s *ParameterMetadata) SetLastModifiedUser(v string) *ParameterMetadata { + s.LastModifiedUser = &v + return s +} + +// SetName sets the Name field's value. +func (s *ParameterMetadata) SetName(v string) *ParameterMetadata { + s.Name = &v + return s +} + +// SetType sets the Type field's value. +func (s *ParameterMetadata) SetType(v string) *ParameterMetadata { + s.Type = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *ParameterMetadata) SetVersion(v int64) *ParameterMetadata { + s.Version = &v + return s +} + +// One or more filters. Use a filter to return a more specific list of results. +type ParameterStringFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // Valid options are Equals and BeginsWith. For Path filter, valid options are + // Recursive and OneLevel. + Option *string `min:"1" type:"string"` + + // The value you want to search for. + Values []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s ParameterStringFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterStringFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParameterStringFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParameterStringFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Option != nil && len(*s.Option) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Option", 1)) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ParameterStringFilter) SetKey(v string) *ParameterStringFilter { + s.Key = &v + return s +} + +// SetOption sets the Option field's value. +func (s *ParameterStringFilter) SetOption(v string) *ParameterStringFilter { + s.Option = &v + return s +} + +// SetValues sets the Values field's value. +func (s *ParameterStringFilter) SetValues(v []*string) *ParameterStringFilter { + s.Values = v + return s +} + +// This data type is deprecated. Instead, use ParameterStringFilter. +type ParametersFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `type:"string" required:"true" enum:"ParametersFilterKey"` + + // The filter values. 
+ // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s ParametersFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParametersFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParametersFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParametersFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ParametersFilter) SetKey(v string) *ParametersFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *ParametersFilter) SetValues(v []*string) *ParametersFilter { + s.Values = v + return s +} + +// Represents metadata about a patch. +type Patch struct { + _ struct{} `type:"structure"` + + // The classification of the patch (for example, SecurityUpdates, Updates, CriticalUpdates). + Classification *string `type:"string"` + + // The URL where more information can be obtained about the patch. + ContentUrl *string `type:"string"` + + // The description of the patch. + Description *string `type:"string"` + + // The ID of the patch (this is different than the Microsoft Knowledge Base + // ID). + Id *string `min:"1" type:"string"` + + // The Microsoft Knowledge Base ID of the patch. + KbNumber *string `type:"string"` + + // The language of the patch if it's language-specific. + Language *string `type:"string"` + + // The ID of the MSRC bulletin the patch is related to. + MsrcNumber *string `type:"string"` + + // The severity of the patch (for example Critical, Important, Moderate). + MsrcSeverity *string `type:"string"` + + // The specific product the patch is applicable for (for example, WindowsServer2016). + Product *string `type:"string"` + + // The product family the patch is applicable for (for example, Windows). + ProductFamily *string `type:"string"` + + // The date the patch was released. + ReleaseDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The title of the patch. + Title *string `type:"string"` + + // The name of the vendor providing the patch. + Vendor *string `type:"string"` +} + +// String returns the string representation +func (s Patch) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Patch) GoString() string { + return s.String() +} + +// SetClassification sets the Classification field's value. +func (s *Patch) SetClassification(v string) *Patch { + s.Classification = &v + return s +} + +// SetContentUrl sets the ContentUrl field's value. +func (s *Patch) SetContentUrl(v string) *Patch { + s.ContentUrl = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Patch) SetDescription(v string) *Patch { + s.Description = &v + return s +} + +// SetId sets the Id field's value. +func (s *Patch) SetId(v string) *Patch { + s.Id = &v + return s +} + +// SetKbNumber sets the KbNumber field's value. +func (s *Patch) SetKbNumber(v string) *Patch { + s.KbNumber = &v + return s +} + +// SetLanguage sets the Language field's value. 
+func (s *Patch) SetLanguage(v string) *Patch { + s.Language = &v + return s +} + +// SetMsrcNumber sets the MsrcNumber field's value. +func (s *Patch) SetMsrcNumber(v string) *Patch { + s.MsrcNumber = &v + return s +} + +// SetMsrcSeverity sets the MsrcSeverity field's value. +func (s *Patch) SetMsrcSeverity(v string) *Patch { + s.MsrcSeverity = &v + return s +} + +// SetProduct sets the Product field's value. +func (s *Patch) SetProduct(v string) *Patch { + s.Product = &v + return s +} + +// SetProductFamily sets the ProductFamily field's value. +func (s *Patch) SetProductFamily(v string) *Patch { + s.ProductFamily = &v + return s +} + +// SetReleaseDate sets the ReleaseDate field's value. +func (s *Patch) SetReleaseDate(v time.Time) *Patch { + s.ReleaseDate = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *Patch) SetTitle(v string) *Patch { + s.Title = &v + return s +} + +// SetVendor sets the Vendor field's value. +func (s *Patch) SetVendor(v string) *Patch { + s.Vendor = &v + return s +} + +// Defines the basic information about a patch baseline. +type PatchBaselineIdentity struct { + _ struct{} `type:"structure"` + + // The description of the patch baseline. + BaselineDescription *string `min:"1" type:"string"` + + // The ID of the patch baseline. + BaselineId *string `min:"20" type:"string"` + + // The name of the patch baseline. + BaselineName *string `min:"3" type:"string"` + + // Whether this is the default baseline. Note that Systems Manager supports + // creating multiple default patch baselines. For example, you can create a + // default patch baseline for each operating system. + DefaultBaseline *bool `type:"boolean"` + + // Defines the operating system the patch baseline applies to. The Default value + // is WINDOWS. + OperatingSystem *string `type:"string" enum:"OperatingSystem"` +} + +// String returns the string representation +func (s PatchBaselineIdentity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchBaselineIdentity) GoString() string { + return s.String() +} + +// SetBaselineDescription sets the BaselineDescription field's value. +func (s *PatchBaselineIdentity) SetBaselineDescription(v string) *PatchBaselineIdentity { + s.BaselineDescription = &v + return s +} + +// SetBaselineId sets the BaselineId field's value. +func (s *PatchBaselineIdentity) SetBaselineId(v string) *PatchBaselineIdentity { + s.BaselineId = &v + return s +} + +// SetBaselineName sets the BaselineName field's value. +func (s *PatchBaselineIdentity) SetBaselineName(v string) *PatchBaselineIdentity { + s.BaselineName = &v + return s +} + +// SetDefaultBaseline sets the DefaultBaseline field's value. +func (s *PatchBaselineIdentity) SetDefaultBaseline(v bool) *PatchBaselineIdentity { + s.DefaultBaseline = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *PatchBaselineIdentity) SetOperatingSystem(v string) *PatchBaselineIdentity { + s.OperatingSystem = &v + return s +} + +// Information about the state of a patch on a particular instance as it relates +// to the patch baseline used to patch the instance. +type PatchComplianceData struct { + _ struct{} `type:"structure"` + + // The classification of the patch (for example, SecurityUpdates, Updates, CriticalUpdates). + // + // Classification is a required field + Classification *string `type:"string" required:"true"` + + // The date/time the patch was installed on the instance. 
Note that not all + // operating systems provide this level of information. + // + // InstalledTime is a required field + InstalledTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + + // The operating system-specific ID of the patch. + // + // KBId is a required field + KBId *string `type:"string" required:"true"` + + // The severity of the patch (for example, Critical, Important, Moderate). + // + // Severity is a required field + Severity *string `type:"string" required:"true"` + + // The state of the patch on the instance (INSTALLED, INSTALLED_OTHER, MISSING, + // NOT_APPLICABLE or FAILED). + // + // State is a required field + State *string `type:"string" required:"true" enum:"PatchComplianceDataState"` + + // The title of the patch. + // + // Title is a required field + Title *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s PatchComplianceData) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchComplianceData) GoString() string { + return s.String() +} + +// SetClassification sets the Classification field's value. +func (s *PatchComplianceData) SetClassification(v string) *PatchComplianceData { + s.Classification = &v + return s +} + +// SetInstalledTime sets the InstalledTime field's value. +func (s *PatchComplianceData) SetInstalledTime(v time.Time) *PatchComplianceData { + s.InstalledTime = &v + return s +} + +// SetKBId sets the KBId field's value. +func (s *PatchComplianceData) SetKBId(v string) *PatchComplianceData { + s.KBId = &v + return s +} + +// SetSeverity sets the Severity field's value. +func (s *PatchComplianceData) SetSeverity(v string) *PatchComplianceData { + s.Severity = &v + return s +} + +// SetState sets the State field's value. +func (s *PatchComplianceData) SetState(v string) *PatchComplianceData { + s.State = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { + s.Title = &v + return s +} + +// Defines a patch filter. +// +// A patch filter consists of key/value pairs, but not all keys are valid for +// all operating system types. For example, the key PRODUCT is valid for all +// supported operating system types. The key MSRC_SEVERITY, however, is valid +// only for Windows operating systems, and the key SECTION is valid only for +// Ubuntu operating systems. +// +// Refer to the following sections for information about which keys may be used +// with each major operating system, and which values are valid for each key. +// +// Windows Operating Systems +// +// The supported keys for Windows operating systems are PRODUCT, CLASSIFICATION, +// and MSRC_SEVERITY. See the following lists for valid values for each of these +// keys. 
+// +// Supported key:PRODUCT +// +// Supported values: +// +// * Windows7 +// +// * Windows8 +// +// * Windows8.1 +// +// * Windows8Embedded +// +// * Windows10 +// +// * Windows10LTSB +// +// * WindowsServer2008 +// +// * WindowsServer2008R2 +// +// * WindowsServer2012 +// +// * WindowsServer2012R2 +// +// * WindowsServer2016 +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * CriticalUpdates +// +// * DefinitionUpdates +// +// * Drivers +// +// * FeaturePacks +// +// * SecurityUpdates +// +// * ServicePacks +// +// * Tools +// +// * UpdateRollups +// +// * Updates +// +// * Upgrades +// +// Supported key:MSRC_SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Moderate +// +// * Low +// +// * Unspecified +// +// Ubuntu Operating Systems +// +// The supported keys for Ubuntu operating systems are PRODUCT, PRIORITY, and +// SECTION. See the following lists for valid values for each of these keys. +// +// Supported key:PRODUCT +// +// Supported values: +// +// * Ubuntu14.04 +// +// * Ubuntu16.04 +// +// Supported key:PRIORITY +// +// Supported values: +// +// * Required +// +// * Important +// +// * Standard +// +// * Optional +// +// * Extra +// +// Supported key:SECTION +// +// Only the length of the key value is validated. Minimum length is 1. Maximum +// length is 64. +// +// Amazon Linux Operating Systems +// +// The supported keys for Amazon Linux operating systems are PRODUCT, CLASSIFICATION, +// and SEVERITY. See the following lists for valid values for each of these +// keys. +// +// Supported key:PRODUCT +// +// Supported values: +// +// * AmazonLinux2012.03 +// +// * AmazonLinux2012.09 +// +// * AmazonLinux2013.03 +// +// * AmazonLinux2013.09 +// +// * AmazonLinux2014.03 +// +// * AmazonLinux2014.09 +// +// * AmazonLinux2015.03 +// +// * AmazonLinux2015.09 +// +// * AmazonLinux2016.03 +// +// * AmazonLinux2016.09 +// +// * AmazonLinux2017.03 +// +// * AmazonLinux2017.09 +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * Security +// +// * Bugfix +// +// * Enhancement +// +// * Recommended +// +// * Newpackage +// +// Supported key:SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Medium +// +// * Low +// +// RedHat Enterprise Linux (RHEL) Operating Systems +// +// The supported keys for RedHat Enterprise Linux operating systems are PRODUCT, +// CLASSIFICATION, and SEVERITY. See the following lists for valid values for +// each of these keys. +// +// Supported key:PRODUCT +// +// Supported values: +// +// * RedhatEnterpriseLinux6.5 +// +// * RedhatEnterpriseLinux6.6 +// +// * RedhatEnterpriseLinux6.7 +// +// * RedhatEnterpriseLinux6.8 +// +// * RedhatEnterpriseLinux6.9 +// +// * RedhatEnterpriseLinux7.0 +// +// * RedhatEnterpriseLinux7.1 +// +// * RedhatEnterpriseLinux7.2 +// +// * RedhatEnterpriseLinux7.3 +// +// * RedhatEnterpriseLinux7.4 +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * Security +// +// * Bugfix +// +// * Enhancement +// +// * Recommended +// +// * Newpackage +// +// Supported key:SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Medium +// +// * Low +// +// SUSE Linux Enterprise Server (SLES) Operating Systems +// +// The supported keys for SLES operating systems are PRODUCT, CLASSIFICATION, +// and SEVERITY. See the following lists for valid values for each of these +// keys. 
+// +// Supported key:PRODUCT +// +// Supported values: +// +// * Suse12.0 +// +// * Suse12.1 +// +// * Suse12.2 +// +// * Suse12.3 +// +// * Suse12.4 +// +// * Suse12.5 +// +// * Suse12.6 +// +// * Suse12.7 +// +// * Suse12.8 +// +// * Suse12.9 +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * Security +// +// * Recommended +// +// * Optional +// +// * Feature +// +// * Document +// +// * Yast +// +// Supported key:SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Moderate +// +// * Low +// +// CentOS Operating Systems +// +// The supported keys for CentOS operating systems are PRODUCT, CLASSIFICATION, +// and SEVERITY. See the following lists for valid values for each of these +// keys. +// +// Supported key:PRODUCT +// +// Supported values: +// +// * CentOS6.5 +// +// * CentOS6.6 +// +// * CentOS6.7 +// +// * CentOS6.8 +// +// * CentOS6.9 +// +// * CentOS7.0 +// +// * CentOS7.1 +// +// * CentOS7.2 +// +// * CentOS7.3 +// +// * CentOS7.4 +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * Security +// +// * Bugfix +// +// * Enhancement +// +// * Recommended +// +// * Newpackage +// +// Supported key:SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Medium +// +// * Low +type PatchFilter struct { + _ struct{} `type:"structure"` + + // The key for the filter. + // + // See PatchFilter for lists of valid keys for each operating system type. + // + // Key is a required field + Key *string `type:"string" required:"true" enum:"PatchFilterKey"` + + // The value for the filter key. + // + // See PatchFilter for lists of valid values for each key based on operating + // system type. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s PatchFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PatchFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PatchFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *PatchFilter) SetKey(v string) *PatchFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *PatchFilter) SetValues(v []*string) *PatchFilter { + s.Values = v + return s +} + +// A set of patch filters, typically used for approval rules. +type PatchFilterGroup struct { + _ struct{} `type:"structure"` + + // The set of patch filters that make up the group. + // + // PatchFilters is a required field + PatchFilters []*PatchFilter `type:"list" required:"true"` +} + +// String returns the string representation +func (s PatchFilterGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchFilterGroup) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PatchFilterGroup) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PatchFilterGroup"} + if s.PatchFilters == nil { + invalidParams.Add(request.NewErrParamRequired("PatchFilters")) + } + if s.PatchFilters != nil { + for i, v := range s.PatchFilters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PatchFilters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPatchFilters sets the PatchFilters field's value. +func (s *PatchFilterGroup) SetPatchFilters(v []*PatchFilter) *PatchFilterGroup { + s.PatchFilters = v + return s +} + +// The mapping between a patch group and the patch baseline the patch group +// is registered with. +type PatchGroupPatchBaselineMapping struct { + _ struct{} `type:"structure"` + + // The patch baseline the patch group is registered with. + BaselineIdentity *PatchBaselineIdentity `type:"structure"` + + // The name of the patch group registered with the patch baseline. + PatchGroup *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PatchGroupPatchBaselineMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchGroupPatchBaselineMapping) GoString() string { + return s.String() +} + +// SetBaselineIdentity sets the BaselineIdentity field's value. +func (s *PatchGroupPatchBaselineMapping) SetBaselineIdentity(v *PatchBaselineIdentity) *PatchGroupPatchBaselineMapping { + s.BaselineIdentity = v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *PatchGroupPatchBaselineMapping) SetPatchGroup(v string) *PatchGroupPatchBaselineMapping { + s.PatchGroup = &v + return s +} + +// Defines a filter used in Patch Manager APIs. +type PatchOrchestratorFilter struct { + _ struct{} `type:"structure"` + + // The key for the filter. + Key *string `min:"1" type:"string"` + + // The value for the filter. + Values []*string `type:"list"` +} + +// String returns the string representation +func (s PatchOrchestratorFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchOrchestratorFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PatchOrchestratorFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PatchOrchestratorFilter"} + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *PatchOrchestratorFilter) SetKey(v string) *PatchOrchestratorFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *PatchOrchestratorFilter) SetValues(v []*string) *PatchOrchestratorFilter { + s.Values = v + return s +} + +// Defines an approval rule for a patch baseline. +type PatchRule struct { + _ struct{} `type:"structure"` + + // The number of days after the release date of each patch matched by the rule + // the patch is marked as approved in the patch baseline. + // + // ApproveAfterDays is a required field + ApproveAfterDays *int64 `type:"integer" required:"true"` + + // A compliance severity level for all approved patches in a patch baseline. 
+ // Valid compliance severity levels include the following: Unspecified, Critical, + // High, Medium, Low, and Informational. + ComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` + + // For instances identified by the approval rule filters, enables a patch baseline + // to apply non-security updates available in the specified repository. The + // default value is 'false'. Applies to Linux instances only. + EnableNonSecurity *bool `type:"boolean"` + + // The patch filter group that defines the criteria for the rule. + // + // PatchFilterGroup is a required field + PatchFilterGroup *PatchFilterGroup `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PatchRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PatchRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PatchRule"} + if s.ApproveAfterDays == nil { + invalidParams.Add(request.NewErrParamRequired("ApproveAfterDays")) + } + if s.PatchFilterGroup == nil { + invalidParams.Add(request.NewErrParamRequired("PatchFilterGroup")) + } + if s.PatchFilterGroup != nil { + if err := s.PatchFilterGroup.Validate(); err != nil { + invalidParams.AddNested("PatchFilterGroup", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApproveAfterDays sets the ApproveAfterDays field's value. +func (s *PatchRule) SetApproveAfterDays(v int64) *PatchRule { + s.ApproveAfterDays = &v + return s +} + +// SetComplianceLevel sets the ComplianceLevel field's value. +func (s *PatchRule) SetComplianceLevel(v string) *PatchRule { + s.ComplianceLevel = &v + return s +} + +// SetEnableNonSecurity sets the EnableNonSecurity field's value. +func (s *PatchRule) SetEnableNonSecurity(v bool) *PatchRule { + s.EnableNonSecurity = &v + return s +} + +// SetPatchFilterGroup sets the PatchFilterGroup field's value. +func (s *PatchRule) SetPatchFilterGroup(v *PatchFilterGroup) *PatchRule { + s.PatchFilterGroup = v + return s +} + +// A set of rules defining the approval rules for a patch baseline. +type PatchRuleGroup struct { + _ struct{} `type:"structure"` + + // The rules that make up the rule group. + // + // PatchRules is a required field + PatchRules []*PatchRule `type:"list" required:"true"` +} + +// String returns the string representation +func (s PatchRuleGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchRuleGroup) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PatchRuleGroup) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PatchRuleGroup"} + if s.PatchRules == nil { + invalidParams.Add(request.NewErrParamRequired("PatchRules")) + } + if s.PatchRules != nil { + for i, v := range s.PatchRules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PatchRules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPatchRules sets the PatchRules field's value. 
+func (s *PatchRuleGroup) SetPatchRules(v []*PatchRule) *PatchRuleGroup { + s.PatchRules = v + return s +} + +// Information about the patches to use to update the instances, including target +// operating systems and source repository. Applies to Linux instances only. +type PatchSource struct { + _ struct{} `type:"structure"` + + // The value of the yum repo configuration. For example: + // + // cachedir=/var/cache/yum/$basearch + // + // $releasever + // + // keepcache=0 + // + // debuglevel=2 + // + // Configuration is a required field + Configuration *string `min:"1" type:"string" required:"true"` + + // The name specified to identify the patch source. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // The specific operating system versions a patch repository applies to, such + // as "Ubuntu16.04", "AmazonLinux2016.09", "RedhatEnterpriseLinux7.2" or "Suse12.7". + // For lists of supported product values, see PatchFilter. + // + // Products is a required field + Products []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s PatchSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchSource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PatchSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PatchSource"} + if s.Configuration == nil { + invalidParams.Add(request.NewErrParamRequired("Configuration")) + } + if s.Configuration != nil && len(*s.Configuration) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Configuration", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Products == nil { + invalidParams.Add(request.NewErrParamRequired("Products")) + } + if s.Products != nil && len(s.Products) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Products", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConfiguration sets the Configuration field's value. +func (s *PatchSource) SetConfiguration(v string) *PatchSource { + s.Configuration = &v + return s +} + +// SetName sets the Name field's value. +func (s *PatchSource) SetName(v string) *PatchSource { + s.Name = &v + return s +} + +// SetProducts sets the Products field's value. +func (s *PatchSource) SetProducts(v []*string) *PatchSource { + s.Products = v + return s +} + +// Information about the approval status of a patch. +type PatchStatus struct { + _ struct{} `type:"structure"` + + // The date the patch was approved (or will be approved if the status is PENDING_APPROVAL). + ApprovalDate *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The compliance severity level for a patch. + ComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` + + // The approval status of a patch (APPROVED, PENDING_APPROVAL, EXPLICIT_APPROVED, + // EXPLICIT_REJECTED). + DeploymentStatus *string `type:"string" enum:"PatchDeploymentStatus"` +} + +// String returns the string representation +func (s PatchStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PatchStatus) GoString() string { + return s.String() +} + +// SetApprovalDate sets the ApprovalDate field's value. 
+func (s *PatchStatus) SetApprovalDate(v time.Time) *PatchStatus { + s.ApprovalDate = &v + return s +} + +// SetComplianceLevel sets the ComplianceLevel field's value. +func (s *PatchStatus) SetComplianceLevel(v string) *PatchStatus { + s.ComplianceLevel = &v + return s +} + +// SetDeploymentStatus sets the DeploymentStatus field's value. +func (s *PatchStatus) SetDeploymentStatus(v string) *PatchStatus { + s.DeploymentStatus = &v + return s +} + +type PutComplianceItemsInput struct { + _ struct{} `type:"structure"` + + // Specify the compliance type. For example, specify Association (for a State + // Manager association), Patch, or Custom:string. + // + // ComplianceType is a required field + ComplianceType *string `min:"1" type:"string" required:"true"` + + // A summary of the call execution that includes an execution ID, the type of + // execution (for example, Command), and the date/time of the execution using + // a datetime object that is saved in the following format: yyyy-MM-dd'T'HH:mm:ss'Z'. + // + // ExecutionSummary is a required field + ExecutionSummary *ComplianceExecutionSummary `type:"structure" required:"true"` + + // MD5 or SHA-256 content hash. The content hash is used to determine if existing + // information should be overwritten or ignored. If the content hashes match, + // the request to put compliance information is ignored. + ItemContentHash *string `type:"string"` + + // Information about the compliance as defined by the resource type. For example, + // for a patch compliance type, Items includes information about the PatchSeverity, + // Classification, etc. + // + // Items is a required field + Items []*ComplianceItemEntry `type:"list" required:"true"` + + // Specify an ID for this resource. For a managed instance, this is the instance + // ID. + // + // ResourceId is a required field + ResourceId *string `min:"1" type:"string" required:"true"` + + // Specify the type of resource. ManagedInstance is currently the only supported + // resource type. + // + // ResourceType is a required field + ResourceType *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutComplianceItemsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutComplianceItemsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutComplianceItemsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutComplianceItemsInput"} + if s.ComplianceType == nil { + invalidParams.Add(request.NewErrParamRequired("ComplianceType")) + } + if s.ComplianceType != nil && len(*s.ComplianceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ComplianceType", 1)) + } + if s.ExecutionSummary == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionSummary")) + } + if s.Items == nil { + invalidParams.Add(request.NewErrParamRequired("Items")) + } + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + if s.ResourceType != nil && len(*s.ResourceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceType", 1)) + } + if s.ExecutionSummary != nil { + if err := s.ExecutionSummary.Validate(); err != nil { + invalidParams.AddNested("ExecutionSummary", err.(request.ErrInvalidParams)) + } + } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetComplianceType sets the ComplianceType field's value. +func (s *PutComplianceItemsInput) SetComplianceType(v string) *PutComplianceItemsInput { + s.ComplianceType = &v + return s +} + +// SetExecutionSummary sets the ExecutionSummary field's value. +func (s *PutComplianceItemsInput) SetExecutionSummary(v *ComplianceExecutionSummary) *PutComplianceItemsInput { + s.ExecutionSummary = v + return s +} + +// SetItemContentHash sets the ItemContentHash field's value. +func (s *PutComplianceItemsInput) SetItemContentHash(v string) *PutComplianceItemsInput { + s.ItemContentHash = &v + return s +} + +// SetItems sets the Items field's value. +func (s *PutComplianceItemsInput) SetItems(v []*ComplianceItemEntry) *PutComplianceItemsInput { + s.Items = v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *PutComplianceItemsInput) SetResourceId(v string) *PutComplianceItemsInput { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *PutComplianceItemsInput) SetResourceType(v string) *PutComplianceItemsInput { + s.ResourceType = &v + return s +} + +type PutComplianceItemsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutComplianceItemsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutComplianceItemsOutput) GoString() string { + return s.String() +} + +type PutInventoryInput struct { + _ struct{} `type:"structure"` + + // One or more instance IDs where you want to add or update inventory items. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The inventory items that you want to add or update on instances. 
+ // + // Items is a required field + Items []*InventoryItem `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s PutInventoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutInventoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutInventoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutInventoryInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.Items == nil { + invalidParams.Add(request.NewErrParamRequired("Items")) + } + if s.Items != nil && len(s.Items) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Items", 1)) + } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceId sets the InstanceId field's value. +func (s *PutInventoryInput) SetInstanceId(v string) *PutInventoryInput { + s.InstanceId = &v + return s +} + +// SetItems sets the Items field's value. +func (s *PutInventoryInput) SetItems(v []*InventoryItem) *PutInventoryInput { + s.Items = v + return s +} + +type PutInventoryOutput struct { + _ struct{} `type:"structure"` + + // Information about the request. + Message *string `type:"string"` +} + +// String returns the string representation +func (s PutInventoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutInventoryOutput) GoString() string { + return s.String() +} + +// SetMessage sets the Message field's value. +func (s *PutInventoryOutput) SetMessage(v string) *PutInventoryOutput { + s.Message = &v + return s +} + +type PutParameterInput struct { + _ struct{} `type:"structure"` + + // A regular expression used to validate the parameter value. For example, for + // String types with values restricted to numbers, you can specify the following: + // AllowedPattern=^\d+$ + AllowedPattern *string `type:"string"` + + // Information about the parameter that you want to add to the system. + // + // Do not enter personally identifiable information in this field. + Description *string `type:"string"` + + // The KMS Key ID that you want to use to encrypt a parameter when you choose + // the SecureString data type. If you don't specify a key ID, the system uses + // the default key associated with your AWS account. + KeyId *string `min:"1" type:"string"` + + // The fully qualified name of the parameter that you want to add to the system. + // The fully qualified name includes the complete hierarchy of the parameter + // path and name. For example: /Dev/DBServer/MySQL/db-string13 + // + // For information about parameter name requirements and restrictions, see About + // Creating Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-su-create.html#sysman-paramstore-su-create-about) + // in the AWS Systems Manager User Guide. + // + // The maximum length constraint listed below includes capacity for additional + // system attributes that are not part of the name. The maximum length for the + // fully qualified parameter name is 1011 characters. 
+ // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // Overwrite an existing parameter. If not specified, will default to "false". + Overwrite *bool `type:"boolean"` + + // The type of parameter that you want to add to the system. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"ParameterType"` + + // The parameter value that you want to add to the system. + // + // Value is a required field + Value *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutParameterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutParameterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutParameterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutParameterInput"} + if s.KeyId != nil && len(*s.KeyId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KeyId", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowedPattern sets the AllowedPattern field's value. +func (s *PutParameterInput) SetAllowedPattern(v string) *PutParameterInput { + s.AllowedPattern = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *PutParameterInput) SetDescription(v string) *PutParameterInput { + s.Description = &v + return s +} + +// SetKeyId sets the KeyId field's value. +func (s *PutParameterInput) SetKeyId(v string) *PutParameterInput { + s.KeyId = &v + return s +} + +// SetName sets the Name field's value. +func (s *PutParameterInput) SetName(v string) *PutParameterInput { + s.Name = &v + return s +} + +// SetOverwrite sets the Overwrite field's value. +func (s *PutParameterInput) SetOverwrite(v bool) *PutParameterInput { + s.Overwrite = &v + return s +} + +// SetType sets the Type field's value. +func (s *PutParameterInput) SetType(v string) *PutParameterInput { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. +func (s *PutParameterInput) SetValue(v string) *PutParameterInput { + s.Value = &v + return s +} + +type PutParameterOutput struct { + _ struct{} `type:"structure"` + + // The new version number of a parameter. If you edit a parameter value, Parameter + // Store automatically creates a new version and assigns this new version a + // unique ID. You can reference a parameter version ID in API actions or in + // Systems Manager documents (SSM documents). By default, if you don't specify + // a specific version, the system returns the latest parameter value when a + // parameter is called. + Version *int64 `type:"long"` +} + +// String returns the string representation +func (s PutParameterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutParameterOutput) GoString() string { + return s.String() +} + +// SetVersion sets the Version field's value. 
+func (s *PutParameterOutput) SetVersion(v int64) *PutParameterOutput { + s.Version = &v + return s +} + +type RegisterDefaultPatchBaselineInput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline that should be the default patch baseline. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s RegisterDefaultPatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterDefaultPatchBaselineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RegisterDefaultPatchBaselineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterDefaultPatchBaselineInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBaselineId sets the BaselineId field's value. +func (s *RegisterDefaultPatchBaselineInput) SetBaselineId(v string) *RegisterDefaultPatchBaselineInput { + s.BaselineId = &v + return s +} + +type RegisterDefaultPatchBaselineOutput struct { + _ struct{} `type:"structure"` + + // The ID of the default patch baseline. + BaselineId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s RegisterDefaultPatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterDefaultPatchBaselineOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *RegisterDefaultPatchBaselineOutput) SetBaselineId(v string) *RegisterDefaultPatchBaselineOutput { + s.BaselineId = &v + return s +} + +type RegisterPatchBaselineForPatchGroupInput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline to register the patch group with. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` + + // The name of the patch group that should be registered with the patch baseline. + // + // PatchGroup is a required field + PatchGroup *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RegisterPatchBaselineForPatchGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterPatchBaselineForPatchGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RegisterPatchBaselineForPatchGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterPatchBaselineForPatchGroupInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + if s.PatchGroup == nil { + invalidParams.Add(request.NewErrParamRequired("PatchGroup")) + } + if s.PatchGroup != nil && len(*s.PatchGroup) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PatchGroup", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBaselineId sets the BaselineId field's value. +func (s *RegisterPatchBaselineForPatchGroupInput) SetBaselineId(v string) *RegisterPatchBaselineForPatchGroupInput { + s.BaselineId = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *RegisterPatchBaselineForPatchGroupInput) SetPatchGroup(v string) *RegisterPatchBaselineForPatchGroupInput { + s.PatchGroup = &v + return s +} + +type RegisterPatchBaselineForPatchGroupOutput struct { + _ struct{} `type:"structure"` + + // The ID of the patch baseline the patch group was registered with. + BaselineId *string `min:"20" type:"string"` + + // The name of the patch group registered with the patch baseline. + PatchGroup *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s RegisterPatchBaselineForPatchGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterPatchBaselineForPatchGroupOutput) GoString() string { + return s.String() +} + +// SetBaselineId sets the BaselineId field's value. +func (s *RegisterPatchBaselineForPatchGroupOutput) SetBaselineId(v string) *RegisterPatchBaselineForPatchGroupOutput { + s.BaselineId = &v + return s +} + +// SetPatchGroup sets the PatchGroup field's value. +func (s *RegisterPatchBaselineForPatchGroupOutput) SetPatchGroup(v string) *RegisterPatchBaselineForPatchGroupOutput { + s.PatchGroup = &v + return s +} + +type RegisterTargetWithMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // User-provided idempotency token. + ClientToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // An optional description for the target. + Description *string `min:"1" type:"string"` + + // An optional name for the target. + Name *string `min:"3" type:"string"` + + // User-provided value that will be included in any CloudWatch events raised + // while running tasks for these targets in this Maintenance Window. + OwnerInformation *string `min:"1" type:"string"` + + // The type of target being registered with the Maintenance Window. + // + // ResourceType is a required field + ResourceType *string `type:"string" required:"true" enum:"MaintenanceWindowResourceType"` + + // The targets (either instances or tags). + // + // Specify instances using the following format: + // + // Key=InstanceIds,Values=, + // + // Specify tags using either of the following formats: + // + // Key=tag:,Values=, + // + // Key=tag-key,Values=, + // + // Targets is a required field + Targets []*Target `type:"list" required:"true"` + + // The ID of the Maintenance Window the target should be registered with. 
+ // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s RegisterTargetWithMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterTargetWithMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RegisterTargetWithMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterTargetWithMaintenanceWindowInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.OwnerInformation != nil && len(*s.OwnerInformation) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OwnerInformation", 1)) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + if s.Targets == nil { + invalidParams.Add(request.NewErrParamRequired("Targets")) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetClientToken(v string) *RegisterTargetWithMaintenanceWindowInput { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetDescription(v string) *RegisterTargetWithMaintenanceWindowInput { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetName(v string) *RegisterTargetWithMaintenanceWindowInput { + s.Name = &v + return s +} + +// SetOwnerInformation sets the OwnerInformation field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetOwnerInformation(v string) *RegisterTargetWithMaintenanceWindowInput { + s.OwnerInformation = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetResourceType(v string) *RegisterTargetWithMaintenanceWindowInput { + s.ResourceType = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetTargets(v []*Target) *RegisterTargetWithMaintenanceWindowInput { + s.Targets = v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *RegisterTargetWithMaintenanceWindowInput) SetWindowId(v string) *RegisterTargetWithMaintenanceWindowInput { + s.WindowId = &v + return s +} + +type RegisterTargetWithMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the target definition in this Maintenance Window. 
+ WindowTargetId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s RegisterTargetWithMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterTargetWithMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *RegisterTargetWithMaintenanceWindowOutput) SetWindowTargetId(v string) *RegisterTargetWithMaintenanceWindowOutput { + s.WindowTargetId = &v + return s +} + +type RegisterTaskWithMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // User-provided idempotency token. + ClientToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // An optional description for the task. + Description *string `min:"1" type:"string"` + + // A structure containing information about an Amazon S3 bucket to write instance-level + // logs to. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + LoggingInfo *LoggingInfo `type:"structure"` + + // The maximum number of targets this task can be run for in parallel. + // + // MaxConcurrency is a required field + MaxConcurrency *string `min:"1" type:"string" required:"true"` + + // The maximum number of errors allowed before this task stops being scheduled. + // + // MaxErrors is a required field + MaxErrors *string `min:"1" type:"string" required:"true"` + + // An optional name for the task. + Name *string `min:"3" type:"string"` + + // The priority of the task in the Maintenance Window, the lower the number + // the higher the priority. Tasks in a Maintenance Window are scheduled in priority + // order with tasks that have the same priority scheduled in parallel. + Priority *int64 `type:"integer"` + + // The role that should be assumed when executing the task. + // + // ServiceRoleArn is a required field + ServiceRoleArn *string `type:"string" required:"true"` + + // The targets (either instances or Maintenance Window targets). + // + // Specify instances using the following format: + // + // Key=InstanceIds,Values=, + // + // Specify Maintenance Window targets using the following format: + // + // Key=,Values=, + // + // Targets is a required field + Targets []*Target `type:"list" required:"true"` + + // The ARN of the task to execute + // + // TaskArn is a required field + TaskArn *string `min:"1" type:"string" required:"true"` + + // The parameters that the task should use during execution. Populate only the + // fields that match the task type. All other fields should be empty. + TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` + + // The parameters that should be passed to the task when it is executed. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` + + // The type of task being registered. 
+ // + // TaskType is a required field + TaskType *string `type:"string" required:"true" enum:"MaintenanceWindowTaskType"` + + // The ID of the Maintenance Window the task should be added to. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s RegisterTaskWithMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterTaskWithMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RegisterTaskWithMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterTaskWithMaintenanceWindowInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.MaxConcurrency == nil { + invalidParams.Add(request.NewErrParamRequired("MaxConcurrency")) + } + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors == nil { + invalidParams.Add(request.NewErrParamRequired("MaxErrors")) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.ServiceRoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceRoleArn")) + } + if s.Targets == nil { + invalidParams.Add(request.NewErrParamRequired("Targets")) + } + if s.TaskArn == nil { + invalidParams.Add(request.NewErrParamRequired("TaskArn")) + } + if s.TaskArn != nil && len(*s.TaskArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskArn", 1)) + } + if s.TaskType == nil { + invalidParams.Add(request.NewErrParamRequired("TaskType")) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.LoggingInfo != nil { + if err := s.LoggingInfo.Validate(); err != nil { + invalidParams.AddNested("LoggingInfo", err.(request.ErrInvalidParams)) + } + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TaskInvocationParameters != nil { + if err := s.TaskInvocationParameters.Validate(); err != nil { + invalidParams.AddNested("TaskInvocationParameters", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetClientToken(v string) *RegisterTaskWithMaintenanceWindowInput { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetDescription(v string) *RegisterTaskWithMaintenanceWindowInput { + s.Description = &v + return s +} + +// SetLoggingInfo sets the LoggingInfo field's value. 
+func (s *RegisterTaskWithMaintenanceWindowInput) SetLoggingInfo(v *LoggingInfo) *RegisterTaskWithMaintenanceWindowInput { + s.LoggingInfo = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetMaxConcurrency(v string) *RegisterTaskWithMaintenanceWindowInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetMaxErrors(v string) *RegisterTaskWithMaintenanceWindowInput { + s.MaxErrors = &v + return s +} + +// SetName sets the Name field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetName(v string) *RegisterTaskWithMaintenanceWindowInput { + s.Name = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetPriority(v int64) *RegisterTaskWithMaintenanceWindowInput { + s.Priority = &v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetServiceRoleArn(v string) *RegisterTaskWithMaintenanceWindowInput { + s.ServiceRoleArn = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetTargets(v []*Target) *RegisterTaskWithMaintenanceWindowInput { + s.Targets = v + return s +} + +// SetTaskArn sets the TaskArn field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetTaskArn(v string) *RegisterTaskWithMaintenanceWindowInput { + s.TaskArn = &v + return s +} + +// SetTaskInvocationParameters sets the TaskInvocationParameters field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetTaskInvocationParameters(v *MaintenanceWindowTaskInvocationParameters) *RegisterTaskWithMaintenanceWindowInput { + s.TaskInvocationParameters = v + return s +} + +// SetTaskParameters sets the TaskParameters field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetTaskParameters(v map[string]*MaintenanceWindowTaskParameterValueExpression) *RegisterTaskWithMaintenanceWindowInput { + s.TaskParameters = v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetTaskType(v string) *RegisterTaskWithMaintenanceWindowInput { + s.TaskType = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *RegisterTaskWithMaintenanceWindowInput) SetWindowId(v string) *RegisterTaskWithMaintenanceWindowInput { + s.WindowId = &v + return s +} + +type RegisterTaskWithMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The id of the task in the Maintenance Window. + WindowTaskId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s RegisterTaskWithMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterTaskWithMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *RegisterTaskWithMaintenanceWindowOutput) SetWindowTaskId(v string) *RegisterTaskWithMaintenanceWindowOutput { + s.WindowTaskId = &v + return s +} + +type RemoveTagsFromResourceInput struct { + _ struct{} `type:"structure"` + + // The resource ID for which you want to remove tags. Use the ID of the resource. 
+ // Here are some examples:
+ //
+ // ManagedInstance: mi-012345abcde
+ //
+ // MaintenanceWindow: mw-012345abcde
+ //
+ // PatchBaseline: pb-012345abcde
+ //
+ // For the Document and Parameter values, use the name of the resource.
+ //
+ // The ManagedInstance type for this API action is only for on-premises managed
+ // instances. You must specify the name of the managed instance in the following
+ // format: mi-ID_number. For example, mi-1a2b3c4d5e6f.
+ //
+ // ResourceId is a required field
+ ResourceId *string `type:"string" required:"true"`
+
+ // The type of resource from which you want to remove a tag.
+ //
+ // The ManagedInstance type for this API action is only for on-premises managed
+ // instances. You must specify the name of the managed instance in the following
+ // format: mi-ID_number. For example, mi-1a2b3c4d5e6f.
+ //
+ // ResourceType is a required field
+ ResourceType *string `type:"string" required:"true" enum:"ResourceTypeForTagging"`
+
+ // Tag keys that you want to remove from the specified resource.
+ //
+ // TagKeys is a required field
+ TagKeys []*string `type:"list" required:"true"`
+}
+
+// String returns the string representation
+func (s RemoveTagsFromResourceInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RemoveTagsFromResourceInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *RemoveTagsFromResourceInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "RemoveTagsFromResourceInput"}
+ if s.ResourceId == nil {
+ invalidParams.Add(request.NewErrParamRequired("ResourceId"))
+ }
+ if s.ResourceType == nil {
+ invalidParams.Add(request.NewErrParamRequired("ResourceType"))
+ }
+ if s.TagKeys == nil {
+ invalidParams.Add(request.NewErrParamRequired("TagKeys"))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetResourceId sets the ResourceId field's value.
+func (s *RemoveTagsFromResourceInput) SetResourceId(v string) *RemoveTagsFromResourceInput {
+ s.ResourceId = &v
+ return s
+}
+
+// SetResourceType sets the ResourceType field's value.
+func (s *RemoveTagsFromResourceInput) SetResourceType(v string) *RemoveTagsFromResourceInput {
+ s.ResourceType = &v
+ return s
+}
+
+// SetTagKeys sets the TagKeys field's value.
+func (s *RemoveTagsFromResourceInput) SetTagKeys(v []*string) *RemoveTagsFromResourceInput {
+ s.TagKeys = v
+ return s
+}
+
+type RemoveTagsFromResourceOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s RemoveTagsFromResourceOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RemoveTagsFromResourceOutput) GoString() string {
+ return s.String()
+}
+
+// Information about targets that resolved during the Automation execution.
+type ResolvedTargets struct {
+ _ struct{} `type:"structure"`
+
+ // A list of parameter values sent to targets that resolved during the Automation
+ // execution.
+ ParameterValues []*string `type:"list"`
+
+ // A boolean value indicating whether the resolved target list is truncated.
+ Truncated *bool `type:"boolean"` +} + +// String returns the string representation +func (s ResolvedTargets) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResolvedTargets) GoString() string { + return s.String() +} + +// SetParameterValues sets the ParameterValues field's value. +func (s *ResolvedTargets) SetParameterValues(v []*string) *ResolvedTargets { + s.ParameterValues = v + return s +} + +// SetTruncated sets the Truncated field's value. +func (s *ResolvedTargets) SetTruncated(v bool) *ResolvedTargets { + s.Truncated = &v + return s +} + +// Compliance summary information for a specific resource. +type ResourceComplianceSummaryItem struct { + _ struct{} `type:"structure"` + + // The compliance type. + ComplianceType *string `min:"1" type:"string"` + + // A list of items that are compliant for the resource. + CompliantSummary *CompliantSummary `type:"structure"` + + // Information about the execution. + ExecutionSummary *ComplianceExecutionSummary `type:"structure"` + + // A list of items that aren't compliant for the resource. + NonCompliantSummary *NonCompliantSummary `type:"structure"` + + // The highest severity item found for the resource. The resource is compliant + // for this item. + OverallSeverity *string `type:"string" enum:"ComplianceSeverity"` + + // The resource ID. + ResourceId *string `min:"1" type:"string"` + + // The resource type. + ResourceType *string `min:"1" type:"string"` + + // The compliance status for the resource. + Status *string `type:"string" enum:"ComplianceStatus"` +} + +// String returns the string representation +func (s ResourceComplianceSummaryItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceComplianceSummaryItem) GoString() string { + return s.String() +} + +// SetComplianceType sets the ComplianceType field's value. +func (s *ResourceComplianceSummaryItem) SetComplianceType(v string) *ResourceComplianceSummaryItem { + s.ComplianceType = &v + return s +} + +// SetCompliantSummary sets the CompliantSummary field's value. +func (s *ResourceComplianceSummaryItem) SetCompliantSummary(v *CompliantSummary) *ResourceComplianceSummaryItem { + s.CompliantSummary = v + return s +} + +// SetExecutionSummary sets the ExecutionSummary field's value. +func (s *ResourceComplianceSummaryItem) SetExecutionSummary(v *ComplianceExecutionSummary) *ResourceComplianceSummaryItem { + s.ExecutionSummary = v + return s +} + +// SetNonCompliantSummary sets the NonCompliantSummary field's value. +func (s *ResourceComplianceSummaryItem) SetNonCompliantSummary(v *NonCompliantSummary) *ResourceComplianceSummaryItem { + s.NonCompliantSummary = v + return s +} + +// SetOverallSeverity sets the OverallSeverity field's value. +func (s *ResourceComplianceSummaryItem) SetOverallSeverity(v string) *ResourceComplianceSummaryItem { + s.OverallSeverity = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ResourceComplianceSummaryItem) SetResourceId(v string) *ResourceComplianceSummaryItem { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ResourceComplianceSummaryItem) SetResourceType(v string) *ResourceComplianceSummaryItem { + s.ResourceType = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *ResourceComplianceSummaryItem) SetStatus(v string) *ResourceComplianceSummaryItem { + s.Status = &v + return s +} + +// Information about a Resource Data Sync configuration, including its current +// status and last successful sync. +type ResourceDataSyncItem struct { + _ struct{} `type:"structure"` + + // The status reported by the last sync. + LastStatus *string `type:"string" enum:"LastResourceDataSyncStatus"` + + // The last time the sync operations returned a status of SUCCESSFUL (UTC). + LastSuccessfulSyncTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The status message details reported by the last sync. + LastSyncStatusMessage *string `type:"string"` + + // The last time the configuration attempted to sync (UTC). + LastSyncTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // Configuration information for the target Amazon S3 bucket. + S3Destination *ResourceDataSyncS3Destination `type:"structure"` + + // The date and time the configuration was created (UTC). + SyncCreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The name of the Resource Data Sync. + SyncName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ResourceDataSyncItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceDataSyncItem) GoString() string { + return s.String() +} + +// SetLastStatus sets the LastStatus field's value. +func (s *ResourceDataSyncItem) SetLastStatus(v string) *ResourceDataSyncItem { + s.LastStatus = &v + return s +} + +// SetLastSuccessfulSyncTime sets the LastSuccessfulSyncTime field's value. +func (s *ResourceDataSyncItem) SetLastSuccessfulSyncTime(v time.Time) *ResourceDataSyncItem { + s.LastSuccessfulSyncTime = &v + return s +} + +// SetLastSyncStatusMessage sets the LastSyncStatusMessage field's value. +func (s *ResourceDataSyncItem) SetLastSyncStatusMessage(v string) *ResourceDataSyncItem { + s.LastSyncStatusMessage = &v + return s +} + +// SetLastSyncTime sets the LastSyncTime field's value. +func (s *ResourceDataSyncItem) SetLastSyncTime(v time.Time) *ResourceDataSyncItem { + s.LastSyncTime = &v + return s +} + +// SetS3Destination sets the S3Destination field's value. +func (s *ResourceDataSyncItem) SetS3Destination(v *ResourceDataSyncS3Destination) *ResourceDataSyncItem { + s.S3Destination = v + return s +} + +// SetSyncCreatedTime sets the SyncCreatedTime field's value. +func (s *ResourceDataSyncItem) SetSyncCreatedTime(v time.Time) *ResourceDataSyncItem { + s.SyncCreatedTime = &v + return s +} + +// SetSyncName sets the SyncName field's value. +func (s *ResourceDataSyncItem) SetSyncName(v string) *ResourceDataSyncItem { + s.SyncName = &v + return s +} + +// Information about the target Amazon S3 bucket for the Resource Data Sync. +type ResourceDataSyncS3Destination struct { + _ struct{} `type:"structure"` + + // The ARN of an encryption key for a destination in Amazon S3. Must belong + // to the same region as the destination Amazon S3 bucket. + AWSKMSKeyARN *string `min:"1" type:"string"` + + // The name of the Amazon S3 bucket where the aggregated data is stored. + // + // BucketName is a required field + BucketName *string `min:"1" type:"string" required:"true"` + + // An Amazon S3 prefix for the bucket. + Prefix *string `min:"1" type:"string"` + + // The AWS Region with the Amazon S3 bucket targeted by the Resource Data Sync. 
+ // + // Region is a required field + Region *string `min:"1" type:"string" required:"true"` + + // A supported sync format. The following format is currently supported: JsonSerDe + // + // SyncFormat is a required field + SyncFormat *string `type:"string" required:"true" enum:"ResourceDataSyncS3Format"` +} + +// String returns the string representation +func (s ResourceDataSyncS3Destination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceDataSyncS3Destination) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceDataSyncS3Destination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceDataSyncS3Destination"} + if s.AWSKMSKeyARN != nil && len(*s.AWSKMSKeyARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AWSKMSKeyARN", 1)) + } + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.BucketName != nil && len(*s.BucketName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BucketName", 1)) + } + if s.Prefix != nil && len(*s.Prefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Prefix", 1)) + } + if s.Region == nil { + invalidParams.Add(request.NewErrParamRequired("Region")) + } + if s.Region != nil && len(*s.Region) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Region", 1)) + } + if s.SyncFormat == nil { + invalidParams.Add(request.NewErrParamRequired("SyncFormat")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAWSKMSKeyARN sets the AWSKMSKeyARN field's value. +func (s *ResourceDataSyncS3Destination) SetAWSKMSKeyARN(v string) *ResourceDataSyncS3Destination { + s.AWSKMSKeyARN = &v + return s +} + +// SetBucketName sets the BucketName field's value. +func (s *ResourceDataSyncS3Destination) SetBucketName(v string) *ResourceDataSyncS3Destination { + s.BucketName = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ResourceDataSyncS3Destination) SetPrefix(v string) *ResourceDataSyncS3Destination { + s.Prefix = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *ResourceDataSyncS3Destination) SetRegion(v string) *ResourceDataSyncS3Destination { + s.Region = &v + return s +} + +// SetSyncFormat sets the SyncFormat field's value. +func (s *ResourceDataSyncS3Destination) SetSyncFormat(v string) *ResourceDataSyncS3Destination { + s.SyncFormat = &v + return s +} + +// The inventory item result attribute. +type ResultAttribute struct { + _ struct{} `type:"structure"` + + // Name of the inventory item type. Valid value: AWS:InstanceInformation. Default + // Value: AWS:InstanceInformation. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ResultAttribute) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResultAttribute) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ResultAttribute) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResultAttribute"} + if s.TypeName == nil { + invalidParams.Add(request.NewErrParamRequired("TypeName")) + } + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTypeName sets the TypeName field's value. +func (s *ResultAttribute) SetTypeName(v string) *ResultAttribute { + s.TypeName = &v + return s +} + +// An Amazon S3 bucket where you want to store the results of this request. +type S3OutputLocation struct { + _ struct{} `type:"structure"` + + // The name of the Amazon S3 bucket. + OutputS3BucketName *string `min:"3" type:"string"` + + // The Amazon S3 bucket subfolder. + OutputS3KeyPrefix *string `type:"string"` + + // (Deprecated) You can no longer specify this parameter. The system ignores + // it. Instead, Systems Manager automatically determines the Amazon S3 bucket + // region. + OutputS3Region *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s S3OutputLocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3OutputLocation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3OutputLocation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3OutputLocation"} + if s.OutputS3BucketName != nil && len(*s.OutputS3BucketName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("OutputS3BucketName", 3)) + } + if s.OutputS3Region != nil && len(*s.OutputS3Region) < 3 { + invalidParams.Add(request.NewErrParamMinLen("OutputS3Region", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOutputS3BucketName sets the OutputS3BucketName field's value. +func (s *S3OutputLocation) SetOutputS3BucketName(v string) *S3OutputLocation { + s.OutputS3BucketName = &v + return s +} + +// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value. +func (s *S3OutputLocation) SetOutputS3KeyPrefix(v string) *S3OutputLocation { + s.OutputS3KeyPrefix = &v + return s +} + +// SetOutputS3Region sets the OutputS3Region field's value. +func (s *S3OutputLocation) SetOutputS3Region(v string) *S3OutputLocation { + s.OutputS3Region = &v + return s +} + +// A URL for the Amazon S3 bucket where you want to store the results of this +// request. +type S3OutputUrl struct { + _ struct{} `type:"structure"` + + // A URL for an Amazon S3 bucket where you want to store the results of this + // request. + OutputUrl *string `type:"string"` +} + +// String returns the string representation +func (s S3OutputUrl) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3OutputUrl) GoString() string { + return s.String() +} + +// SetOutputUrl sets the OutputUrl field's value. +func (s *S3OutputUrl) SetOutputUrl(v string) *S3OutputUrl { + s.OutputUrl = &v + return s +} + +type SendAutomationSignalInput struct { + _ struct{} `type:"structure"` + + // The unique identifier for an existing Automation execution that you want + // to send the signal to. + // + // AutomationExecutionId is a required field + AutomationExecutionId *string `min:"36" type:"string" required:"true"` + + // The data sent with the signal. The data schema depends on the type of signal + // used in the request. 
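+ //
+ // A minimal usage sketch for this input type (illustrative only; the execution
+ // ID is a placeholder and error handling is omitted), assuming a caller-side
+ // client built with ssm.New and the aws/session helper packages:
+ //
+ //    svc := ssm.New(session.Must(session.NewSession()))
+ //    _, err := svc.SendAutomationSignal(&ssm.SendAutomationSignalInput{
+ //        AutomationExecutionId: aws.String("00000000-0000-0000-0000-000000000000"),
+ //        SignalType:            aws.String("Approve"),
+ //    })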
+ Payload map[string][]*string `min:"1" type:"map"` + + // The type of signal. Valid signal types include the following: Approve and + // Reject + // + // SignalType is a required field + SignalType *string `type:"string" required:"true" enum:"SignalType"` +} + +// String returns the string representation +func (s SendAutomationSignalInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendAutomationSignalInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SendAutomationSignalInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SendAutomationSignalInput"} + if s.AutomationExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("AutomationExecutionId")) + } + if s.AutomationExecutionId != nil && len(*s.AutomationExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("AutomationExecutionId", 36)) + } + if s.Payload != nil && len(s.Payload) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Payload", 1)) + } + if s.SignalType == nil { + invalidParams.Add(request.NewErrParamRequired("SignalType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutomationExecutionId sets the AutomationExecutionId field's value. +func (s *SendAutomationSignalInput) SetAutomationExecutionId(v string) *SendAutomationSignalInput { + s.AutomationExecutionId = &v + return s +} + +// SetPayload sets the Payload field's value. +func (s *SendAutomationSignalInput) SetPayload(v map[string][]*string) *SendAutomationSignalInput { + s.Payload = v + return s +} + +// SetSignalType sets the SignalType field's value. +func (s *SendAutomationSignalInput) SetSignalType(v string) *SendAutomationSignalInput { + s.SignalType = &v + return s +} + +type SendAutomationSignalOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SendAutomationSignalOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendAutomationSignalOutput) GoString() string { + return s.String() +} + +type SendCommandInput struct { + _ struct{} `type:"structure"` + + // User-specified information about the command, such as a brief description + // of what the command should do. + Comment *string `type:"string"` + + // The Sha256 or Sha1 hash created by the system when the document was created. + // + // Sha1 hashes have been deprecated. + DocumentHash *string `type:"string"` + + // Sha256 or Sha1. + // + // Sha1 hashes have been deprecated. + DocumentHashType *string `type:"string" enum:"DocumentHashType"` + + // Required. The name of the Systems Manager document to execute. This can be + // a public document or a custom document. + // + // DocumentName is a required field + DocumentName *string `type:"string" required:"true"` + + // The SSM document version to use in the request. You can specify Default, + // Latest, or a specific version number. + DocumentVersion *string `type:"string"` + + // The instance IDs where the command should execute. You can specify a maximum + // of 50 IDs. If you prefer not to list individual instance IDs, you can instead + // send commands to a fleet of instances using the Targets parameter, which + // accepts EC2 tags. 
For more information about how to use Targets, see Sending + // Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html). + InstanceIds []*string `type:"list"` + + // (Optional) The maximum number of instances that are allowed to execute the + // command at the same time. You can specify a number such as 10 or a percentage + // such as 10%. The default value is 50. For more information about how to use + // MaxConcurrency, see Using Concurrency Controls (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-velocity.html). + MaxConcurrency *string `min:"1" type:"string"` + + // The maximum number of errors allowed without the command failing. When the + // command fails one more time beyond the value of MaxErrors, the systems stops + // sending the command to additional targets. You can specify a number like + // 10 or a percentage like 10%. The default value is 0. For more information + // about how to use MaxErrors, see Using Error Controls (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-maxerrors.html). + MaxErrors *string `min:"1" type:"string"` + + // Configurations for sending notifications. + NotificationConfig *NotificationConfig `type:"structure"` + + // The name of the S3 bucket where command execution responses should be stored. + OutputS3BucketName *string `min:"3" type:"string"` + + // The directory structure within the S3 bucket where the responses should be + // stored. + OutputS3KeyPrefix *string `type:"string"` + + // (Deprecated) You can no longer specify this parameter. The system ignores + // it. Instead, Systems Manager automatically determines the Amazon S3 bucket + // region. + OutputS3Region *string `min:"3" type:"string"` + + // The required and optional parameters specified in the document being executed. + Parameters map[string][]*string `type:"map"` + + // The IAM role that Systems Manager uses to send notifications. + ServiceRoleArn *string `type:"string"` + + // (Optional) An array of search criteria that targets instances using a Key,Value + // combination that you specify. Targets is required if you don't provide one + // or more instance IDs in the call. For more information about how to use Targets, + // see Sending Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html). + Targets []*Target `type:"list"` + + // If this time is reached and the command has not already started executing, + // it will not run. + TimeoutSeconds *int64 `min:"30" type:"integer"` +} + +// String returns the string representation +func (s SendCommandInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendCommandInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
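+//
+// A minimal caller-side usage sketch for SendCommandInput (illustrative only;
+// the document name, tag key and value, and command are placeholders, and full
+// error handling is omitted), assuming a client built with ssm.New and the
+// aws/session helper packages:
+//
+//    sess := session.Must(session.NewSession())
+//    svc := ssm.New(sess)
+//    out, err := svc.SendCommand(&ssm.SendCommandInput{
+//        DocumentName: aws.String("AWS-RunShellScript"),
+//        Targets: []*ssm.Target{{
+//            Key:    aws.String("tag:ServerRole"),
+//            Values: []*string{aws.String("WebServer")},
+//        }},
+//        Parameters: map[string][]*string{
+//            "commands": {aws.String("uptime")},
+//        },
+//        MaxConcurrency: aws.String("10%"),
+//        MaxErrors:      aws.String("0"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.Command.CommandId))
+//    }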
+func (s *SendCommandInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SendCommandInput"} + if s.DocumentName == nil { + invalidParams.Add(request.NewErrParamRequired("DocumentName")) + } + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } + if s.OutputS3BucketName != nil && len(*s.OutputS3BucketName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("OutputS3BucketName", 3)) + } + if s.OutputS3Region != nil && len(*s.OutputS3Region) < 3 { + invalidParams.Add(request.NewErrParamMinLen("OutputS3Region", 3)) + } + if s.TimeoutSeconds != nil && *s.TimeoutSeconds < 30 { + invalidParams.Add(request.NewErrParamMinValue("TimeoutSeconds", 30)) + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetComment sets the Comment field's value. +func (s *SendCommandInput) SetComment(v string) *SendCommandInput { + s.Comment = &v + return s +} + +// SetDocumentHash sets the DocumentHash field's value. +func (s *SendCommandInput) SetDocumentHash(v string) *SendCommandInput { + s.DocumentHash = &v + return s +} + +// SetDocumentHashType sets the DocumentHashType field's value. +func (s *SendCommandInput) SetDocumentHashType(v string) *SendCommandInput { + s.DocumentHashType = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *SendCommandInput) SetDocumentName(v string) *SendCommandInput { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *SendCommandInput) SetDocumentVersion(v string) *SendCommandInput { + s.DocumentVersion = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *SendCommandInput) SetInstanceIds(v []*string) *SendCommandInput { + s.InstanceIds = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *SendCommandInput) SetMaxConcurrency(v string) *SendCommandInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *SendCommandInput) SetMaxErrors(v string) *SendCommandInput { + s.MaxErrors = &v + return s +} + +// SetNotificationConfig sets the NotificationConfig field's value. +func (s *SendCommandInput) SetNotificationConfig(v *NotificationConfig) *SendCommandInput { + s.NotificationConfig = v + return s +} + +// SetOutputS3BucketName sets the OutputS3BucketName field's value. +func (s *SendCommandInput) SetOutputS3BucketName(v string) *SendCommandInput { + s.OutputS3BucketName = &v + return s +} + +// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value. +func (s *SendCommandInput) SetOutputS3KeyPrefix(v string) *SendCommandInput { + s.OutputS3KeyPrefix = &v + return s +} + +// SetOutputS3Region sets the OutputS3Region field's value. +func (s *SendCommandInput) SetOutputS3Region(v string) *SendCommandInput { + s.OutputS3Region = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *SendCommandInput) SetParameters(v map[string][]*string) *SendCommandInput { + s.Parameters = v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. 
+func (s *SendCommandInput) SetServiceRoleArn(v string) *SendCommandInput {
+ s.ServiceRoleArn = &v
+ return s
+}
+
+// SetTargets sets the Targets field's value.
+func (s *SendCommandInput) SetTargets(v []*Target) *SendCommandInput {
+ s.Targets = v
+ return s
+}
+
+// SetTimeoutSeconds sets the TimeoutSeconds field's value.
+func (s *SendCommandInput) SetTimeoutSeconds(v int64) *SendCommandInput {
+ s.TimeoutSeconds = &v
+ return s
+}
+
+type SendCommandOutput struct {
+ _ struct{} `type:"structure"`
+
+ // The request as it was received by Systems Manager. Also provides the command
+ // ID, which can be used for future references to this request.
+ Command *Command `type:"structure"`
+}
+
+// String returns the string representation
+func (s SendCommandOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s SendCommandOutput) GoString() string {
+ return s.String()
+}
+
+// SetCommand sets the Command field's value.
+func (s *SendCommandOutput) SetCommand(v *Command) *SendCommandOutput {
+ s.Command = v
+ return s
+}
+
+// The number of managed instances found for each patch severity level defined
+// in the request filter.
+type SeveritySummary struct {
+ _ struct{} `type:"structure"`
+
+ // The total number of resources or compliance items that have a severity level
+ // of critical. Critical severity is determined by the organization that published
+ // the compliance items.
+ CriticalCount *int64 `type:"integer"`
+
+ // The total number of resources or compliance items that have a severity level
+ // of high. High severity is determined by the organization that published the
+ // compliance items.
+ HighCount *int64 `type:"integer"`
+
+ // The total number of resources or compliance items that have a severity level
+ // of informational. Informational severity is determined by the organization
+ // that published the compliance items.
+ InformationalCount *int64 `type:"integer"`
+
+ // The total number of resources or compliance items that have a severity level
+ // of low. Low severity is determined by the organization that published the
+ // compliance items.
+ LowCount *int64 `type:"integer"`
+
+ // The total number of resources or compliance items that have a severity level
+ // of medium. Medium severity is determined by the organization that published
+ // the compliance items.
+ MediumCount *int64 `type:"integer"`
+
+ // The total number of resources or compliance items that have a severity level
+ // of unspecified. Unspecified severity is determined by the organization that
+ // published the compliance items.
+ UnspecifiedCount *int64 `type:"integer"`
+}
+
+// String returns the string representation
+func (s SeveritySummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s SeveritySummary) GoString() string {
+ return s.String()
+}
+
+// SetCriticalCount sets the CriticalCount field's value.
+func (s *SeveritySummary) SetCriticalCount(v int64) *SeveritySummary {
+ s.CriticalCount = &v
+ return s
+}
+
+// SetHighCount sets the HighCount field's value.
+func (s *SeveritySummary) SetHighCount(v int64) *SeveritySummary {
+ s.HighCount = &v
+ return s
+}
+
+// SetInformationalCount sets the InformationalCount field's value.
+func (s *SeveritySummary) SetInformationalCount(v int64) *SeveritySummary {
+ s.InformationalCount = &v
+ return s
+}
+
+// SetLowCount sets the LowCount field's value.
+func (s *SeveritySummary) SetLowCount(v int64) *SeveritySummary { + s.LowCount = &v + return s +} + +// SetMediumCount sets the MediumCount field's value. +func (s *SeveritySummary) SetMediumCount(v int64) *SeveritySummary { + s.MediumCount = &v + return s +} + +// SetUnspecifiedCount sets the UnspecifiedCount field's value. +func (s *SeveritySummary) SetUnspecifiedCount(v int64) *SeveritySummary { + s.UnspecifiedCount = &v + return s +} + +type StartAutomationExecutionInput struct { + _ struct{} `type:"structure"` + + // User-provided idempotency token. The token must be unique, is case insensitive, + // enforces the UUID format, and can't be reused. + ClientToken *string `min:"36" type:"string"` + + // The name of the Automation document to use for this execution. + // + // DocumentName is a required field + DocumentName *string `type:"string" required:"true"` + + // The version of the Automation document to use for this execution. + DocumentVersion *string `type:"string"` + + // The maximum number of targets allowed to run this task in parallel. You can + // specify a number, such as 10, or a percentage, such as 10%. The default value + // is 10. + MaxConcurrency *string `min:"1" type:"string"` + + // The number of errors that are allowed before the system stops running the + // automation on additional targets. You can specify either an absolute number + // of errors, for example 10, or a percentage of the target set, for example + // 10%. If you specify 3, for example, the system stops running the automation + // when the fourth error is received. If you specify 0, then the system stops + // running the automation on additional targets after the first error result + // is returned. If you run an automation on 50 resources and set max-errors + // to 10%, then the system stops running the automation on additional targets + // when the sixth error is received. + // + // Executions that are already running an automation when max-errors is reached + // are allowed to complete, but some of these executions may fail as well. If + // you need to ensure that there won't be more than max-errors failed executions, + // set max-concurrency to 1 so the executions proceed one at a time. + MaxErrors *string `min:"1" type:"string"` + + // The execution mode of the automation. Valid modes include the following: + // Auto and Interactive. The default mode is Auto. + Mode *string `type:"string" enum:"ExecutionMode"` + + // A key-value map of execution parameters, which match the declared parameters + // in the Automation document. + Parameters map[string][]*string `min:"1" type:"map"` + + // The name of the parameter used as the target resource for the rate-controlled + // execution. Required if you specify Targets. + TargetParameterName *string `min:"1" type:"string"` + + // A key-value mapping to target resources. Required if you specify TargetParameterName. + Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s StartAutomationExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartAutomationExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
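+//
+// A minimal rate-controlled usage sketch for StartAutomationExecutionInput
+// (illustrative only; the document name, tag key and value, and target parameter
+// name are placeholders, and error handling is omitted), assuming a caller-side
+// client built with ssm.New and the aws/session helper packages:
+//
+//    svc := ssm.New(session.Must(session.NewSession()))
+//    out, err := svc.StartAutomationExecution(&ssm.StartAutomationExecutionInput{
+//        DocumentName:        aws.String("AWS-RestartEC2Instance"),
+//        TargetParameterName: aws.String("InstanceId"),
+//        Targets: []*ssm.Target{{
+//            Key:    aws.String("tag:Environment"),
+//            Values: []*string{aws.String("Staging")},
+//        }},
+//        MaxConcurrency: aws.String("10%"),
+//        MaxErrors:      aws.String("1"),
+//    })
+//    // When err is nil, out.AutomationExecutionId identifies the new execution.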
+func (s *StartAutomationExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartAutomationExecutionInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 36 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 36)) + } + if s.DocumentName == nil { + invalidParams.Add(request.NewErrParamRequired("DocumentName")) + } + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } + if s.Parameters != nil && len(s.Parameters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Parameters", 1)) + } + if s.TargetParameterName != nil && len(*s.TargetParameterName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetParameterName", 1)) + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *StartAutomationExecutionInput) SetClientToken(v string) *StartAutomationExecutionInput { + s.ClientToken = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *StartAutomationExecutionInput) SetDocumentName(v string) *StartAutomationExecutionInput { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *StartAutomationExecutionInput) SetDocumentVersion(v string) *StartAutomationExecutionInput { + s.DocumentVersion = &v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *StartAutomationExecutionInput) SetMaxConcurrency(v string) *StartAutomationExecutionInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *StartAutomationExecutionInput) SetMaxErrors(v string) *StartAutomationExecutionInput { + s.MaxErrors = &v + return s +} + +// SetMode sets the Mode field's value. +func (s *StartAutomationExecutionInput) SetMode(v string) *StartAutomationExecutionInput { + s.Mode = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *StartAutomationExecutionInput) SetParameters(v map[string][]*string) *StartAutomationExecutionInput { + s.Parameters = v + return s +} + +// SetTargetParameterName sets the TargetParameterName field's value. +func (s *StartAutomationExecutionInput) SetTargetParameterName(v string) *StartAutomationExecutionInput { + s.TargetParameterName = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *StartAutomationExecutionInput) SetTargets(v []*Target) *StartAutomationExecutionInput { + s.Targets = v + return s +} + +type StartAutomationExecutionOutput struct { + _ struct{} `type:"structure"` + + // The unique ID of a newly scheduled automation execution. + AutomationExecutionId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s StartAutomationExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartAutomationExecutionOutput) GoString() string { + return s.String() +} + +// SetAutomationExecutionId sets the AutomationExecutionId field's value. 
+func (s *StartAutomationExecutionOutput) SetAutomationExecutionId(v string) *StartAutomationExecutionOutput {
+ s.AutomationExecutionId = &v
+ return s
+}
+
+// Detailed information about the execution state of an Automation step.
+type StepExecution struct {
+ _ struct{} `type:"structure"`
+
+ // The action this step performs. The action determines the behavior of the
+ // step.
+ Action *string `type:"string"`
+
+ // If a step has finished execution, this contains the time the execution ended.
+ // If the step has not yet concluded, this field is not populated.
+ ExecutionEndTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // If a step has begun execution, this contains the time the step started. If
+ // the step is in Pending status, this field is not populated.
+ ExecutionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // Information about the Automation failure.
+ FailureDetails *FailureDetails `type:"structure"`
+
+ // If a step failed, this message explains why the execution failed.
+ FailureMessage *string `type:"string"`
+
+ // Fully-resolved values passed into the step before execution.
+ Inputs map[string]*string `type:"map"`
+
+ // The maximum number of tries to run the action of the step. The default value
+ // is 1.
+ MaxAttempts *int64 `type:"integer"`
+
+ // The action to take if the step fails. The default value is Abort.
+ OnFailure *string `type:"string"`
+
+ // Returned values from the execution of the step.
+ Outputs map[string][]*string `min:"1" type:"map"`
+
+ // A user-specified list of parameters to override when executing a step.
+ OverriddenParameters map[string][]*string `min:"1" type:"map"`
+
+ // A message associated with the response code for an execution.
+ Response *string `type:"string"`
+
+ // The response code returned by the execution of the step.
+ ResponseCode *string `type:"string"`
+
+ // The unique ID of a step execution.
+ StepExecutionId *string `type:"string"`
+
+ // The name of this execution step.
+ StepName *string `type:"string"`
+
+ // The execution status for this step. Valid values include: Pending, InProgress,
+ // Success, Cancelled, Failed, and TimedOut.
+ StepStatus *string `type:"string" enum:"AutomationExecutionStatus"`
+
+ // The timeout, in seconds, of the step.
+ TimeoutSeconds *int64 `type:"long"`
+}
+
+// String returns the string representation
+func (s StepExecution) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StepExecution) GoString() string {
+ return s.String()
+}
+
+// SetAction sets the Action field's value.
+func (s *StepExecution) SetAction(v string) *StepExecution {
+ s.Action = &v
+ return s
+}
+
+// SetExecutionEndTime sets the ExecutionEndTime field's value.
+func (s *StepExecution) SetExecutionEndTime(v time.Time) *StepExecution {
+ s.ExecutionEndTime = &v
+ return s
+}
+
+// SetExecutionStartTime sets the ExecutionStartTime field's value.
+func (s *StepExecution) SetExecutionStartTime(v time.Time) *StepExecution {
+ s.ExecutionStartTime = &v
+ return s
+}
+
+// SetFailureDetails sets the FailureDetails field's value.
+func (s *StepExecution) SetFailureDetails(v *FailureDetails) *StepExecution {
+ s.FailureDetails = v
+ return s
+}
+
+// SetFailureMessage sets the FailureMessage field's value.
+func (s *StepExecution) SetFailureMessage(v string) *StepExecution {
+ s.FailureMessage = &v
+ return s
+}
+
+// SetInputs sets the Inputs field's value.
+func (s *StepExecution) SetInputs(v map[string]*string) *StepExecution { + s.Inputs = v + return s +} + +// SetMaxAttempts sets the MaxAttempts field's value. +func (s *StepExecution) SetMaxAttempts(v int64) *StepExecution { + s.MaxAttempts = &v + return s +} + +// SetOnFailure sets the OnFailure field's value. +func (s *StepExecution) SetOnFailure(v string) *StepExecution { + s.OnFailure = &v + return s +} + +// SetOutputs sets the Outputs field's value. +func (s *StepExecution) SetOutputs(v map[string][]*string) *StepExecution { + s.Outputs = v + return s +} + +// SetOverriddenParameters sets the OverriddenParameters field's value. +func (s *StepExecution) SetOverriddenParameters(v map[string][]*string) *StepExecution { + s.OverriddenParameters = v + return s +} + +// SetResponse sets the Response field's value. +func (s *StepExecution) SetResponse(v string) *StepExecution { + s.Response = &v + return s +} + +// SetResponseCode sets the ResponseCode field's value. +func (s *StepExecution) SetResponseCode(v string) *StepExecution { + s.ResponseCode = &v + return s +} + +// SetStepExecutionId sets the StepExecutionId field's value. +func (s *StepExecution) SetStepExecutionId(v string) *StepExecution { + s.StepExecutionId = &v + return s +} + +// SetStepName sets the StepName field's value. +func (s *StepExecution) SetStepName(v string) *StepExecution { + s.StepName = &v + return s +} + +// SetStepStatus sets the StepStatus field's value. +func (s *StepExecution) SetStepStatus(v string) *StepExecution { + s.StepStatus = &v + return s +} + +// SetTimeoutSeconds sets the TimeoutSeconds field's value. +func (s *StepExecution) SetTimeoutSeconds(v int64) *StepExecution { + s.TimeoutSeconds = &v + return s +} + +// A filter to limit the amount of step execution information returned by the +// call. +type StepExecutionFilter struct { + _ struct{} `type:"structure"` + + // One or more keys to limit the results. Valid filter keys include the following: + // StepName, Action, StepExecutionId, StepExecutionStatus, StartTimeBefore, + // StartTimeAfter. + // + // Key is a required field + Key *string `type:"string" required:"true" enum:"StepExecutionFilterKey"` + + // The values of the filter key. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s StepExecutionFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StepExecutionFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StepExecutionFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StepExecutionFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *StepExecutionFilter) SetKey(v string) *StepExecutionFilter { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *StepExecutionFilter) SetValues(v []*string) *StepExecutionFilter { + s.Values = v + return s +} + +type StopAutomationExecutionInput struct { + _ struct{} `type:"structure"` + + // The execution ID of the Automation to stop. 
+ // + // AutomationExecutionId is a required field + AutomationExecutionId *string `min:"36" type:"string" required:"true"` + + // The stop request type. Valid types include the following: Cancel and Complete. + // The default type is Cancel. + Type *string `type:"string" enum:"StopType"` +} + +// String returns the string representation +func (s StopAutomationExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopAutomationExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopAutomationExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopAutomationExecutionInput"} + if s.AutomationExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("AutomationExecutionId")) + } + if s.AutomationExecutionId != nil && len(*s.AutomationExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("AutomationExecutionId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutomationExecutionId sets the AutomationExecutionId field's value. +func (s *StopAutomationExecutionInput) SetAutomationExecutionId(v string) *StopAutomationExecutionInput { + s.AutomationExecutionId = &v + return s +} + +// SetType sets the Type field's value. +func (s *StopAutomationExecutionInput) SetType(v string) *StopAutomationExecutionInput { + s.Type = &v + return s +} + +type StopAutomationExecutionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StopAutomationExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopAutomationExecutionOutput) GoString() string { + return s.String() +} + +// Metadata that you assign to your AWS resources. Tags enable you to categorize +// your resources in different ways, for example, by purpose, owner, or environment. +// In Systems Manager, you can apply tags to documents, managed instances, Maintenance +// Windows, Parameter Store parameters, and patch baselines. +type Tag struct { + _ struct{} `type:"structure"` + + // The name of the tag. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The value of the tag. + // + // Value is a required field + Value *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *Tag) SetValue(v string) *Tag {
+ s.Value = &v
+ return s
+}
+
+// An array of search criteria that targets instances using a Key,Value combination
+// that you specify. Targets is required if you don't provide one or more instance
+// IDs in the call.
+type Target struct {
+ _ struct{} `type:"structure"`
+
+ // User-defined criteria for sending commands that target instances that meet
+ // the criteria. Key can be tag:<Amazon EC2 tag> or InstanceIds. For more information
+ // about how to send commands that target instances using Key,Value parameters,
+ // see Executing a Command Using Systems Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html).
+ Key *string `min:"1" type:"string"`
+
+ // User-defined criteria that maps to Key. For example, if you specified tag:ServerRole,
+ // you could specify value:WebServer to execute a command on instances that
+ // include Amazon EC2 tags of ServerRole,WebServer. For more information about
+ // how to send commands that target instances using Key,Value parameters, see
+ // Executing a Command Using Systems Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html).
+ Values []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s Target) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Target) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *Target) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "Target"}
+ if s.Key != nil && len(*s.Key) < 1 {
+ invalidParams.Add(request.NewErrParamMinLen("Key", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetKey sets the Key field's value.
+func (s *Target) SetKey(v string) *Target {
+ s.Key = &v
+ return s
+}
+
+// SetValues sets the Values field's value.
+func (s *Target) SetValues(v []*string) *Target {
+ s.Values = v
+ return s
+}
+
+type UpdateAssociationInput struct {
+ _ struct{} `type:"structure"`
+
+ // The ID of the association you want to update.
+ //
+ // AssociationId is a required field
+ AssociationId *string `type:"string" required:"true"`
+
+ // The name of the association that you want to update.
+ AssociationName *string `type:"string"`
+
+ // This parameter is provided for concurrency control purposes. You must specify
+ // the latest association version in the service. If you want to ensure that
+ // this request succeeds, either specify $LATEST, or omit this parameter.
+ AssociationVersion *string `type:"string"`
+
+ // The document version you want to update for the association.
+ DocumentVersion *string `type:"string"`
+
+ // The name of the association document.
+ Name *string `type:"string"`
+
+ // An Amazon S3 bucket where you want to store the results of this request.
+ OutputLocation *InstanceAssociationOutputLocation `type:"structure"`
+
+ // The parameters you want to update for the association. If you create a parameter
+ // using Parameter Store, you can reference the parameter using {{ssm:parameter-name}}
+ Parameters map[string][]*string `type:"map"`
+
+ // The cron expression used to schedule the association that you want to update.
+ ScheduleExpression *string `min:"1" type:"string"`
+
+ // The targets of the association.
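+ //
+ // A minimal caller-side sketch of a tag-based target list (illustrative tag
+ // key and value; assumes the aws helper package), in the form used by this
+ // field and by the other Targets fields in this package:
+ //
+ //    Targets: []*ssm.Target{{
+ //        Key:    aws.String("tag:ServerRole"),
+ //        Values: []*string{aws.String("WebServer")},
+ //    }},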
+ Targets []*Target `type:"list"` +} + +// String returns the string representation +func (s UpdateAssociationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAssociationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateAssociationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAssociationInput"} + if s.AssociationId == nil { + invalidParams.Add(request.NewErrParamRequired("AssociationId")) + } + if s.ScheduleExpression != nil && len(*s.ScheduleExpression) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduleExpression", 1)) + } + if s.OutputLocation != nil { + if err := s.OutputLocation.Validate(); err != nil { + invalidParams.AddNested("OutputLocation", err.(request.ErrInvalidParams)) + } + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationId sets the AssociationId field's value. +func (s *UpdateAssociationInput) SetAssociationId(v string) *UpdateAssociationInput { + s.AssociationId = &v + return s +} + +// SetAssociationName sets the AssociationName field's value. +func (s *UpdateAssociationInput) SetAssociationName(v string) *UpdateAssociationInput { + s.AssociationName = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *UpdateAssociationInput) SetAssociationVersion(v string) *UpdateAssociationInput { + s.AssociationVersion = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *UpdateAssociationInput) SetDocumentVersion(v string) *UpdateAssociationInput { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateAssociationInput) SetName(v string) *UpdateAssociationInput { + s.Name = &v + return s +} + +// SetOutputLocation sets the OutputLocation field's value. +func (s *UpdateAssociationInput) SetOutputLocation(v *InstanceAssociationOutputLocation) *UpdateAssociationInput { + s.OutputLocation = v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *UpdateAssociationInput) SetParameters(v map[string][]*string) *UpdateAssociationInput { + s.Parameters = v + return s +} + +// SetScheduleExpression sets the ScheduleExpression field's value. +func (s *UpdateAssociationInput) SetScheduleExpression(v string) *UpdateAssociationInput { + s.ScheduleExpression = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *UpdateAssociationInput) SetTargets(v []*Target) *UpdateAssociationInput { + s.Targets = v + return s +} + +type UpdateAssociationOutput struct { + _ struct{} `type:"structure"` + + // The description of the association that was updated. + AssociationDescription *AssociationDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateAssociationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAssociationOutput) GoString() string { + return s.String() +} + +// SetAssociationDescription sets the AssociationDescription field's value. 
+func (s *UpdateAssociationOutput) SetAssociationDescription(v *AssociationDescription) *UpdateAssociationOutput { + s.AssociationDescription = v + return s +} + +type UpdateAssociationStatusInput struct { + _ struct{} `type:"structure"` + + // The association status. + // + // AssociationStatus is a required field + AssociationStatus *AssociationStatus `type:"structure" required:"true"` + + // The ID of the instance. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` + + // The name of the Systems Manager document. + // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateAssociationStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAssociationStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateAssociationStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAssociationStatusInput"} + if s.AssociationStatus == nil { + invalidParams.Add(request.NewErrParamRequired("AssociationStatus")) + } + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.AssociationStatus != nil { + if err := s.AssociationStatus.Validate(); err != nil { + invalidParams.AddNested("AssociationStatus", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationStatus sets the AssociationStatus field's value. +func (s *UpdateAssociationStatusInput) SetAssociationStatus(v *AssociationStatus) *UpdateAssociationStatusInput { + s.AssociationStatus = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *UpdateAssociationStatusInput) SetInstanceId(v string) *UpdateAssociationStatusInput { + s.InstanceId = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateAssociationStatusInput) SetName(v string) *UpdateAssociationStatusInput { + s.Name = &v + return s +} + +type UpdateAssociationStatusOutput struct { + _ struct{} `type:"structure"` + + // Information about the association. + AssociationDescription *AssociationDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateAssociationStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAssociationStatusOutput) GoString() string { + return s.String() +} + +// SetAssociationDescription sets the AssociationDescription field's value. +func (s *UpdateAssociationStatusOutput) SetAssociationDescription(v *AssociationDescription) *UpdateAssociationStatusOutput { + s.AssociationDescription = v + return s +} + +type UpdateDocumentDefaultVersionInput struct { + _ struct{} `type:"structure"` + + // The version of a custom document that you want to set as the default version. + // + // DocumentVersion is a required field + DocumentVersion *string `type:"string" required:"true"` + + // The name of a custom document that you want to set as the default version. 
+ // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateDocumentDefaultVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDocumentDefaultVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateDocumentDefaultVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateDocumentDefaultVersionInput"} + if s.DocumentVersion == nil { + invalidParams.Add(request.NewErrParamRequired("DocumentVersion")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *UpdateDocumentDefaultVersionInput) SetDocumentVersion(v string) *UpdateDocumentDefaultVersionInput { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateDocumentDefaultVersionInput) SetName(v string) *UpdateDocumentDefaultVersionInput { + s.Name = &v + return s +} + +type UpdateDocumentDefaultVersionOutput struct { + _ struct{} `type:"structure"` + + // The description of a custom document that you want to set as the default + // version. + Description *DocumentDefaultVersionDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateDocumentDefaultVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDocumentDefaultVersionOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *UpdateDocumentDefaultVersionOutput) SetDescription(v *DocumentDefaultVersionDescription) *UpdateDocumentDefaultVersionOutput { + s.Description = v + return s +} + +type UpdateDocumentInput struct { + _ struct{} `type:"structure"` + + // The content in a document that you want to update. + // + // Content is a required field + Content *string `min:"1" type:"string" required:"true"` + + // Specify the document format for the new document version. Systems Manager + // supports JSON and YAML documents. JSON is the default format. + DocumentFormat *string `type:"string" enum:"DocumentFormat"` + + // The version of the document that you want to update. + DocumentVersion *string `type:"string"` + + // The name of the document that you want to update. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // Specify a new target type for the document. + TargetType *string `type:"string"` +} + +// String returns the string representation +func (s UpdateDocumentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDocumentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateDocumentInput"} + if s.Content == nil { + invalidParams.Add(request.NewErrParamRequired("Content")) + } + if s.Content != nil && len(*s.Content) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Content", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContent sets the Content field's value. +func (s *UpdateDocumentInput) SetContent(v string) *UpdateDocumentInput { + s.Content = &v + return s +} + +// SetDocumentFormat sets the DocumentFormat field's value. +func (s *UpdateDocumentInput) SetDocumentFormat(v string) *UpdateDocumentInput { + s.DocumentFormat = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *UpdateDocumentInput) SetDocumentVersion(v string) *UpdateDocumentInput { + s.DocumentVersion = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateDocumentInput) SetName(v string) *UpdateDocumentInput { + s.Name = &v + return s +} + +// SetTargetType sets the TargetType field's value. +func (s *UpdateDocumentInput) SetTargetType(v string) *UpdateDocumentInput { + s.TargetType = &v + return s +} + +type UpdateDocumentOutput struct { + _ struct{} `type:"structure"` + + // A description of the document that was updated. + DocumentDescription *DocumentDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateDocumentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDocumentOutput) GoString() string { + return s.String() +} + +// SetDocumentDescription sets the DocumentDescription field's value. +func (s *UpdateDocumentOutput) SetDocumentDescription(v *DocumentDescription) *UpdateDocumentOutput { + s.DocumentDescription = v + return s +} + +type UpdateMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // Whether targets must be registered with the Maintenance Window before tasks + // can be defined for those targets. + AllowUnassociatedTargets *bool `type:"boolean"` + + // The number of hours before the end of the Maintenance Window that Systems + // Manager stops scheduling new tasks for execution. + Cutoff *int64 `type:"integer"` + + // An optional description for the update request. + Description *string `min:"1" type:"string"` + + // The duration of the Maintenance Window in hours. + Duration *int64 `min:"1" type:"integer"` + + // Whether the Maintenance Window is enabled. + Enabled *bool `type:"boolean"` + + // The name of the Maintenance Window. + Name *string `min:"3" type:"string"` + + // If True, then all fields that are required by the CreateMaintenanceWindow + // action are also required for this API request. Optional fields that are not + // specified are set to null. + Replace *bool `type:"boolean"` + + // The schedule of the Maintenance Window in the form of a cron or rate expression. + Schedule *string `min:"1" type:"string"` + + // The ID of the Maintenance Window to update. 
+ // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateMaintenanceWindowInput"} + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.Duration != nil && *s.Duration < 1 { + invalidParams.Add(request.NewErrParamMinValue("Duration", 1)) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.Schedule != nil && len(*s.Schedule) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Schedule", 1)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowUnassociatedTargets sets the AllowUnassociatedTargets field's value. +func (s *UpdateMaintenanceWindowInput) SetAllowUnassociatedTargets(v bool) *UpdateMaintenanceWindowInput { + s.AllowUnassociatedTargets = &v + return s +} + +// SetCutoff sets the Cutoff field's value. +func (s *UpdateMaintenanceWindowInput) SetCutoff(v int64) *UpdateMaintenanceWindowInput { + s.Cutoff = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateMaintenanceWindowInput) SetDescription(v string) *UpdateMaintenanceWindowInput { + s.Description = &v + return s +} + +// SetDuration sets the Duration field's value. +func (s *UpdateMaintenanceWindowInput) SetDuration(v int64) *UpdateMaintenanceWindowInput { + s.Duration = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *UpdateMaintenanceWindowInput) SetEnabled(v bool) *UpdateMaintenanceWindowInput { + s.Enabled = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateMaintenanceWindowInput) SetName(v string) *UpdateMaintenanceWindowInput { + s.Name = &v + return s +} + +// SetReplace sets the Replace field's value. +func (s *UpdateMaintenanceWindowInput) SetReplace(v bool) *UpdateMaintenanceWindowInput { + s.Replace = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *UpdateMaintenanceWindowInput) SetSchedule(v string) *UpdateMaintenanceWindowInput { + s.Schedule = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *UpdateMaintenanceWindowInput) SetWindowId(v string) *UpdateMaintenanceWindowInput { + s.WindowId = &v + return s +} + +type UpdateMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // Whether targets must be registered with the Maintenance Window before tasks + // can be defined for those targets. + AllowUnassociatedTargets *bool `type:"boolean"` + + // The number of hours before the end of the Maintenance Window that Systems + // Manager stops scheduling new tasks for execution. + Cutoff *int64 `type:"integer"` + + // An optional description of the update. + Description *string `min:"1" type:"string"` + + // The duration of the Maintenance Window in hours. 
+ Duration *int64 `min:"1" type:"integer"` + + // Whether the Maintenance Window is enabled. + Enabled *bool `type:"boolean"` + + // The name of the Maintenance Window. + Name *string `min:"3" type:"string"` + + // The schedule of the Maintenance Window in the form of a cron or rate expression. + Schedule *string `min:"1" type:"string"` + + // The ID of the created Maintenance Window. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s UpdateMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetAllowUnassociatedTargets sets the AllowUnassociatedTargets field's value. +func (s *UpdateMaintenanceWindowOutput) SetAllowUnassociatedTargets(v bool) *UpdateMaintenanceWindowOutput { + s.AllowUnassociatedTargets = &v + return s +} + +// SetCutoff sets the Cutoff field's value. +func (s *UpdateMaintenanceWindowOutput) SetCutoff(v int64) *UpdateMaintenanceWindowOutput { + s.Cutoff = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateMaintenanceWindowOutput) SetDescription(v string) *UpdateMaintenanceWindowOutput { + s.Description = &v + return s +} + +// SetDuration sets the Duration field's value. +func (s *UpdateMaintenanceWindowOutput) SetDuration(v int64) *UpdateMaintenanceWindowOutput { + s.Duration = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *UpdateMaintenanceWindowOutput) SetEnabled(v bool) *UpdateMaintenanceWindowOutput { + s.Enabled = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateMaintenanceWindowOutput) SetName(v string) *UpdateMaintenanceWindowOutput { + s.Name = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *UpdateMaintenanceWindowOutput) SetSchedule(v string) *UpdateMaintenanceWindowOutput { + s.Schedule = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *UpdateMaintenanceWindowOutput) SetWindowId(v string) *UpdateMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +type UpdateMaintenanceWindowTargetInput struct { + _ struct{} `type:"structure"` + + // An optional description for the update. + Description *string `min:"1" type:"string"` + + // A name for the update. + Name *string `min:"3" type:"string"` + + // User-provided value that will be included in any CloudWatch events raised + // while running tasks for these targets in this Maintenance Window. + OwnerInformation *string `min:"1" type:"string"` + + // If True, then all fields that are required by the RegisterTargetWithMaintenanceWindow + // action are also required for this API request. Optional fields that are not + // specified are set to null. + Replace *bool `type:"boolean"` + + // The targets to add or replace. + Targets []*Target `type:"list"` + + // The Maintenance Window ID with which to modify the target. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` + + // The target ID to modify. 
+ // + // WindowTargetId is a required field + WindowTargetId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateMaintenanceWindowTargetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceWindowTargetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateMaintenanceWindowTargetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateMaintenanceWindowTargetInput"} + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.OwnerInformation != nil && len(*s.OwnerInformation) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OwnerInformation", 1)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.WindowTargetId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowTargetId")) + } + if s.WindowTargetId != nil && len(*s.WindowTargetId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowTargetId", 36)) + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetDescription(v string) *UpdateMaintenanceWindowTargetInput { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetName(v string) *UpdateMaintenanceWindowTargetInput { + s.Name = &v + return s +} + +// SetOwnerInformation sets the OwnerInformation field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetOwnerInformation(v string) *UpdateMaintenanceWindowTargetInput { + s.OwnerInformation = &v + return s +} + +// SetReplace sets the Replace field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetReplace(v bool) *UpdateMaintenanceWindowTargetInput { + s.Replace = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetTargets(v []*Target) *UpdateMaintenanceWindowTargetInput { + s.Targets = v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetWindowId(v string) *UpdateMaintenanceWindowTargetInput { + s.WindowId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *UpdateMaintenanceWindowTargetInput) SetWindowTargetId(v string) *UpdateMaintenanceWindowTargetInput { + s.WindowTargetId = &v + return s +} + +type UpdateMaintenanceWindowTargetOutput struct { + _ struct{} `type:"structure"` + + // The updated description. + Description *string `min:"1" type:"string"` + + // The updated name. + Name *string `min:"3" type:"string"` + + // The updated owner. + OwnerInformation *string `min:"1" type:"string"` + + // The updated targets. + Targets []*Target `type:"list"` + + // The Maintenance Window ID specified in the update request. 
+ WindowId *string `min:"20" type:"string"` + + // The target ID specified in the update request. + WindowTargetId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s UpdateMaintenanceWindowTargetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceWindowTargetOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *UpdateMaintenanceWindowTargetOutput) SetDescription(v string) *UpdateMaintenanceWindowTargetOutput { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateMaintenanceWindowTargetOutput) SetName(v string) *UpdateMaintenanceWindowTargetOutput { + s.Name = &v + return s +} + +// SetOwnerInformation sets the OwnerInformation field's value. +func (s *UpdateMaintenanceWindowTargetOutput) SetOwnerInformation(v string) *UpdateMaintenanceWindowTargetOutput { + s.OwnerInformation = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *UpdateMaintenanceWindowTargetOutput) SetTargets(v []*Target) *UpdateMaintenanceWindowTargetOutput { + s.Targets = v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *UpdateMaintenanceWindowTargetOutput) SetWindowId(v string) *UpdateMaintenanceWindowTargetOutput { + s.WindowId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *UpdateMaintenanceWindowTargetOutput) SetWindowTargetId(v string) *UpdateMaintenanceWindowTargetOutput { + s.WindowTargetId = &v + return s +} + +type UpdateMaintenanceWindowTaskInput struct { + _ struct{} `type:"structure"` + + // The new task description to specify. + Description *string `min:"1" type:"string"` + + // The new logging location in Amazon S3 to specify. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + LoggingInfo *LoggingInfo `type:"structure"` + + // The new MaxConcurrency value you want to specify. MaxConcurrency is the number + // of targets that are allowed to run this task in parallel. + MaxConcurrency *string `min:"1" type:"string"` + + // The new MaxErrors value to specify. MaxErrors is the maximum number of errors + // that are allowed before the task stops being scheduled. + MaxErrors *string `min:"1" type:"string"` + + // The new task name to specify. + Name *string `min:"3" type:"string"` + + // The new task priority to specify. The lower the number, the higher the priority. + // Tasks that have the same priority are scheduled in parallel. + Priority *int64 `type:"integer"` + + // If True, then all fields that are required by the RegisterTaskWithMaintenanceWndow + // action are also required for this API request. Optional fields that are not + // specified are set to null. + Replace *bool `type:"boolean"` + + // The IAM service role ARN to modify. The system assumes this role during task + // execution. + ServiceRoleArn *string `type:"string"` + + // The targets (either instances or tags) to modify. Instances are specified + // using Key=instanceids,Values=instanceID_1,instanceID_2. Tags are specified + // using Key=tag_name,Values=tag_value. 
+ Targets []*Target `type:"list"` + + // The task ARN to modify. + TaskArn *string `min:"1" type:"string"` + + // The parameters that the task should use during execution. Populate only the + // fields that match the task type. All other fields should be empty. + TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` + + // The parameters to modify. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // The map has the following format: + // + // Key: string, between 1 and 255 characters + // + // Value: an array of strings, each string is between 1 and 255 characters + TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` + + // The Maintenance Window ID that contains the task to modify. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` + + // The task ID to modify. + // + // WindowTaskId is a required field + WindowTaskId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateMaintenanceWindowTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceWindowTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateMaintenanceWindowTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateMaintenanceWindowTaskInput"} + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.TaskArn != nil && len(*s.TaskArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskArn", 1)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.WindowTaskId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowTaskId")) + } + if s.WindowTaskId != nil && len(*s.WindowTaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowTaskId", 36)) + } + if s.LoggingInfo != nil { + if err := s.LoggingInfo.Validate(); err != nil { + invalidParams.AddNested("LoggingInfo", err.(request.ErrInvalidParams)) + } + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TaskInvocationParameters != nil { + if err := s.TaskInvocationParameters.Validate(); err != nil { + invalidParams.AddNested("TaskInvocationParameters", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description 
field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetDescription(v string) *UpdateMaintenanceWindowTaskInput { + s.Description = &v + return s +} + +// SetLoggingInfo sets the LoggingInfo field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetLoggingInfo(v *LoggingInfo) *UpdateMaintenanceWindowTaskInput { + s.LoggingInfo = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetMaxConcurrency(v string) *UpdateMaintenanceWindowTaskInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetMaxErrors(v string) *UpdateMaintenanceWindowTaskInput { + s.MaxErrors = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetName(v string) *UpdateMaintenanceWindowTaskInput { + s.Name = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetPriority(v int64) *UpdateMaintenanceWindowTaskInput { + s.Priority = &v + return s +} + +// SetReplace sets the Replace field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetReplace(v bool) *UpdateMaintenanceWindowTaskInput { + s.Replace = &v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetServiceRoleArn(v string) *UpdateMaintenanceWindowTaskInput { + s.ServiceRoleArn = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetTargets(v []*Target) *UpdateMaintenanceWindowTaskInput { + s.Targets = v + return s +} + +// SetTaskArn sets the TaskArn field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetTaskArn(v string) *UpdateMaintenanceWindowTaskInput { + s.TaskArn = &v + return s +} + +// SetTaskInvocationParameters sets the TaskInvocationParameters field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetTaskInvocationParameters(v *MaintenanceWindowTaskInvocationParameters) *UpdateMaintenanceWindowTaskInput { + s.TaskInvocationParameters = v + return s +} + +// SetTaskParameters sets the TaskParameters field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetTaskParameters(v map[string]*MaintenanceWindowTaskParameterValueExpression) *UpdateMaintenanceWindowTaskInput { + s.TaskParameters = v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetWindowId(v string) *UpdateMaintenanceWindowTaskInput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *UpdateMaintenanceWindowTaskInput) SetWindowTaskId(v string) *UpdateMaintenanceWindowTaskInput { + s.WindowTaskId = &v + return s +} + +type UpdateMaintenanceWindowTaskOutput struct { + _ struct{} `type:"structure"` + + // The updated task description. + Description *string `min:"1" type:"string"` + + // The updated logging information in Amazon S3. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + LoggingInfo *LoggingInfo `type:"structure"` + + // The updated MaxConcurrency value. + MaxConcurrency *string `min:"1" type:"string"` + + // The updated MaxErrors value. 
+ MaxErrors *string `min:"1" type:"string"` + + // The updated task name. + Name *string `min:"3" type:"string"` + + // The updated priority value. + Priority *int64 `type:"integer"` + + // The updated service role ARN value. + ServiceRoleArn *string `type:"string"` + + // The updated target values. + Targets []*Target `type:"list"` + + // The updated task ARN value. + TaskArn *string `min:"1" type:"string"` + + // The updated parameter values. + TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` + + // The updated parameter values. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` + + // The ID of the Maintenance Window that was updated. + WindowId *string `min:"20" type:"string"` + + // The task ID of the Maintenance Window that was updated. + WindowTaskId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s UpdateMaintenanceWindowTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceWindowTaskOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetDescription(v string) *UpdateMaintenanceWindowTaskOutput { + s.Description = &v + return s +} + +// SetLoggingInfo sets the LoggingInfo field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetLoggingInfo(v *LoggingInfo) *UpdateMaintenanceWindowTaskOutput { + s.LoggingInfo = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetMaxConcurrency(v string) *UpdateMaintenanceWindowTaskOutput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetMaxErrors(v string) *UpdateMaintenanceWindowTaskOutput { + s.MaxErrors = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetName(v string) *UpdateMaintenanceWindowTaskOutput { + s.Name = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetPriority(v int64) *UpdateMaintenanceWindowTaskOutput { + s.Priority = &v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetServiceRoleArn(v string) *UpdateMaintenanceWindowTaskOutput { + s.ServiceRoleArn = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetTargets(v []*Target) *UpdateMaintenanceWindowTaskOutput { + s.Targets = v + return s +} + +// SetTaskArn sets the TaskArn field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetTaskArn(v string) *UpdateMaintenanceWindowTaskOutput { + s.TaskArn = &v + return s +} + +// SetTaskInvocationParameters sets the TaskInvocationParameters field's value. 
+func (s *UpdateMaintenanceWindowTaskOutput) SetTaskInvocationParameters(v *MaintenanceWindowTaskInvocationParameters) *UpdateMaintenanceWindowTaskOutput { + s.TaskInvocationParameters = v + return s +} + +// SetTaskParameters sets the TaskParameters field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetTaskParameters(v map[string]*MaintenanceWindowTaskParameterValueExpression) *UpdateMaintenanceWindowTaskOutput { + s.TaskParameters = v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetWindowId(v string) *UpdateMaintenanceWindowTaskOutput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *UpdateMaintenanceWindowTaskOutput) SetWindowTaskId(v string) *UpdateMaintenanceWindowTaskOutput { + s.WindowTaskId = &v + return s +} + +type UpdateManagedInstanceRoleInput struct { + _ struct{} `type:"structure"` + + // The IAM role you want to assign or change. + // + // IamRole is a required field + IamRole *string `type:"string" required:"true"` + + // The ID of the managed instance where you want to update the role. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateManagedInstanceRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateManagedInstanceRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateManagedInstanceRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateManagedInstanceRoleInput"} + if s.IamRole == nil { + invalidParams.Add(request.NewErrParamRequired("IamRole")) + } + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIamRole sets the IamRole field's value. +func (s *UpdateManagedInstanceRoleInput) SetIamRole(v string) *UpdateManagedInstanceRoleInput { + s.IamRole = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *UpdateManagedInstanceRoleInput) SetInstanceId(v string) *UpdateManagedInstanceRoleInput { + s.InstanceId = &v + return s +} + +type UpdateManagedInstanceRoleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateManagedInstanceRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateManagedInstanceRoleOutput) GoString() string { + return s.String() +} + +type UpdatePatchBaselineInput struct { + _ struct{} `type:"structure"` + + // A set of rules used to include patches in the baseline. + ApprovalRules *PatchRuleGroup `type:"structure"` + + // A list of explicitly approved patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. + ApprovedPatches []*string `type:"list"` + + // Assigns a new compliance severity level to an existing patch baseline. 
+ ApprovedPatchesComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` + + // Indicates whether the list of approved patches includes non-security updates + // that should be applied to the instances. The default value is 'false'. Applies + // to Linux instances only. + ApprovedPatchesEnableNonSecurity *bool `type:"boolean"` + + // The ID of the patch baseline to update. + // + // BaselineId is a required field + BaselineId *string `min:"20" type:"string" required:"true"` + + // A description of the patch baseline. + Description *string `min:"1" type:"string"` + + // A set of global filters used to exclude patches from the baseline. + GlobalFilters *PatchFilterGroup `type:"structure"` + + // The name of the patch baseline. + Name *string `min:"3" type:"string"` + + // A list of explicitly rejected patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. + RejectedPatches []*string `type:"list"` + + // If True, then all fields that are required by the CreatePatchBaseline action + // are also required for this API request. Optional fields that are not specified + // are set to null. + Replace *bool `type:"boolean"` + + // Information about the patches to use to update the instances, including target + // operating systems and source repositories. Applies to Linux instances only. + Sources []*PatchSource `type:"list"` +} + +// String returns the string representation +func (s UpdatePatchBaselineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdatePatchBaselineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdatePatchBaselineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdatePatchBaselineInput"} + if s.BaselineId == nil { + invalidParams.Add(request.NewErrParamRequired("BaselineId")) + } + if s.BaselineId != nil && len(*s.BaselineId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("BaselineId", 20)) + } + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.Name != nil && len(*s.Name) < 3 { + invalidParams.Add(request.NewErrParamMinLen("Name", 3)) + } + if s.ApprovalRules != nil { + if err := s.ApprovalRules.Validate(); err != nil { + invalidParams.AddNested("ApprovalRules", err.(request.ErrInvalidParams)) + } + } + if s.GlobalFilters != nil { + if err := s.GlobalFilters.Validate(); err != nil { + invalidParams.AddNested("GlobalFilters", err.(request.ErrInvalidParams)) + } + } + if s.Sources != nil { + for i, v := range s.Sources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Sources", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApprovalRules sets the ApprovalRules field's value. +func (s *UpdatePatchBaselineInput) SetApprovalRules(v *PatchRuleGroup) *UpdatePatchBaselineInput { + s.ApprovalRules = v + return s +} + +// SetApprovedPatches sets the ApprovedPatches field's value. 
+func (s *UpdatePatchBaselineInput) SetApprovedPatches(v []*string) *UpdatePatchBaselineInput {
+ s.ApprovedPatches = v
+ return s
+}
+
+// SetApprovedPatchesComplianceLevel sets the ApprovedPatchesComplianceLevel field's value.
+func (s *UpdatePatchBaselineInput) SetApprovedPatchesComplianceLevel(v string) *UpdatePatchBaselineInput {
+ s.ApprovedPatchesComplianceLevel = &v
+ return s
+}
+
+// SetApprovedPatchesEnableNonSecurity sets the ApprovedPatchesEnableNonSecurity field's value.
+func (s *UpdatePatchBaselineInput) SetApprovedPatchesEnableNonSecurity(v bool) *UpdatePatchBaselineInput {
+ s.ApprovedPatchesEnableNonSecurity = &v
+ return s
+}
+
+// SetBaselineId sets the BaselineId field's value.
+func (s *UpdatePatchBaselineInput) SetBaselineId(v string) *UpdatePatchBaselineInput {
+ s.BaselineId = &v
+ return s
+}
+
+// SetDescription sets the Description field's value.
+func (s *UpdatePatchBaselineInput) SetDescription(v string) *UpdatePatchBaselineInput {
+ s.Description = &v
+ return s
+}
+
+// SetGlobalFilters sets the GlobalFilters field's value.
+func (s *UpdatePatchBaselineInput) SetGlobalFilters(v *PatchFilterGroup) *UpdatePatchBaselineInput {
+ s.GlobalFilters = v
+ return s
+}
+
+// SetName sets the Name field's value.
+func (s *UpdatePatchBaselineInput) SetName(v string) *UpdatePatchBaselineInput {
+ s.Name = &v
+ return s
+}
+
+// SetRejectedPatches sets the RejectedPatches field's value.
+func (s *UpdatePatchBaselineInput) SetRejectedPatches(v []*string) *UpdatePatchBaselineInput {
+ s.RejectedPatches = v
+ return s
+}
+
+// SetReplace sets the Replace field's value.
+func (s *UpdatePatchBaselineInput) SetReplace(v bool) *UpdatePatchBaselineInput {
+ s.Replace = &v
+ return s
+}
+
+// SetSources sets the Sources field's value.
+func (s *UpdatePatchBaselineInput) SetSources(v []*PatchSource) *UpdatePatchBaselineInput {
+ s.Sources = v
+ return s
+}
+
+type UpdatePatchBaselineOutput struct {
+ _ struct{} `type:"structure"`
+
+ // A set of rules used to include patches in the baseline.
+ ApprovalRules *PatchRuleGroup `type:"structure"`
+
+ // A list of explicitly approved patches for the baseline.
+ ApprovedPatches []*string `type:"list"`
+
+ // The compliance severity level assigned to the patch baseline after the update
+ // completed.
+ ApprovedPatchesComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"`
+
+ // Indicates whether the list of approved patches includes non-security updates
+ // that should be applied to the instances. The default value is 'false'. Applies
+ // to Linux instances only.
+ ApprovedPatchesEnableNonSecurity *bool `type:"boolean"`
+
+ // The ID of the updated patch baseline.
+ BaselineId *string `min:"20" type:"string"`
+
+ // The date when the patch baseline was created.
+ CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // A description of the Patch Baseline.
+ Description *string `min:"1" type:"string"`
+
+ // A set of global filters used to exclude patches from the baseline.
+ GlobalFilters *PatchFilterGroup `type:"structure"`
+
+ // The date when the patch baseline was last modified.
+ ModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"`
+
+ // The name of the patch baseline.
+ Name *string `min:"3" type:"string"`
+
+ // The operating system rule used by the updated patch baseline.
+ OperatingSystem *string `type:"string" enum:"OperatingSystem"`
+
+ // A list of explicitly rejected patches for the baseline.
+ RejectedPatches []*string `type:"list"` + + // Information about the patches to use to update the instances, including target + // operating systems and source repositories. Applies to Linux instances only. + Sources []*PatchSource `type:"list"` +} + +// String returns the string representation +func (s UpdatePatchBaselineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdatePatchBaselineOutput) GoString() string { + return s.String() +} + +// SetApprovalRules sets the ApprovalRules field's value. +func (s *UpdatePatchBaselineOutput) SetApprovalRules(v *PatchRuleGroup) *UpdatePatchBaselineOutput { + s.ApprovalRules = v + return s +} + +// SetApprovedPatches sets the ApprovedPatches field's value. +func (s *UpdatePatchBaselineOutput) SetApprovedPatches(v []*string) *UpdatePatchBaselineOutput { + s.ApprovedPatches = v + return s +} + +// SetApprovedPatchesComplianceLevel sets the ApprovedPatchesComplianceLevel field's value. +func (s *UpdatePatchBaselineOutput) SetApprovedPatchesComplianceLevel(v string) *UpdatePatchBaselineOutput { + s.ApprovedPatchesComplianceLevel = &v + return s +} + +// SetApprovedPatchesEnableNonSecurity sets the ApprovedPatchesEnableNonSecurity field's value. +func (s *UpdatePatchBaselineOutput) SetApprovedPatchesEnableNonSecurity(v bool) *UpdatePatchBaselineOutput { + s.ApprovedPatchesEnableNonSecurity = &v + return s +} + +// SetBaselineId sets the BaselineId field's value. +func (s *UpdatePatchBaselineOutput) SetBaselineId(v string) *UpdatePatchBaselineOutput { + s.BaselineId = &v + return s +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *UpdatePatchBaselineOutput) SetCreatedDate(v time.Time) *UpdatePatchBaselineOutput { + s.CreatedDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdatePatchBaselineOutput) SetDescription(v string) *UpdatePatchBaselineOutput { + s.Description = &v + return s +} + +// SetGlobalFilters sets the GlobalFilters field's value. +func (s *UpdatePatchBaselineOutput) SetGlobalFilters(v *PatchFilterGroup) *UpdatePatchBaselineOutput { + s.GlobalFilters = v + return s +} + +// SetModifiedDate sets the ModifiedDate field's value. +func (s *UpdatePatchBaselineOutput) SetModifiedDate(v time.Time) *UpdatePatchBaselineOutput { + s.ModifiedDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdatePatchBaselineOutput) SetName(v string) *UpdatePatchBaselineOutput { + s.Name = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *UpdatePatchBaselineOutput) SetOperatingSystem(v string) *UpdatePatchBaselineOutput { + s.OperatingSystem = &v + return s +} + +// SetRejectedPatches sets the RejectedPatches field's value. +func (s *UpdatePatchBaselineOutput) SetRejectedPatches(v []*string) *UpdatePatchBaselineOutput { + s.RejectedPatches = v + return s +} + +// SetSources sets the Sources field's value. 
+func (s *UpdatePatchBaselineOutput) SetSources(v []*PatchSource) *UpdatePatchBaselineOutput { + s.Sources = v + return s +} + +const ( + // AssociationFilterKeyInstanceId is a AssociationFilterKey enum value + AssociationFilterKeyInstanceId = "InstanceId" + + // AssociationFilterKeyName is a AssociationFilterKey enum value + AssociationFilterKeyName = "Name" + + // AssociationFilterKeyAssociationId is a AssociationFilterKey enum value + AssociationFilterKeyAssociationId = "AssociationId" + + // AssociationFilterKeyAssociationStatusName is a AssociationFilterKey enum value + AssociationFilterKeyAssociationStatusName = "AssociationStatusName" + + // AssociationFilterKeyLastExecutedBefore is a AssociationFilterKey enum value + AssociationFilterKeyLastExecutedBefore = "LastExecutedBefore" + + // AssociationFilterKeyLastExecutedAfter is a AssociationFilterKey enum value + AssociationFilterKeyLastExecutedAfter = "LastExecutedAfter" + + // AssociationFilterKeyAssociationName is a AssociationFilterKey enum value + AssociationFilterKeyAssociationName = "AssociationName" +) + +const ( + // AssociationStatusNamePending is a AssociationStatusName enum value + AssociationStatusNamePending = "Pending" + + // AssociationStatusNameSuccess is a AssociationStatusName enum value + AssociationStatusNameSuccess = "Success" + + // AssociationStatusNameFailed is a AssociationStatusName enum value + AssociationStatusNameFailed = "Failed" +) + +const ( + // AutomationExecutionFilterKeyDocumentNamePrefix is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyDocumentNamePrefix = "DocumentNamePrefix" + + // AutomationExecutionFilterKeyExecutionStatus is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyExecutionStatus = "ExecutionStatus" + + // AutomationExecutionFilterKeyExecutionId is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyExecutionId = "ExecutionId" + + // AutomationExecutionFilterKeyParentExecutionId is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyParentExecutionId = "ParentExecutionId" + + // AutomationExecutionFilterKeyCurrentAction is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyCurrentAction = "CurrentAction" + + // AutomationExecutionFilterKeyStartTimeBefore is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyStartTimeBefore = "StartTimeBefore" + + // AutomationExecutionFilterKeyStartTimeAfter is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyStartTimeAfter = "StartTimeAfter" +) + +const ( + // AutomationExecutionStatusPending is a AutomationExecutionStatus enum value + AutomationExecutionStatusPending = "Pending" + + // AutomationExecutionStatusInProgress is a AutomationExecutionStatus enum value + AutomationExecutionStatusInProgress = "InProgress" + + // AutomationExecutionStatusWaiting is a AutomationExecutionStatus enum value + AutomationExecutionStatusWaiting = "Waiting" + + // AutomationExecutionStatusSuccess is a AutomationExecutionStatus enum value + AutomationExecutionStatusSuccess = "Success" + + // AutomationExecutionStatusTimedOut is a AutomationExecutionStatus enum value + AutomationExecutionStatusTimedOut = "TimedOut" + + // AutomationExecutionStatusCancelling is a AutomationExecutionStatus enum value + AutomationExecutionStatusCancelling = "Cancelling" + + // AutomationExecutionStatusCancelled is a AutomationExecutionStatus enum value + AutomationExecutionStatusCancelled = "Cancelled" + + // 
AutomationExecutionStatusFailed is a AutomationExecutionStatus enum value + AutomationExecutionStatusFailed = "Failed" +) + +const ( + // CommandFilterKeyInvokedAfter is a CommandFilterKey enum value + CommandFilterKeyInvokedAfter = "InvokedAfter" + + // CommandFilterKeyInvokedBefore is a CommandFilterKey enum value + CommandFilterKeyInvokedBefore = "InvokedBefore" + + // CommandFilterKeyStatus is a CommandFilterKey enum value + CommandFilterKeyStatus = "Status" +) + +const ( + // CommandInvocationStatusPending is a CommandInvocationStatus enum value + CommandInvocationStatusPending = "Pending" + + // CommandInvocationStatusInProgress is a CommandInvocationStatus enum value + CommandInvocationStatusInProgress = "InProgress" + + // CommandInvocationStatusDelayed is a CommandInvocationStatus enum value + CommandInvocationStatusDelayed = "Delayed" + + // CommandInvocationStatusSuccess is a CommandInvocationStatus enum value + CommandInvocationStatusSuccess = "Success" + + // CommandInvocationStatusCancelled is a CommandInvocationStatus enum value + CommandInvocationStatusCancelled = "Cancelled" + + // CommandInvocationStatusTimedOut is a CommandInvocationStatus enum value + CommandInvocationStatusTimedOut = "TimedOut" + + // CommandInvocationStatusFailed is a CommandInvocationStatus enum value + CommandInvocationStatusFailed = "Failed" + + // CommandInvocationStatusCancelling is a CommandInvocationStatus enum value + CommandInvocationStatusCancelling = "Cancelling" +) + +const ( + // CommandPluginStatusPending is a CommandPluginStatus enum value + CommandPluginStatusPending = "Pending" + + // CommandPluginStatusInProgress is a CommandPluginStatus enum value + CommandPluginStatusInProgress = "InProgress" + + // CommandPluginStatusSuccess is a CommandPluginStatus enum value + CommandPluginStatusSuccess = "Success" + + // CommandPluginStatusTimedOut is a CommandPluginStatus enum value + CommandPluginStatusTimedOut = "TimedOut" + + // CommandPluginStatusCancelled is a CommandPluginStatus enum value + CommandPluginStatusCancelled = "Cancelled" + + // CommandPluginStatusFailed is a CommandPluginStatus enum value + CommandPluginStatusFailed = "Failed" +) + +const ( + // CommandStatusPending is a CommandStatus enum value + CommandStatusPending = "Pending" + + // CommandStatusInProgress is a CommandStatus enum value + CommandStatusInProgress = "InProgress" + + // CommandStatusSuccess is a CommandStatus enum value + CommandStatusSuccess = "Success" + + // CommandStatusCancelled is a CommandStatus enum value + CommandStatusCancelled = "Cancelled" + + // CommandStatusFailed is a CommandStatus enum value + CommandStatusFailed = "Failed" + + // CommandStatusTimedOut is a CommandStatus enum value + CommandStatusTimedOut = "TimedOut" + + // CommandStatusCancelling is a CommandStatus enum value + CommandStatusCancelling = "Cancelling" +) + +const ( + // ComplianceQueryOperatorTypeEqual is a ComplianceQueryOperatorType enum value + ComplianceQueryOperatorTypeEqual = "EQUAL" + + // ComplianceQueryOperatorTypeNotEqual is a ComplianceQueryOperatorType enum value + ComplianceQueryOperatorTypeNotEqual = "NOT_EQUAL" + + // ComplianceQueryOperatorTypeBeginWith is a ComplianceQueryOperatorType enum value + ComplianceQueryOperatorTypeBeginWith = "BEGIN_WITH" + + // ComplianceQueryOperatorTypeLessThan is a ComplianceQueryOperatorType enum value + ComplianceQueryOperatorTypeLessThan = "LESS_THAN" + + // ComplianceQueryOperatorTypeGreaterThan is a ComplianceQueryOperatorType enum value + 
ComplianceQueryOperatorTypeGreaterThan = "GREATER_THAN" +) + +const ( + // ComplianceSeverityCritical is a ComplianceSeverity enum value + ComplianceSeverityCritical = "CRITICAL" + + // ComplianceSeverityHigh is a ComplianceSeverity enum value + ComplianceSeverityHigh = "HIGH" + + // ComplianceSeverityMedium is a ComplianceSeverity enum value + ComplianceSeverityMedium = "MEDIUM" + + // ComplianceSeverityLow is a ComplianceSeverity enum value + ComplianceSeverityLow = "LOW" + + // ComplianceSeverityInformational is a ComplianceSeverity enum value + ComplianceSeverityInformational = "INFORMATIONAL" + + // ComplianceSeverityUnspecified is a ComplianceSeverity enum value + ComplianceSeverityUnspecified = "UNSPECIFIED" +) + +const ( + // ComplianceStatusCompliant is a ComplianceStatus enum value + ComplianceStatusCompliant = "COMPLIANT" + + // ComplianceStatusNonCompliant is a ComplianceStatus enum value + ComplianceStatusNonCompliant = "NON_COMPLIANT" +) + +const ( + // DescribeActivationsFilterKeysActivationIds is a DescribeActivationsFilterKeys enum value + DescribeActivationsFilterKeysActivationIds = "ActivationIds" + + // DescribeActivationsFilterKeysDefaultInstanceName is a DescribeActivationsFilterKeys enum value + DescribeActivationsFilterKeysDefaultInstanceName = "DefaultInstanceName" + + // DescribeActivationsFilterKeysIamRole is a DescribeActivationsFilterKeys enum value + DescribeActivationsFilterKeysIamRole = "IamRole" +) + +const ( + // DocumentFilterKeyName is a DocumentFilterKey enum value + DocumentFilterKeyName = "Name" + + // DocumentFilterKeyOwner is a DocumentFilterKey enum value + DocumentFilterKeyOwner = "Owner" + + // DocumentFilterKeyPlatformTypes is a DocumentFilterKey enum value + DocumentFilterKeyPlatformTypes = "PlatformTypes" + + // DocumentFilterKeyDocumentType is a DocumentFilterKey enum value + DocumentFilterKeyDocumentType = "DocumentType" +) + +const ( + // DocumentFormatYaml is a DocumentFormat enum value + DocumentFormatYaml = "YAML" + + // DocumentFormatJson is a DocumentFormat enum value + DocumentFormatJson = "JSON" +) + +const ( + // DocumentHashTypeSha256 is a DocumentHashType enum value + DocumentHashTypeSha256 = "Sha256" + + // DocumentHashTypeSha1 is a DocumentHashType enum value + DocumentHashTypeSha1 = "Sha1" +) + +const ( + // DocumentParameterTypeString is a DocumentParameterType enum value + DocumentParameterTypeString = "String" + + // DocumentParameterTypeStringList is a DocumentParameterType enum value + DocumentParameterTypeStringList = "StringList" +) + +const ( + // DocumentPermissionTypeShare is a DocumentPermissionType enum value + DocumentPermissionTypeShare = "Share" +) + +const ( + // DocumentStatusCreating is a DocumentStatus enum value + DocumentStatusCreating = "Creating" + + // DocumentStatusActive is a DocumentStatus enum value + DocumentStatusActive = "Active" + + // DocumentStatusUpdating is a DocumentStatus enum value + DocumentStatusUpdating = "Updating" + + // DocumentStatusDeleting is a DocumentStatus enum value + DocumentStatusDeleting = "Deleting" +) + +const ( + // DocumentTypeCommand is a DocumentType enum value + DocumentTypeCommand = "Command" + + // DocumentTypePolicy is a DocumentType enum value + DocumentTypePolicy = "Policy" + + // DocumentTypeAutomation is a DocumentType enum value + DocumentTypeAutomation = "Automation" +) + +const ( + // ExecutionModeAuto is a ExecutionMode enum value + ExecutionModeAuto = "Auto" + + // ExecutionModeInteractive is a ExecutionMode enum value + ExecutionModeInteractive = 
"Interactive" +) + +const ( + // FaultClient is a Fault enum value + FaultClient = "Client" + + // FaultServer is a Fault enum value + FaultServer = "Server" + + // FaultUnknown is a Fault enum value + FaultUnknown = "Unknown" +) + +const ( + // InstanceInformationFilterKeyInstanceIds is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyInstanceIds = "InstanceIds" + + // InstanceInformationFilterKeyAgentVersion is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyAgentVersion = "AgentVersion" + + // InstanceInformationFilterKeyPingStatus is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyPingStatus = "PingStatus" + + // InstanceInformationFilterKeyPlatformTypes is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyPlatformTypes = "PlatformTypes" + + // InstanceInformationFilterKeyActivationIds is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyActivationIds = "ActivationIds" + + // InstanceInformationFilterKeyIamRole is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyIamRole = "IamRole" + + // InstanceInformationFilterKeyResourceType is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyResourceType = "ResourceType" + + // InstanceInformationFilterKeyAssociationStatus is a InstanceInformationFilterKey enum value + InstanceInformationFilterKeyAssociationStatus = "AssociationStatus" +) + +const ( + // InstancePatchStateOperatorTypeEqual is a InstancePatchStateOperatorType enum value + InstancePatchStateOperatorTypeEqual = "Equal" + + // InstancePatchStateOperatorTypeNotEqual is a InstancePatchStateOperatorType enum value + InstancePatchStateOperatorTypeNotEqual = "NotEqual" + + // InstancePatchStateOperatorTypeLessThan is a InstancePatchStateOperatorType enum value + InstancePatchStateOperatorTypeLessThan = "LessThan" + + // InstancePatchStateOperatorTypeGreaterThan is a InstancePatchStateOperatorType enum value + InstancePatchStateOperatorTypeGreaterThan = "GreaterThan" +) + +const ( + // InventoryAttributeDataTypeString is a InventoryAttributeDataType enum value + InventoryAttributeDataTypeString = "string" + + // InventoryAttributeDataTypeNumber is a InventoryAttributeDataType enum value + InventoryAttributeDataTypeNumber = "number" +) + +const ( + // InventoryDeletionStatusInProgress is a InventoryDeletionStatus enum value + InventoryDeletionStatusInProgress = "InProgress" + + // InventoryDeletionStatusComplete is a InventoryDeletionStatus enum value + InventoryDeletionStatusComplete = "Complete" +) + +const ( + // InventoryQueryOperatorTypeEqual is a InventoryQueryOperatorType enum value + InventoryQueryOperatorTypeEqual = "Equal" + + // InventoryQueryOperatorTypeNotEqual is a InventoryQueryOperatorType enum value + InventoryQueryOperatorTypeNotEqual = "NotEqual" + + // InventoryQueryOperatorTypeBeginWith is a InventoryQueryOperatorType enum value + InventoryQueryOperatorTypeBeginWith = "BeginWith" + + // InventoryQueryOperatorTypeLessThan is a InventoryQueryOperatorType enum value + InventoryQueryOperatorTypeLessThan = "LessThan" + + // InventoryQueryOperatorTypeGreaterThan is a InventoryQueryOperatorType enum value + InventoryQueryOperatorTypeGreaterThan = "GreaterThan" +) + +const ( + // InventorySchemaDeleteOptionDisableSchema is a InventorySchemaDeleteOption enum value + InventorySchemaDeleteOptionDisableSchema = "DisableSchema" + + // InventorySchemaDeleteOptionDeleteSchema is a InventorySchemaDeleteOption enum value + 
InventorySchemaDeleteOptionDeleteSchema = "DeleteSchema" +) + +const ( + // LastResourceDataSyncStatusSuccessful is a LastResourceDataSyncStatus enum value + LastResourceDataSyncStatusSuccessful = "Successful" + + // LastResourceDataSyncStatusFailed is a LastResourceDataSyncStatus enum value + LastResourceDataSyncStatusFailed = "Failed" + + // LastResourceDataSyncStatusInProgress is a LastResourceDataSyncStatus enum value + LastResourceDataSyncStatusInProgress = "InProgress" +) + +const ( + // MaintenanceWindowExecutionStatusPending is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusPending = "PENDING" + + // MaintenanceWindowExecutionStatusInProgress is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusInProgress = "IN_PROGRESS" + + // MaintenanceWindowExecutionStatusSuccess is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusSuccess = "SUCCESS" + + // MaintenanceWindowExecutionStatusFailed is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusFailed = "FAILED" + + // MaintenanceWindowExecutionStatusTimedOut is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusTimedOut = "TIMED_OUT" + + // MaintenanceWindowExecutionStatusCancelling is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusCancelling = "CANCELLING" + + // MaintenanceWindowExecutionStatusCancelled is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusCancelled = "CANCELLED" + + // MaintenanceWindowExecutionStatusSkippedOverlapping is a MaintenanceWindowExecutionStatus enum value + MaintenanceWindowExecutionStatusSkippedOverlapping = "SKIPPED_OVERLAPPING" +) + +const ( + // MaintenanceWindowResourceTypeInstance is a MaintenanceWindowResourceType enum value + MaintenanceWindowResourceTypeInstance = "INSTANCE" +) + +const ( + // MaintenanceWindowTaskTypeRunCommand is a MaintenanceWindowTaskType enum value + MaintenanceWindowTaskTypeRunCommand = "RUN_COMMAND" + + // MaintenanceWindowTaskTypeAutomation is a MaintenanceWindowTaskType enum value + MaintenanceWindowTaskTypeAutomation = "AUTOMATION" + + // MaintenanceWindowTaskTypeStepFunctions is a MaintenanceWindowTaskType enum value + MaintenanceWindowTaskTypeStepFunctions = "STEP_FUNCTIONS" + + // MaintenanceWindowTaskTypeLambda is a MaintenanceWindowTaskType enum value + MaintenanceWindowTaskTypeLambda = "LAMBDA" +) + +const ( + // NotificationEventAll is a NotificationEvent enum value + NotificationEventAll = "All" + + // NotificationEventInProgress is a NotificationEvent enum value + NotificationEventInProgress = "InProgress" + + // NotificationEventSuccess is a NotificationEvent enum value + NotificationEventSuccess = "Success" + + // NotificationEventTimedOut is a NotificationEvent enum value + NotificationEventTimedOut = "TimedOut" + + // NotificationEventCancelled is a NotificationEvent enum value + NotificationEventCancelled = "Cancelled" + + // NotificationEventFailed is a NotificationEvent enum value + NotificationEventFailed = "Failed" +) + +const ( + // NotificationTypeCommand is a NotificationType enum value + NotificationTypeCommand = "Command" + + // NotificationTypeInvocation is a NotificationType enum value + NotificationTypeInvocation = "Invocation" +) + +const ( + // OperatingSystemWindows is a OperatingSystem enum value + OperatingSystemWindows = "WINDOWS" + + // OperatingSystemAmazonLinux is a OperatingSystem enum value + OperatingSystemAmazonLinux = 
"AMAZON_LINUX" + + // OperatingSystemUbuntu is a OperatingSystem enum value + OperatingSystemUbuntu = "UBUNTU" + + // OperatingSystemRedhatEnterpriseLinux is a OperatingSystem enum value + OperatingSystemRedhatEnterpriseLinux = "REDHAT_ENTERPRISE_LINUX" + + // OperatingSystemSuse is a OperatingSystem enum value + OperatingSystemSuse = "SUSE" + + // OperatingSystemCentos is a OperatingSystem enum value + OperatingSystemCentos = "CENTOS" +) + +const ( + // ParameterTypeString is a ParameterType enum value + ParameterTypeString = "String" + + // ParameterTypeStringList is a ParameterType enum value + ParameterTypeStringList = "StringList" + + // ParameterTypeSecureString is a ParameterType enum value + ParameterTypeSecureString = "SecureString" +) + +const ( + // ParametersFilterKeyName is a ParametersFilterKey enum value + ParametersFilterKeyName = "Name" + + // ParametersFilterKeyType is a ParametersFilterKey enum value + ParametersFilterKeyType = "Type" + + // ParametersFilterKeyKeyId is a ParametersFilterKey enum value + ParametersFilterKeyKeyId = "KeyId" +) + +const ( + // PatchComplianceDataStateInstalled is a PatchComplianceDataState enum value + PatchComplianceDataStateInstalled = "INSTALLED" + + // PatchComplianceDataStateInstalledOther is a PatchComplianceDataState enum value + PatchComplianceDataStateInstalledOther = "INSTALLED_OTHER" + + // PatchComplianceDataStateMissing is a PatchComplianceDataState enum value + PatchComplianceDataStateMissing = "MISSING" + + // PatchComplianceDataStateNotApplicable is a PatchComplianceDataState enum value + PatchComplianceDataStateNotApplicable = "NOT_APPLICABLE" + + // PatchComplianceDataStateFailed is a PatchComplianceDataState enum value + PatchComplianceDataStateFailed = "FAILED" +) + +const ( + // PatchComplianceLevelCritical is a PatchComplianceLevel enum value + PatchComplianceLevelCritical = "CRITICAL" + + // PatchComplianceLevelHigh is a PatchComplianceLevel enum value + PatchComplianceLevelHigh = "HIGH" + + // PatchComplianceLevelMedium is a PatchComplianceLevel enum value + PatchComplianceLevelMedium = "MEDIUM" + + // PatchComplianceLevelLow is a PatchComplianceLevel enum value + PatchComplianceLevelLow = "LOW" + + // PatchComplianceLevelInformational is a PatchComplianceLevel enum value + PatchComplianceLevelInformational = "INFORMATIONAL" + + // PatchComplianceLevelUnspecified is a PatchComplianceLevel enum value + PatchComplianceLevelUnspecified = "UNSPECIFIED" +) + +const ( + // PatchDeploymentStatusApproved is a PatchDeploymentStatus enum value + PatchDeploymentStatusApproved = "APPROVED" + + // PatchDeploymentStatusPendingApproval is a PatchDeploymentStatus enum value + PatchDeploymentStatusPendingApproval = "PENDING_APPROVAL" + + // PatchDeploymentStatusExplicitApproved is a PatchDeploymentStatus enum value + PatchDeploymentStatusExplicitApproved = "EXPLICIT_APPROVED" + + // PatchDeploymentStatusExplicitRejected is a PatchDeploymentStatus enum value + PatchDeploymentStatusExplicitRejected = "EXPLICIT_REJECTED" +) + +const ( + // PatchFilterKeyProduct is a PatchFilterKey enum value + PatchFilterKeyProduct = "PRODUCT" + + // PatchFilterKeyClassification is a PatchFilterKey enum value + PatchFilterKeyClassification = "CLASSIFICATION" + + // PatchFilterKeyMsrcSeverity is a PatchFilterKey enum value + PatchFilterKeyMsrcSeverity = "MSRC_SEVERITY" + + // PatchFilterKeyPatchId is a PatchFilterKey enum value + PatchFilterKeyPatchId = "PATCH_ID" + + // PatchFilterKeySection is a PatchFilterKey enum value + PatchFilterKeySection = 
"SECTION" + + // PatchFilterKeyPriority is a PatchFilterKey enum value + PatchFilterKeyPriority = "PRIORITY" + + // PatchFilterKeySeverity is a PatchFilterKey enum value + PatchFilterKeySeverity = "SEVERITY" +) + +const ( + // PatchOperationTypeScan is a PatchOperationType enum value + PatchOperationTypeScan = "Scan" + + // PatchOperationTypeInstall is a PatchOperationType enum value + PatchOperationTypeInstall = "Install" +) + +const ( + // PingStatusOnline is a PingStatus enum value + PingStatusOnline = "Online" + + // PingStatusConnectionLost is a PingStatus enum value + PingStatusConnectionLost = "ConnectionLost" + + // PingStatusInactive is a PingStatus enum value + PingStatusInactive = "Inactive" +) + +const ( + // PlatformTypeWindows is a PlatformType enum value + PlatformTypeWindows = "Windows" + + // PlatformTypeLinux is a PlatformType enum value + PlatformTypeLinux = "Linux" +) + +const ( + // ResourceDataSyncS3FormatJsonSerDe is a ResourceDataSyncS3Format enum value + ResourceDataSyncS3FormatJsonSerDe = "JsonSerDe" +) + +const ( + // ResourceTypeManagedInstance is a ResourceType enum value + ResourceTypeManagedInstance = "ManagedInstance" + + // ResourceTypeDocument is a ResourceType enum value + ResourceTypeDocument = "Document" + + // ResourceTypeEc2instance is a ResourceType enum value + ResourceTypeEc2instance = "EC2Instance" +) + +const ( + // ResourceTypeForTaggingDocument is a ResourceTypeForTagging enum value + ResourceTypeForTaggingDocument = "Document" + + // ResourceTypeForTaggingManagedInstance is a ResourceTypeForTagging enum value + ResourceTypeForTaggingManagedInstance = "ManagedInstance" + + // ResourceTypeForTaggingMaintenanceWindow is a ResourceTypeForTagging enum value + ResourceTypeForTaggingMaintenanceWindow = "MaintenanceWindow" + + // ResourceTypeForTaggingParameter is a ResourceTypeForTagging enum value + ResourceTypeForTaggingParameter = "Parameter" + + // ResourceTypeForTaggingPatchBaseline is a ResourceTypeForTagging enum value + ResourceTypeForTaggingPatchBaseline = "PatchBaseline" +) + +const ( + // SignalTypeApprove is a SignalType enum value + SignalTypeApprove = "Approve" + + // SignalTypeReject is a SignalType enum value + SignalTypeReject = "Reject" + + // SignalTypeStartStep is a SignalType enum value + SignalTypeStartStep = "StartStep" + + // SignalTypeStopStep is a SignalType enum value + SignalTypeStopStep = "StopStep" + + // SignalTypeResume is a SignalType enum value + SignalTypeResume = "Resume" +) + +const ( + // StepExecutionFilterKeyStartTimeBefore is a StepExecutionFilterKey enum value + StepExecutionFilterKeyStartTimeBefore = "StartTimeBefore" + + // StepExecutionFilterKeyStartTimeAfter is a StepExecutionFilterKey enum value + StepExecutionFilterKeyStartTimeAfter = "StartTimeAfter" + + // StepExecutionFilterKeyStepExecutionStatus is a StepExecutionFilterKey enum value + StepExecutionFilterKeyStepExecutionStatus = "StepExecutionStatus" + + // StepExecutionFilterKeyStepExecutionId is a StepExecutionFilterKey enum value + StepExecutionFilterKeyStepExecutionId = "StepExecutionId" + + // StepExecutionFilterKeyStepName is a StepExecutionFilterKey enum value + StepExecutionFilterKeyStepName = "StepName" + + // StepExecutionFilterKeyAction is a StepExecutionFilterKey enum value + StepExecutionFilterKeyAction = "Action" +) + +const ( + // StopTypeComplete is a StopType enum value + StopTypeComplete = "Complete" + + // StopTypeCancel is a StopType enum value + StopTypeCancel = "Cancel" +) diff --git 
a/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go new file mode 100644 index 00000000..4f18dadc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go @@ -0,0 +1,44 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package ssm provides the client and types for making API +// requests to Amazon Simple Systems Manager (SSM). +// +// AWS Systems Manager is a collection of capabilities that helps you automate +// management tasks such as collecting system inventory, applying operating +// system (OS) patches, automating the creation of Amazon Machine Images (AMIs), +// and configuring operating systems (OSs) and applications at scale. Systems +// Manager lets you remotely and securely manage the configuration of your managed +// instances. A managed instance is any Amazon EC2 instance or on-premises machine +// in your hybrid environment that has been configured for Systems Manager. +// +// This reference is intended to be used with the AWS Systems Manager User Guide +// (http://docs.aws.amazon.com/systems-manager/latest/userguide/). +// +// To get started, verify prerequisites and configure managed instances. For +// more information, see Systems Manager Prerequisites (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up.html). +// +// For information about other API actions you can perform on Amazon EC2 instances, +// see the Amazon EC2 API Reference (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/). +// For information about how to use a Query API, see Making API Requests (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html). +// +// See https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06 for more information on this service. +// +// See ssm package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/ssm/ +// +// Using the Client +// +// To contact Amazon Simple Systems Manager (SSM) with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Simple Systems Manager (SSM) client SSM for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/ssm/#New +package ssm diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go new file mode 100644 index 00000000..3321d998 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go @@ -0,0 +1,637 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ssm + +const ( + + // ErrCodeAlreadyExistsException for service response error code + // "AlreadyExistsException". + // + // Error returned if an attempt is made to register a patch group with a patch + // baseline that is already registered with a different patch baseline. + ErrCodeAlreadyExistsException = "AlreadyExistsException" + + // ErrCodeAssociatedInstances for service response error code + // "AssociatedInstances". + // + // You must disassociate a document from all instances before you can delete + // it. 
+ ErrCodeAssociatedInstances = "AssociatedInstances" + + // ErrCodeAssociationAlreadyExists for service response error code + // "AssociationAlreadyExists". + // + // The specified association already exists. + ErrCodeAssociationAlreadyExists = "AssociationAlreadyExists" + + // ErrCodeAssociationDoesNotExist for service response error code + // "AssociationDoesNotExist". + // + // The specified association does not exist. + ErrCodeAssociationDoesNotExist = "AssociationDoesNotExist" + + // ErrCodeAssociationLimitExceeded for service response error code + // "AssociationLimitExceeded". + // + // You can have at most 2,000 active associations. + ErrCodeAssociationLimitExceeded = "AssociationLimitExceeded" + + // ErrCodeAssociationVersionLimitExceeded for service response error code + // "AssociationVersionLimitExceeded". + // + // You have reached the maximum number versions allowed for an association. + // Each association has a limit of 1,000 versions. + ErrCodeAssociationVersionLimitExceeded = "AssociationVersionLimitExceeded" + + // ErrCodeAutomationDefinitionNotFoundException for service response error code + // "AutomationDefinitionNotFoundException". + // + // An Automation document with the specified name could not be found. + ErrCodeAutomationDefinitionNotFoundException = "AutomationDefinitionNotFoundException" + + // ErrCodeAutomationDefinitionVersionNotFoundException for service response error code + // "AutomationDefinitionVersionNotFoundException". + // + // An Automation document with the specified name and version could not be found. + ErrCodeAutomationDefinitionVersionNotFoundException = "AutomationDefinitionVersionNotFoundException" + + // ErrCodeAutomationExecutionLimitExceededException for service response error code + // "AutomationExecutionLimitExceededException". + // + // The number of simultaneously running Automation executions exceeded the allowable + // limit. + ErrCodeAutomationExecutionLimitExceededException = "AutomationExecutionLimitExceededException" + + // ErrCodeAutomationExecutionNotFoundException for service response error code + // "AutomationExecutionNotFoundException". + // + // There is no automation execution information for the requested automation + // execution ID. + ErrCodeAutomationExecutionNotFoundException = "AutomationExecutionNotFoundException" + + // ErrCodeAutomationStepNotFoundException for service response error code + // "AutomationStepNotFoundException". + // + // The specified step name and execution ID don't exist. Verify the information + // and try again. + ErrCodeAutomationStepNotFoundException = "AutomationStepNotFoundException" + + // ErrCodeComplianceTypeCountLimitExceededException for service response error code + // "ComplianceTypeCountLimitExceededException". + // + // You specified too many custom compliance types. You can specify a maximum + // of 10 different types. + ErrCodeComplianceTypeCountLimitExceededException = "ComplianceTypeCountLimitExceededException" + + // ErrCodeCustomSchemaCountLimitExceededException for service response error code + // "CustomSchemaCountLimitExceededException". + // + // You have exceeded the limit for custom schemas. Delete one or more custom + // schemas and try again. + ErrCodeCustomSchemaCountLimitExceededException = "CustomSchemaCountLimitExceededException" + + // ErrCodeDocumentAlreadyExists for service response error code + // "DocumentAlreadyExists". + // + // The specified document already exists. 
+ ErrCodeDocumentAlreadyExists = "DocumentAlreadyExists" + + // ErrCodeDocumentLimitExceeded for service response error code + // "DocumentLimitExceeded". + // + // You can have at most 200 active Systems Manager documents. + ErrCodeDocumentLimitExceeded = "DocumentLimitExceeded" + + // ErrCodeDocumentPermissionLimit for service response error code + // "DocumentPermissionLimit". + // + // The document cannot be shared with more AWS user accounts. You can share + // a document with a maximum of 20 accounts. You can publicly share up to five + // documents. If you need to increase this limit, contact AWS Support. + ErrCodeDocumentPermissionLimit = "DocumentPermissionLimit" + + // ErrCodeDocumentVersionLimitExceeded for service response error code + // "DocumentVersionLimitExceeded". + // + // The document has too many versions. Delete one or more document versions + // and try again. + ErrCodeDocumentVersionLimitExceeded = "DocumentVersionLimitExceeded" + + // ErrCodeDoesNotExistException for service response error code + // "DoesNotExistException". + // + // Error returned when the ID specified for a resource, such as a Maintenance + // Window or Patch baseline, doesn't exist. + // + // For information about resource limits in Systems Manager, see AWS Systems + // Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). + ErrCodeDoesNotExistException = "DoesNotExistException" + + // ErrCodeDuplicateDocumentContent for service response error code + // "DuplicateDocumentContent". + // + // The content of the association document matches another document. Change + // the content of the document and try again. + ErrCodeDuplicateDocumentContent = "DuplicateDocumentContent" + + // ErrCodeDuplicateInstanceId for service response error code + // "DuplicateInstanceId". + // + // You cannot specify an instance ID in more than one association. + ErrCodeDuplicateInstanceId = "DuplicateInstanceId" + + // ErrCodeFeatureNotAvailableException for service response error code + // "FeatureNotAvailableException". + // + // You attempted to register a LAMBDA or STEP_FUNCTION task in a region where + // the corresponding service is not available. + ErrCodeFeatureNotAvailableException = "FeatureNotAvailableException" + + // ErrCodeHierarchyLevelLimitExceededException for service response error code + // "HierarchyLevelLimitExceededException". + // + // A hierarchy can have a maximum of 15 levels. For more information, see Working + // with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html). + ErrCodeHierarchyLevelLimitExceededException = "HierarchyLevelLimitExceededException" + + // ErrCodeHierarchyTypeMismatchException for service response error code + // "HierarchyTypeMismatchException". + // + // Parameter Store does not support changing a parameter type in a hierarchy. + // For example, you can't change a parameter from a String type to a SecureString + // type. You must create a new, unique parameter. + ErrCodeHierarchyTypeMismatchException = "HierarchyTypeMismatchException" + + // ErrCodeIdempotentParameterMismatch for service response error code + // "IdempotentParameterMismatch". + // + // Error returned when an idempotent operation is retried and the parameters + // don't match the original call to the API with the same idempotency token. 
+ ErrCodeIdempotentParameterMismatch = "IdempotentParameterMismatch" + + // ErrCodeInternalServerError for service response error code + // "InternalServerError". + // + // An error occurred on the server side. + ErrCodeInternalServerError = "InternalServerError" + + // ErrCodeInvalidActivation for service response error code + // "InvalidActivation". + // + // The activation is not valid. The activation might have been deleted, or the + // ActivationId and the ActivationCode do not match. + ErrCodeInvalidActivation = "InvalidActivation" + + // ErrCodeInvalidActivationId for service response error code + // "InvalidActivationId". + // + // The activation ID is not valid. Verify that you entered the correct ActivationId + // or ActivationCode and try again. + ErrCodeInvalidActivationId = "InvalidActivationId" + + // ErrCodeInvalidAllowedPatternException for service response error code + // "InvalidAllowedPatternException". + // + // The request does not meet the regular expression requirement. + ErrCodeInvalidAllowedPatternException = "InvalidAllowedPatternException" + + // ErrCodeInvalidAssociationVersion for service response error code + // "InvalidAssociationVersion". + // + // The version you specified is not valid. Use ListAssociationVersions to view + // all versions of an association according to the association ID. Or, use the + // $LATEST parameter to view the latest version of the association. + ErrCodeInvalidAssociationVersion = "InvalidAssociationVersion" + + // ErrCodeInvalidAutomationExecutionParametersException for service response error code + // "InvalidAutomationExecutionParametersException". + // + // The supplied parameters for invoking the specified Automation document are + // incorrect. For example, they may not match the set of parameters permitted + // for the specified Automation document. + ErrCodeInvalidAutomationExecutionParametersException = "InvalidAutomationExecutionParametersException" + + // ErrCodeInvalidAutomationSignalException for service response error code + // "InvalidAutomationSignalException". + // + // The signal is not valid for the current Automation execution. + ErrCodeInvalidAutomationSignalException = "InvalidAutomationSignalException" + + // ErrCodeInvalidAutomationStatusUpdateException for service response error code + // "InvalidAutomationStatusUpdateException". + // + // The specified update status operation is not valid. + ErrCodeInvalidAutomationStatusUpdateException = "InvalidAutomationStatusUpdateException" + + // ErrCodeInvalidCommandId for service response error code + // "InvalidCommandId". + ErrCodeInvalidCommandId = "InvalidCommandId" + + // ErrCodeInvalidDeleteInventoryParametersException for service response error code + // "InvalidDeleteInventoryParametersException". + // + // One or more of the parameters specified for the delete operation is not valid. + // Verify all parameters and try again. + ErrCodeInvalidDeleteInventoryParametersException = "InvalidDeleteInventoryParametersException" + + // ErrCodeInvalidDeletionIdException for service response error code + // "InvalidDeletionIdException". + // + // The ID specified for the delete operation does not exist or is not valid. + // Verify the ID and try again. + ErrCodeInvalidDeletionIdException = "InvalidDeletionIdException" + + // ErrCodeInvalidDocument for service response error code + // "InvalidDocument". + // + // The specified document does not exist.
+ ErrCodeInvalidDocument = "InvalidDocument" + + // ErrCodeInvalidDocumentContent for service response error code + // "InvalidDocumentContent". + // + // The content for the document is not valid. + ErrCodeInvalidDocumentContent = "InvalidDocumentContent" + + // ErrCodeInvalidDocumentOperation for service response error code + // "InvalidDocumentOperation". + // + // You attempted to delete a document while it is still shared. You must stop + // sharing the document before you can delete it. + ErrCodeInvalidDocumentOperation = "InvalidDocumentOperation" + + // ErrCodeInvalidDocumentSchemaVersion for service response error code + // "InvalidDocumentSchemaVersion". + // + // The version of the document schema is not supported. + ErrCodeInvalidDocumentSchemaVersion = "InvalidDocumentSchemaVersion" + + // ErrCodeInvalidDocumentVersion for service response error code + // "InvalidDocumentVersion". + // + // The document version is not valid or does not exist. + ErrCodeInvalidDocumentVersion = "InvalidDocumentVersion" + + // ErrCodeInvalidFilter for service response error code + // "InvalidFilter". + // + // The filter name is not valid. Verify that you entered the correct name and + // try again. + ErrCodeInvalidFilter = "InvalidFilter" + + // ErrCodeInvalidFilterKey for service response error code + // "InvalidFilterKey". + // + // The specified key is not valid. + ErrCodeInvalidFilterKey = "InvalidFilterKey" + + // ErrCodeInvalidFilterOption for service response error code + // "InvalidFilterOption". + // + // The specified filter option is not valid. Valid options are Equals and BeginsWith. + // For Path filter, valid options are Recursive and OneLevel. + ErrCodeInvalidFilterOption = "InvalidFilterOption" + + // ErrCodeInvalidFilterValue for service response error code + // "InvalidFilterValue". + // + // The filter value is not valid. Verify the value and try again. + ErrCodeInvalidFilterValue = "InvalidFilterValue" + + // ErrCodeInvalidInstanceId for service response error code + // "InvalidInstanceId". + // + // The following problems can cause this exception: + // + // You do not have permission to access the instance. + // + // The SSM Agent is not running. On managed instances and Linux instances, verify + // that the SSM Agent is running. On EC2 Windows instances, verify that the + // EC2Config service is running. + // + // The SSM Agent or EC2Config service is not registered to the SSM endpoint. + // Try reinstalling the SSM Agent or EC2Config service. + // + // The instance is not in a valid state. Valid states are: Running, Pending, Stopped, + // Stopping. Invalid states are: Shutting-down and Terminated. + ErrCodeInvalidInstanceId = "InvalidInstanceId" + + // ErrCodeInvalidInstanceInformationFilterValue for service response error code + // "InvalidInstanceInformationFilterValue". + // + // The specified filter value is not valid. + ErrCodeInvalidInstanceInformationFilterValue = "InvalidInstanceInformationFilterValue" + + // ErrCodeInvalidInventoryItemContextException for service response error code + // "InvalidInventoryItemContextException". + // + // You specified invalid keys or values in the Context attribute for InventoryItem. + // Verify the keys and values, and try again. + ErrCodeInvalidInventoryItemContextException = "InvalidInventoryItemContextException" + + // ErrCodeInvalidInventoryRequestException for service response error code + // "InvalidInventoryRequestException". + // + // The request is not valid.
+ ErrCodeInvalidInventoryRequestException = "InvalidInventoryRequestException" + + // ErrCodeInvalidItemContentException for service response error code + // "InvalidItemContentException". + // + // One or more content items is not valid. + ErrCodeInvalidItemContentException = "InvalidItemContentException" + + // ErrCodeInvalidKeyId for service response error code + // "InvalidKeyId". + // + // The query key ID is not valid. + ErrCodeInvalidKeyId = "InvalidKeyId" + + // ErrCodeInvalidNextToken for service response error code + // "InvalidNextToken". + // + // The specified token is not valid. + ErrCodeInvalidNextToken = "InvalidNextToken" + + // ErrCodeInvalidNotificationConfig for service response error code + // "InvalidNotificationConfig". + // + // One or more configuration items is not valid. Verify that a valid Amazon + // Resource Name (ARN) was provided for an Amazon SNS topic. + ErrCodeInvalidNotificationConfig = "InvalidNotificationConfig" + + // ErrCodeInvalidOptionException for service response error code + // "InvalidOptionException". + // + // The delete inventory option specified is not valid. Verify the option and + // try again. + ErrCodeInvalidOptionException = "InvalidOptionException" + + // ErrCodeInvalidOutputFolder for service response error code + // "InvalidOutputFolder". + // + // The S3 bucket does not exist. + ErrCodeInvalidOutputFolder = "InvalidOutputFolder" + + // ErrCodeInvalidOutputLocation for service response error code + // "InvalidOutputLocation". + // + // The output location is not valid or does not exist. + ErrCodeInvalidOutputLocation = "InvalidOutputLocation" + + // ErrCodeInvalidParameters for service response error code + // "InvalidParameters". + // + // You must specify values for all required parameters in the Systems Manager + // document. You can only supply values to parameters defined in the Systems + // Manager document. + ErrCodeInvalidParameters = "InvalidParameters" + + // ErrCodeInvalidPermissionType for service response error code + // "InvalidPermissionType". + // + // The permission type is not supported. Share is the only supported permission + // type. + ErrCodeInvalidPermissionType = "InvalidPermissionType" + + // ErrCodeInvalidPluginName for service response error code + // "InvalidPluginName". + // + // The plugin name is not valid. + ErrCodeInvalidPluginName = "InvalidPluginName" + + // ErrCodeInvalidResourceId for service response error code + // "InvalidResourceId". + // + // The resource ID is not valid. Verify that you entered the correct ID and + // try again. + ErrCodeInvalidResourceId = "InvalidResourceId" + + // ErrCodeInvalidResourceType for service response error code + // "InvalidResourceType". + // + // The resource type is not valid. For example, if you are attempting to tag + // an instance, the instance must be a registered, managed instance. + ErrCodeInvalidResourceType = "InvalidResourceType" + + // ErrCodeInvalidResultAttributeException for service response error code + // "InvalidResultAttributeException". + // + // The specified inventory item result attribute is not valid. + ErrCodeInvalidResultAttributeException = "InvalidResultAttributeException" + + // ErrCodeInvalidRole for service response error code + // "InvalidRole". + // + // The role name can't contain invalid characters. Also verify that you specified + // an IAM role for notifications that includes the required trust policy. 
For + // information about configuring the IAM role for Run Command notifications, + // see Configuring Amazon SNS Notifications for Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/rc-sns-notifications.html) + // in the AWS Systems Manager User Guide. + ErrCodeInvalidRole = "InvalidRole" + + // ErrCodeInvalidSchedule for service response error code + // "InvalidSchedule". + // + // The schedule is invalid. Verify your cron or rate expression and try again. + ErrCodeInvalidSchedule = "InvalidSchedule" + + // ErrCodeInvalidTarget for service response error code + // "InvalidTarget". + // + // The target is not valid or does not exist. It might not be configured for + // EC2 Systems Manager or you might not have permission to perform the operation. + ErrCodeInvalidTarget = "InvalidTarget" + + // ErrCodeInvalidTypeNameException for service response error code + // "InvalidTypeNameException". + // + // The parameter type name is not valid. + ErrCodeInvalidTypeNameException = "InvalidTypeNameException" + + // ErrCodeInvalidUpdate for service response error code + // "InvalidUpdate". + // + // The update is not valid. + ErrCodeInvalidUpdate = "InvalidUpdate" + + // ErrCodeInvocationDoesNotExist for service response error code + // "InvocationDoesNotExist". + // + // The command ID and instance ID you specified did not match any invocations. + // Verify the command ID and the instance ID and try again. + ErrCodeInvocationDoesNotExist = "InvocationDoesNotExist" + + // ErrCodeItemContentMismatchException for service response error code + // "ItemContentMismatchException". + // + // The inventory item has invalid content. + ErrCodeItemContentMismatchException = "ItemContentMismatchException" + + // ErrCodeItemSizeLimitExceededException for service response error code + // "ItemSizeLimitExceededException". + // + // The inventory item size has exceeded the size limit. + ErrCodeItemSizeLimitExceededException = "ItemSizeLimitExceededException" + + // ErrCodeMaxDocumentSizeExceeded for service response error code + // "MaxDocumentSizeExceeded". + // + // The size limit of a document is 64 KB. + ErrCodeMaxDocumentSizeExceeded = "MaxDocumentSizeExceeded" + + // ErrCodeParameterAlreadyExists for service response error code + // "ParameterAlreadyExists". + // + // The parameter already exists. You can't create duplicate parameters. + ErrCodeParameterAlreadyExists = "ParameterAlreadyExists" + + // ErrCodeParameterLimitExceeded for service response error code + // "ParameterLimitExceeded". + // + // You have exceeded the number of parameters for this AWS account. Delete one + // or more parameters and try again. + ErrCodeParameterLimitExceeded = "ParameterLimitExceeded" + + // ErrCodeParameterMaxVersionLimitExceeded for service response error code + // "ParameterMaxVersionLimitExceeded". + // + // The parameter exceeded the maximum number of allowed versions. + ErrCodeParameterMaxVersionLimitExceeded = "ParameterMaxVersionLimitExceeded" + + // ErrCodeParameterNotFound for service response error code + // "ParameterNotFound". + // + // The parameter could not be found. Verify the name and try again. + ErrCodeParameterNotFound = "ParameterNotFound" + + // ErrCodeParameterPatternMismatchException for service response error code + // "ParameterPatternMismatchException". + // + // The parameter name is not valid.
+ ErrCodeParameterPatternMismatchException = "ParameterPatternMismatchException" + + // ErrCodeParameterVersionNotFound for service response error code + // "ParameterVersionNotFound". + // + // The specified parameter version was not found. Verify the parameter name + // and version, and try again. + ErrCodeParameterVersionNotFound = "ParameterVersionNotFound" + + // ErrCodeResourceDataSyncAlreadyExistsException for service response error code + // "ResourceDataSyncAlreadyExistsException". + // + // A sync configuration with the same name already exists. + ErrCodeResourceDataSyncAlreadyExistsException = "ResourceDataSyncAlreadyExistsException" + + // ErrCodeResourceDataSyncCountExceededException for service response error code + // "ResourceDataSyncCountExceededException". + // + // You have exceeded the allowed maximum sync configurations. + ErrCodeResourceDataSyncCountExceededException = "ResourceDataSyncCountExceededException" + + // ErrCodeResourceDataSyncInvalidConfigurationException for service response error code + // "ResourceDataSyncInvalidConfigurationException". + // + // The specified sync configuration is invalid. + ErrCodeResourceDataSyncInvalidConfigurationException = "ResourceDataSyncInvalidConfigurationException" + + // ErrCodeResourceDataSyncNotFoundException for service response error code + // "ResourceDataSyncNotFoundException". + // + // The specified sync name was not found. + ErrCodeResourceDataSyncNotFoundException = "ResourceDataSyncNotFoundException" + + // ErrCodeResourceInUseException for service response error code + // "ResourceInUseException". + // + // Error returned if an attempt is made to delete a patch baseline that is registered + // for a patch group. + ErrCodeResourceInUseException = "ResourceInUseException" + + // ErrCodeResourceLimitExceededException for service response error code + // "ResourceLimitExceededException". + // + // Error returned when the caller has exceeded the default resource limits. + // For example, too many Maintenance Windows or Patch baselines have been created. + // + // For information about resource limits in Systems Manager, see AWS Systems + // Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). + ErrCodeResourceLimitExceededException = "ResourceLimitExceededException" + + // ErrCodeStatusUnchanged for service response error code + // "StatusUnchanged". + // + // The updated status is the same as the current status. + ErrCodeStatusUnchanged = "StatusUnchanged" + + // ErrCodeSubTypeCountLimitExceededException for service response error code + // "SubTypeCountLimitExceededException". + // + // The sub-type count exceeded the limit for the inventory type. + ErrCodeSubTypeCountLimitExceededException = "SubTypeCountLimitExceededException" + + // ErrCodeTargetInUseException for service response error code + // "TargetInUseException". + // + // You specified the Safe option for the DeregisterTargetFromMaintenanceWindow + // operation, but the target is still referenced in a task. + ErrCodeTargetInUseException = "TargetInUseException" + + // ErrCodeTooManyTagsError for service response error code + // "TooManyTagsError". + // + // The Targets parameter includes too many tags. Remove one or more tags and + // try the command again. + ErrCodeTooManyTagsError = "TooManyTagsError" + + // ErrCodeTooManyUpdates for service response error code + // "TooManyUpdates". + // + // There are concurrent updates for a resource that supports one update at a + // time. 
+ ErrCodeTooManyUpdates = "TooManyUpdates" + + // ErrCodeTotalSizeLimitExceededException for service response error code + // "TotalSizeLimitExceededException". + // + // The size of inventory data has exceeded the total size limit for the resource. + ErrCodeTotalSizeLimitExceededException = "TotalSizeLimitExceededException" + + // ErrCodeUnsupportedInventoryItemContextException for service response error code + // "UnsupportedInventoryItemContextException". + // + // The Context attribute that you specified for the InventoryItem is not allowed + // for this inventory type. You can only use the Context attribute with inventory + // types like AWS:ComplianceItem. + ErrCodeUnsupportedInventoryItemContextException = "UnsupportedInventoryItemContextException" + + // ErrCodeUnsupportedInventorySchemaVersionException for service response error code + // "UnsupportedInventorySchemaVersionException". + // + // Inventory item type schema version has to match supported versions in the + // service. Check output of GetInventorySchema to see the available schema version + // for each type. + ErrCodeUnsupportedInventorySchemaVersionException = "UnsupportedInventorySchemaVersionException" + + // ErrCodeUnsupportedOperatingSystem for service response error code + // "UnsupportedOperatingSystem". + // + // The operating system you specified is not supported, or the operation is + // not supported for the operating system. Valid operating systems include: + // Windows, AmazonLinux, RedhatEnterpriseLinux, and Ubuntu. + ErrCodeUnsupportedOperatingSystem = "UnsupportedOperatingSystem" + + // ErrCodeUnsupportedParameterType for service response error code + // "UnsupportedParameterType". + // + // The parameter type is not supported. + ErrCodeUnsupportedParameterType = "UnsupportedParameterType" + + // ErrCodeUnsupportedPlatformType for service response error code + // "UnsupportedPlatformType". + // + // The document does not support the platform type of the given instance ID(s). + // For example, you sent a document for a Windows instance to a Linux instance. + ErrCodeUnsupportedPlatformType = "UnsupportedPlatformType" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go new file mode 100644 index 00000000..d414fb7d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go @@ -0,0 +1,95 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ssm + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// SSM provides the API operation methods for making requests to +// Amazon Simple Systems Manager (SSM). See this package's package overview docs +// for details on the service. +// +// SSM methods are safe to use concurrently. It is not safe to +// modify or mutate any of the struct's properties though. +type SSM struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "ssm" // Service endpoint prefix API calls made to. + EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata.
+) + +// New creates a new instance of the SSM client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a SSM client from just a session. +// svc := ssm.New(mySession) +// +// // Create a SSM client with additional configuration +// svc := ssm.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *SSM { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *SSM { + svc := &SSM{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2014-11-06", + JSONVersion: "1.1", + TargetPrefix: "AmazonSSM", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a SSM operation and runs any +// custom request initialization. +func (c *SSM) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/ssmiface/interface.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/ssmiface/interface.go new file mode 100644 index 00000000..05c523fb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/ssmiface/interface.go @@ -0,0 +1,487 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package ssmiface provides an interface to enable mocking the Amazon Simple Systems Manager (SSM) service client +// for testing your code. +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. +package ssmiface + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/service/ssm" +) + +// SSMAPI provides an interface to enable mocking the +// ssm.SSM service client's API operation, +// paginators, and waiters. This make unit testing your code that calls out +// to the SDK's service client's calls easier. +// +// The best way to use this interface is so the SDK's service client's calls +// can be stubbed out for unit testing your code with the SDK without needing +// to inject custom request handlers into the SDK's request pipeline. +// +// // myFunc uses an SDK service client to make a request to +// // Amazon Simple Systems Manager (SSM). 
+// func myFunc(svc ssmiface.SSMAPI) bool { +// // Make svc.AddTagsToResource request +// } +// +// func main() { +// sess := session.New() +// svc := ssm.New(sess) +// +// myFunc(svc) +// } +// +// In your _test.go file: +// +// // Define a mock struct to be used in your unit tests of myFunc. +// type mockSSMClient struct { +// ssmiface.SSMAPI +// } +// func (m *mockSSMClient) AddTagsToResource(input *ssm.AddTagsToResourceInput) (*ssm.AddTagsToResourceOutput, error) { +// // mock response/functionality +// } +// +// func TestMyFunc(t *testing.T) { +// // Setup Test +// mockSvc := &mockSSMClient{} +// +// myFunc(mockSvc) +// +// // Verify myFunc's functionality +// } +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. It's suggested to use the pattern above for testing, or using +// tooling to generate mocks to satisfy the interfaces. +type SSMAPI interface { + AddTagsToResource(*ssm.AddTagsToResourceInput) (*ssm.AddTagsToResourceOutput, error) + AddTagsToResourceWithContext(aws.Context, *ssm.AddTagsToResourceInput, ...request.Option) (*ssm.AddTagsToResourceOutput, error) + AddTagsToResourceRequest(*ssm.AddTagsToResourceInput) (*request.Request, *ssm.AddTagsToResourceOutput) + + CancelCommand(*ssm.CancelCommandInput) (*ssm.CancelCommandOutput, error) + CancelCommandWithContext(aws.Context, *ssm.CancelCommandInput, ...request.Option) (*ssm.CancelCommandOutput, error) + CancelCommandRequest(*ssm.CancelCommandInput) (*request.Request, *ssm.CancelCommandOutput) + + CreateActivation(*ssm.CreateActivationInput) (*ssm.CreateActivationOutput, error) + CreateActivationWithContext(aws.Context, *ssm.CreateActivationInput, ...request.Option) (*ssm.CreateActivationOutput, error) + CreateActivationRequest(*ssm.CreateActivationInput) (*request.Request, *ssm.CreateActivationOutput) + + CreateAssociation(*ssm.CreateAssociationInput) (*ssm.CreateAssociationOutput, error) + CreateAssociationWithContext(aws.Context, *ssm.CreateAssociationInput, ...request.Option) (*ssm.CreateAssociationOutput, error) + CreateAssociationRequest(*ssm.CreateAssociationInput) (*request.Request, *ssm.CreateAssociationOutput) + + CreateAssociationBatch(*ssm.CreateAssociationBatchInput) (*ssm.CreateAssociationBatchOutput, error) + CreateAssociationBatchWithContext(aws.Context, *ssm.CreateAssociationBatchInput, ...request.Option) (*ssm.CreateAssociationBatchOutput, error) + CreateAssociationBatchRequest(*ssm.CreateAssociationBatchInput) (*request.Request, *ssm.CreateAssociationBatchOutput) + + CreateDocument(*ssm.CreateDocumentInput) (*ssm.CreateDocumentOutput, error) + CreateDocumentWithContext(aws.Context, *ssm.CreateDocumentInput, ...request.Option) (*ssm.CreateDocumentOutput, error) + CreateDocumentRequest(*ssm.CreateDocumentInput) (*request.Request, *ssm.CreateDocumentOutput) + + CreateMaintenanceWindow(*ssm.CreateMaintenanceWindowInput) (*ssm.CreateMaintenanceWindowOutput, error) + CreateMaintenanceWindowWithContext(aws.Context, *ssm.CreateMaintenanceWindowInput, ...request.Option) (*ssm.CreateMaintenanceWindowOutput, error) + CreateMaintenanceWindowRequest(*ssm.CreateMaintenanceWindowInput) (*request.Request, *ssm.CreateMaintenanceWindowOutput) + + CreatePatchBaseline(*ssm.CreatePatchBaselineInput) (*ssm.CreatePatchBaselineOutput, error) + CreatePatchBaselineWithContext(aws.Context, *ssm.CreatePatchBaselineInput, ...request.Option) (*ssm.CreatePatchBaselineOutput, error) +
CreatePatchBaselineRequest(*ssm.CreatePatchBaselineInput) (*request.Request, *ssm.CreatePatchBaselineOutput) + + CreateResourceDataSync(*ssm.CreateResourceDataSyncInput) (*ssm.CreateResourceDataSyncOutput, error) + CreateResourceDataSyncWithContext(aws.Context, *ssm.CreateResourceDataSyncInput, ...request.Option) (*ssm.CreateResourceDataSyncOutput, error) + CreateResourceDataSyncRequest(*ssm.CreateResourceDataSyncInput) (*request.Request, *ssm.CreateResourceDataSyncOutput) + + DeleteActivation(*ssm.DeleteActivationInput) (*ssm.DeleteActivationOutput, error) + DeleteActivationWithContext(aws.Context, *ssm.DeleteActivationInput, ...request.Option) (*ssm.DeleteActivationOutput, error) + DeleteActivationRequest(*ssm.DeleteActivationInput) (*request.Request, *ssm.DeleteActivationOutput) + + DeleteAssociation(*ssm.DeleteAssociationInput) (*ssm.DeleteAssociationOutput, error) + DeleteAssociationWithContext(aws.Context, *ssm.DeleteAssociationInput, ...request.Option) (*ssm.DeleteAssociationOutput, error) + DeleteAssociationRequest(*ssm.DeleteAssociationInput) (*request.Request, *ssm.DeleteAssociationOutput) + + DeleteDocument(*ssm.DeleteDocumentInput) (*ssm.DeleteDocumentOutput, error) + DeleteDocumentWithContext(aws.Context, *ssm.DeleteDocumentInput, ...request.Option) (*ssm.DeleteDocumentOutput, error) + DeleteDocumentRequest(*ssm.DeleteDocumentInput) (*request.Request, *ssm.DeleteDocumentOutput) + + DeleteInventory(*ssm.DeleteInventoryInput) (*ssm.DeleteInventoryOutput, error) + DeleteInventoryWithContext(aws.Context, *ssm.DeleteInventoryInput, ...request.Option) (*ssm.DeleteInventoryOutput, error) + DeleteInventoryRequest(*ssm.DeleteInventoryInput) (*request.Request, *ssm.DeleteInventoryOutput) + + DeleteMaintenanceWindow(*ssm.DeleteMaintenanceWindowInput) (*ssm.DeleteMaintenanceWindowOutput, error) + DeleteMaintenanceWindowWithContext(aws.Context, *ssm.DeleteMaintenanceWindowInput, ...request.Option) (*ssm.DeleteMaintenanceWindowOutput, error) + DeleteMaintenanceWindowRequest(*ssm.DeleteMaintenanceWindowInput) (*request.Request, *ssm.DeleteMaintenanceWindowOutput) + + DeleteParameter(*ssm.DeleteParameterInput) (*ssm.DeleteParameterOutput, error) + DeleteParameterWithContext(aws.Context, *ssm.DeleteParameterInput, ...request.Option) (*ssm.DeleteParameterOutput, error) + DeleteParameterRequest(*ssm.DeleteParameterInput) (*request.Request, *ssm.DeleteParameterOutput) + + DeleteParameters(*ssm.DeleteParametersInput) (*ssm.DeleteParametersOutput, error) + DeleteParametersWithContext(aws.Context, *ssm.DeleteParametersInput, ...request.Option) (*ssm.DeleteParametersOutput, error) + DeleteParametersRequest(*ssm.DeleteParametersInput) (*request.Request, *ssm.DeleteParametersOutput) + + DeletePatchBaseline(*ssm.DeletePatchBaselineInput) (*ssm.DeletePatchBaselineOutput, error) + DeletePatchBaselineWithContext(aws.Context, *ssm.DeletePatchBaselineInput, ...request.Option) (*ssm.DeletePatchBaselineOutput, error) + DeletePatchBaselineRequest(*ssm.DeletePatchBaselineInput) (*request.Request, *ssm.DeletePatchBaselineOutput) + + DeleteResourceDataSync(*ssm.DeleteResourceDataSyncInput) (*ssm.DeleteResourceDataSyncOutput, error) + DeleteResourceDataSyncWithContext(aws.Context, *ssm.DeleteResourceDataSyncInput, ...request.Option) (*ssm.DeleteResourceDataSyncOutput, error) + DeleteResourceDataSyncRequest(*ssm.DeleteResourceDataSyncInput) (*request.Request, *ssm.DeleteResourceDataSyncOutput) + + DeregisterManagedInstance(*ssm.DeregisterManagedInstanceInput) (*ssm.DeregisterManagedInstanceOutput, error) + 
DeregisterManagedInstanceWithContext(aws.Context, *ssm.DeregisterManagedInstanceInput, ...request.Option) (*ssm.DeregisterManagedInstanceOutput, error) + DeregisterManagedInstanceRequest(*ssm.DeregisterManagedInstanceInput) (*request.Request, *ssm.DeregisterManagedInstanceOutput) + + DeregisterPatchBaselineForPatchGroup(*ssm.DeregisterPatchBaselineForPatchGroupInput) (*ssm.DeregisterPatchBaselineForPatchGroupOutput, error) + DeregisterPatchBaselineForPatchGroupWithContext(aws.Context, *ssm.DeregisterPatchBaselineForPatchGroupInput, ...request.Option) (*ssm.DeregisterPatchBaselineForPatchGroupOutput, error) + DeregisterPatchBaselineForPatchGroupRequest(*ssm.DeregisterPatchBaselineForPatchGroupInput) (*request.Request, *ssm.DeregisterPatchBaselineForPatchGroupOutput) + + DeregisterTargetFromMaintenanceWindow(*ssm.DeregisterTargetFromMaintenanceWindowInput) (*ssm.DeregisterTargetFromMaintenanceWindowOutput, error) + DeregisterTargetFromMaintenanceWindowWithContext(aws.Context, *ssm.DeregisterTargetFromMaintenanceWindowInput, ...request.Option) (*ssm.DeregisterTargetFromMaintenanceWindowOutput, error) + DeregisterTargetFromMaintenanceWindowRequest(*ssm.DeregisterTargetFromMaintenanceWindowInput) (*request.Request, *ssm.DeregisterTargetFromMaintenanceWindowOutput) + + DeregisterTaskFromMaintenanceWindow(*ssm.DeregisterTaskFromMaintenanceWindowInput) (*ssm.DeregisterTaskFromMaintenanceWindowOutput, error) + DeregisterTaskFromMaintenanceWindowWithContext(aws.Context, *ssm.DeregisterTaskFromMaintenanceWindowInput, ...request.Option) (*ssm.DeregisterTaskFromMaintenanceWindowOutput, error) + DeregisterTaskFromMaintenanceWindowRequest(*ssm.DeregisterTaskFromMaintenanceWindowInput) (*request.Request, *ssm.DeregisterTaskFromMaintenanceWindowOutput) + + DescribeActivations(*ssm.DescribeActivationsInput) (*ssm.DescribeActivationsOutput, error) + DescribeActivationsWithContext(aws.Context, *ssm.DescribeActivationsInput, ...request.Option) (*ssm.DescribeActivationsOutput, error) + DescribeActivationsRequest(*ssm.DescribeActivationsInput) (*request.Request, *ssm.DescribeActivationsOutput) + + DescribeActivationsPages(*ssm.DescribeActivationsInput, func(*ssm.DescribeActivationsOutput, bool) bool) error + DescribeActivationsPagesWithContext(aws.Context, *ssm.DescribeActivationsInput, func(*ssm.DescribeActivationsOutput, bool) bool, ...request.Option) error + + DescribeAssociation(*ssm.DescribeAssociationInput) (*ssm.DescribeAssociationOutput, error) + DescribeAssociationWithContext(aws.Context, *ssm.DescribeAssociationInput, ...request.Option) (*ssm.DescribeAssociationOutput, error) + DescribeAssociationRequest(*ssm.DescribeAssociationInput) (*request.Request, *ssm.DescribeAssociationOutput) + + DescribeAutomationExecutions(*ssm.DescribeAutomationExecutionsInput) (*ssm.DescribeAutomationExecutionsOutput, error) + DescribeAutomationExecutionsWithContext(aws.Context, *ssm.DescribeAutomationExecutionsInput, ...request.Option) (*ssm.DescribeAutomationExecutionsOutput, error) + DescribeAutomationExecutionsRequest(*ssm.DescribeAutomationExecutionsInput) (*request.Request, *ssm.DescribeAutomationExecutionsOutput) + + DescribeAutomationStepExecutions(*ssm.DescribeAutomationStepExecutionsInput) (*ssm.DescribeAutomationStepExecutionsOutput, error) + DescribeAutomationStepExecutionsWithContext(aws.Context, *ssm.DescribeAutomationStepExecutionsInput, ...request.Option) (*ssm.DescribeAutomationStepExecutionsOutput, error) + DescribeAutomationStepExecutionsRequest(*ssm.DescribeAutomationStepExecutionsInput) 
(*request.Request, *ssm.DescribeAutomationStepExecutionsOutput) + + DescribeAvailablePatches(*ssm.DescribeAvailablePatchesInput) (*ssm.DescribeAvailablePatchesOutput, error) + DescribeAvailablePatchesWithContext(aws.Context, *ssm.DescribeAvailablePatchesInput, ...request.Option) (*ssm.DescribeAvailablePatchesOutput, error) + DescribeAvailablePatchesRequest(*ssm.DescribeAvailablePatchesInput) (*request.Request, *ssm.DescribeAvailablePatchesOutput) + + DescribeDocument(*ssm.DescribeDocumentInput) (*ssm.DescribeDocumentOutput, error) + DescribeDocumentWithContext(aws.Context, *ssm.DescribeDocumentInput, ...request.Option) (*ssm.DescribeDocumentOutput, error) + DescribeDocumentRequest(*ssm.DescribeDocumentInput) (*request.Request, *ssm.DescribeDocumentOutput) + + DescribeDocumentPermission(*ssm.DescribeDocumentPermissionInput) (*ssm.DescribeDocumentPermissionOutput, error) + DescribeDocumentPermissionWithContext(aws.Context, *ssm.DescribeDocumentPermissionInput, ...request.Option) (*ssm.DescribeDocumentPermissionOutput, error) + DescribeDocumentPermissionRequest(*ssm.DescribeDocumentPermissionInput) (*request.Request, *ssm.DescribeDocumentPermissionOutput) + + DescribeEffectiveInstanceAssociations(*ssm.DescribeEffectiveInstanceAssociationsInput) (*ssm.DescribeEffectiveInstanceAssociationsOutput, error) + DescribeEffectiveInstanceAssociationsWithContext(aws.Context, *ssm.DescribeEffectiveInstanceAssociationsInput, ...request.Option) (*ssm.DescribeEffectiveInstanceAssociationsOutput, error) + DescribeEffectiveInstanceAssociationsRequest(*ssm.DescribeEffectiveInstanceAssociationsInput) (*request.Request, *ssm.DescribeEffectiveInstanceAssociationsOutput) + + DescribeEffectivePatchesForPatchBaseline(*ssm.DescribeEffectivePatchesForPatchBaselineInput) (*ssm.DescribeEffectivePatchesForPatchBaselineOutput, error) + DescribeEffectivePatchesForPatchBaselineWithContext(aws.Context, *ssm.DescribeEffectivePatchesForPatchBaselineInput, ...request.Option) (*ssm.DescribeEffectivePatchesForPatchBaselineOutput, error) + DescribeEffectivePatchesForPatchBaselineRequest(*ssm.DescribeEffectivePatchesForPatchBaselineInput) (*request.Request, *ssm.DescribeEffectivePatchesForPatchBaselineOutput) + + DescribeInstanceAssociationsStatus(*ssm.DescribeInstanceAssociationsStatusInput) (*ssm.DescribeInstanceAssociationsStatusOutput, error) + DescribeInstanceAssociationsStatusWithContext(aws.Context, *ssm.DescribeInstanceAssociationsStatusInput, ...request.Option) (*ssm.DescribeInstanceAssociationsStatusOutput, error) + DescribeInstanceAssociationsStatusRequest(*ssm.DescribeInstanceAssociationsStatusInput) (*request.Request, *ssm.DescribeInstanceAssociationsStatusOutput) + + DescribeInstanceInformation(*ssm.DescribeInstanceInformationInput) (*ssm.DescribeInstanceInformationOutput, error) + DescribeInstanceInformationWithContext(aws.Context, *ssm.DescribeInstanceInformationInput, ...request.Option) (*ssm.DescribeInstanceInformationOutput, error) + DescribeInstanceInformationRequest(*ssm.DescribeInstanceInformationInput) (*request.Request, *ssm.DescribeInstanceInformationOutput) + + DescribeInstanceInformationPages(*ssm.DescribeInstanceInformationInput, func(*ssm.DescribeInstanceInformationOutput, bool) bool) error + DescribeInstanceInformationPagesWithContext(aws.Context, *ssm.DescribeInstanceInformationInput, func(*ssm.DescribeInstanceInformationOutput, bool) bool, ...request.Option) error + + DescribeInstancePatchStates(*ssm.DescribeInstancePatchStatesInput) (*ssm.DescribeInstancePatchStatesOutput, error) + 
DescribeInstancePatchStatesWithContext(aws.Context, *ssm.DescribeInstancePatchStatesInput, ...request.Option) (*ssm.DescribeInstancePatchStatesOutput, error) + DescribeInstancePatchStatesRequest(*ssm.DescribeInstancePatchStatesInput) (*request.Request, *ssm.DescribeInstancePatchStatesOutput) + + DescribeInstancePatchStatesForPatchGroup(*ssm.DescribeInstancePatchStatesForPatchGroupInput) (*ssm.DescribeInstancePatchStatesForPatchGroupOutput, error) + DescribeInstancePatchStatesForPatchGroupWithContext(aws.Context, *ssm.DescribeInstancePatchStatesForPatchGroupInput, ...request.Option) (*ssm.DescribeInstancePatchStatesForPatchGroupOutput, error) + DescribeInstancePatchStatesForPatchGroupRequest(*ssm.DescribeInstancePatchStatesForPatchGroupInput) (*request.Request, *ssm.DescribeInstancePatchStatesForPatchGroupOutput) + + DescribeInstancePatches(*ssm.DescribeInstancePatchesInput) (*ssm.DescribeInstancePatchesOutput, error) + DescribeInstancePatchesWithContext(aws.Context, *ssm.DescribeInstancePatchesInput, ...request.Option) (*ssm.DescribeInstancePatchesOutput, error) + DescribeInstancePatchesRequest(*ssm.DescribeInstancePatchesInput) (*request.Request, *ssm.DescribeInstancePatchesOutput) + + DescribeInventoryDeletions(*ssm.DescribeInventoryDeletionsInput) (*ssm.DescribeInventoryDeletionsOutput, error) + DescribeInventoryDeletionsWithContext(aws.Context, *ssm.DescribeInventoryDeletionsInput, ...request.Option) (*ssm.DescribeInventoryDeletionsOutput, error) + DescribeInventoryDeletionsRequest(*ssm.DescribeInventoryDeletionsInput) (*request.Request, *ssm.DescribeInventoryDeletionsOutput) + + DescribeMaintenanceWindowExecutionTaskInvocations(*ssm.DescribeMaintenanceWindowExecutionTaskInvocationsInput) (*ssm.DescribeMaintenanceWindowExecutionTaskInvocationsOutput, error) + DescribeMaintenanceWindowExecutionTaskInvocationsWithContext(aws.Context, *ssm.DescribeMaintenanceWindowExecutionTaskInvocationsInput, ...request.Option) (*ssm.DescribeMaintenanceWindowExecutionTaskInvocationsOutput, error) + DescribeMaintenanceWindowExecutionTaskInvocationsRequest(*ssm.DescribeMaintenanceWindowExecutionTaskInvocationsInput) (*request.Request, *ssm.DescribeMaintenanceWindowExecutionTaskInvocationsOutput) + + DescribeMaintenanceWindowExecutionTasks(*ssm.DescribeMaintenanceWindowExecutionTasksInput) (*ssm.DescribeMaintenanceWindowExecutionTasksOutput, error) + DescribeMaintenanceWindowExecutionTasksWithContext(aws.Context, *ssm.DescribeMaintenanceWindowExecutionTasksInput, ...request.Option) (*ssm.DescribeMaintenanceWindowExecutionTasksOutput, error) + DescribeMaintenanceWindowExecutionTasksRequest(*ssm.DescribeMaintenanceWindowExecutionTasksInput) (*request.Request, *ssm.DescribeMaintenanceWindowExecutionTasksOutput) + + DescribeMaintenanceWindowExecutions(*ssm.DescribeMaintenanceWindowExecutionsInput) (*ssm.DescribeMaintenanceWindowExecutionsOutput, error) + DescribeMaintenanceWindowExecutionsWithContext(aws.Context, *ssm.DescribeMaintenanceWindowExecutionsInput, ...request.Option) (*ssm.DescribeMaintenanceWindowExecutionsOutput, error) + DescribeMaintenanceWindowExecutionsRequest(*ssm.DescribeMaintenanceWindowExecutionsInput) (*request.Request, *ssm.DescribeMaintenanceWindowExecutionsOutput) + + DescribeMaintenanceWindowTargets(*ssm.DescribeMaintenanceWindowTargetsInput) (*ssm.DescribeMaintenanceWindowTargetsOutput, error) + DescribeMaintenanceWindowTargetsWithContext(aws.Context, *ssm.DescribeMaintenanceWindowTargetsInput, ...request.Option) (*ssm.DescribeMaintenanceWindowTargetsOutput, error) + 
DescribeMaintenanceWindowTargetsRequest(*ssm.DescribeMaintenanceWindowTargetsInput) (*request.Request, *ssm.DescribeMaintenanceWindowTargetsOutput) + + DescribeMaintenanceWindowTasks(*ssm.DescribeMaintenanceWindowTasksInput) (*ssm.DescribeMaintenanceWindowTasksOutput, error) + DescribeMaintenanceWindowTasksWithContext(aws.Context, *ssm.DescribeMaintenanceWindowTasksInput, ...request.Option) (*ssm.DescribeMaintenanceWindowTasksOutput, error) + DescribeMaintenanceWindowTasksRequest(*ssm.DescribeMaintenanceWindowTasksInput) (*request.Request, *ssm.DescribeMaintenanceWindowTasksOutput) + + DescribeMaintenanceWindows(*ssm.DescribeMaintenanceWindowsInput) (*ssm.DescribeMaintenanceWindowsOutput, error) + DescribeMaintenanceWindowsWithContext(aws.Context, *ssm.DescribeMaintenanceWindowsInput, ...request.Option) (*ssm.DescribeMaintenanceWindowsOutput, error) + DescribeMaintenanceWindowsRequest(*ssm.DescribeMaintenanceWindowsInput) (*request.Request, *ssm.DescribeMaintenanceWindowsOutput) + + DescribeParameters(*ssm.DescribeParametersInput) (*ssm.DescribeParametersOutput, error) + DescribeParametersWithContext(aws.Context, *ssm.DescribeParametersInput, ...request.Option) (*ssm.DescribeParametersOutput, error) + DescribeParametersRequest(*ssm.DescribeParametersInput) (*request.Request, *ssm.DescribeParametersOutput) + + DescribeParametersPages(*ssm.DescribeParametersInput, func(*ssm.DescribeParametersOutput, bool) bool) error + DescribeParametersPagesWithContext(aws.Context, *ssm.DescribeParametersInput, func(*ssm.DescribeParametersOutput, bool) bool, ...request.Option) error + + DescribePatchBaselines(*ssm.DescribePatchBaselinesInput) (*ssm.DescribePatchBaselinesOutput, error) + DescribePatchBaselinesWithContext(aws.Context, *ssm.DescribePatchBaselinesInput, ...request.Option) (*ssm.DescribePatchBaselinesOutput, error) + DescribePatchBaselinesRequest(*ssm.DescribePatchBaselinesInput) (*request.Request, *ssm.DescribePatchBaselinesOutput) + + DescribePatchGroupState(*ssm.DescribePatchGroupStateInput) (*ssm.DescribePatchGroupStateOutput, error) + DescribePatchGroupStateWithContext(aws.Context, *ssm.DescribePatchGroupStateInput, ...request.Option) (*ssm.DescribePatchGroupStateOutput, error) + DescribePatchGroupStateRequest(*ssm.DescribePatchGroupStateInput) (*request.Request, *ssm.DescribePatchGroupStateOutput) + + DescribePatchGroups(*ssm.DescribePatchGroupsInput) (*ssm.DescribePatchGroupsOutput, error) + DescribePatchGroupsWithContext(aws.Context, *ssm.DescribePatchGroupsInput, ...request.Option) (*ssm.DescribePatchGroupsOutput, error) + DescribePatchGroupsRequest(*ssm.DescribePatchGroupsInput) (*request.Request, *ssm.DescribePatchGroupsOutput) + + GetAutomationExecution(*ssm.GetAutomationExecutionInput) (*ssm.GetAutomationExecutionOutput, error) + GetAutomationExecutionWithContext(aws.Context, *ssm.GetAutomationExecutionInput, ...request.Option) (*ssm.GetAutomationExecutionOutput, error) + GetAutomationExecutionRequest(*ssm.GetAutomationExecutionInput) (*request.Request, *ssm.GetAutomationExecutionOutput) + + GetCommandInvocation(*ssm.GetCommandInvocationInput) (*ssm.GetCommandInvocationOutput, error) + GetCommandInvocationWithContext(aws.Context, *ssm.GetCommandInvocationInput, ...request.Option) (*ssm.GetCommandInvocationOutput, error) + GetCommandInvocationRequest(*ssm.GetCommandInvocationInput) (*request.Request, *ssm.GetCommandInvocationOutput) + + GetDefaultPatchBaseline(*ssm.GetDefaultPatchBaselineInput) (*ssm.GetDefaultPatchBaselineOutput, error) + 
GetDefaultPatchBaselineWithContext(aws.Context, *ssm.GetDefaultPatchBaselineInput, ...request.Option) (*ssm.GetDefaultPatchBaselineOutput, error) + GetDefaultPatchBaselineRequest(*ssm.GetDefaultPatchBaselineInput) (*request.Request, *ssm.GetDefaultPatchBaselineOutput) + + GetDeployablePatchSnapshotForInstance(*ssm.GetDeployablePatchSnapshotForInstanceInput) (*ssm.GetDeployablePatchSnapshotForInstanceOutput, error) + GetDeployablePatchSnapshotForInstanceWithContext(aws.Context, *ssm.GetDeployablePatchSnapshotForInstanceInput, ...request.Option) (*ssm.GetDeployablePatchSnapshotForInstanceOutput, error) + GetDeployablePatchSnapshotForInstanceRequest(*ssm.GetDeployablePatchSnapshotForInstanceInput) (*request.Request, *ssm.GetDeployablePatchSnapshotForInstanceOutput) + + GetDocument(*ssm.GetDocumentInput) (*ssm.GetDocumentOutput, error) + GetDocumentWithContext(aws.Context, *ssm.GetDocumentInput, ...request.Option) (*ssm.GetDocumentOutput, error) + GetDocumentRequest(*ssm.GetDocumentInput) (*request.Request, *ssm.GetDocumentOutput) + + GetInventory(*ssm.GetInventoryInput) (*ssm.GetInventoryOutput, error) + GetInventoryWithContext(aws.Context, *ssm.GetInventoryInput, ...request.Option) (*ssm.GetInventoryOutput, error) + GetInventoryRequest(*ssm.GetInventoryInput) (*request.Request, *ssm.GetInventoryOutput) + + GetInventorySchema(*ssm.GetInventorySchemaInput) (*ssm.GetInventorySchemaOutput, error) + GetInventorySchemaWithContext(aws.Context, *ssm.GetInventorySchemaInput, ...request.Option) (*ssm.GetInventorySchemaOutput, error) + GetInventorySchemaRequest(*ssm.GetInventorySchemaInput) (*request.Request, *ssm.GetInventorySchemaOutput) + + GetMaintenanceWindow(*ssm.GetMaintenanceWindowInput) (*ssm.GetMaintenanceWindowOutput, error) + GetMaintenanceWindowWithContext(aws.Context, *ssm.GetMaintenanceWindowInput, ...request.Option) (*ssm.GetMaintenanceWindowOutput, error) + GetMaintenanceWindowRequest(*ssm.GetMaintenanceWindowInput) (*request.Request, *ssm.GetMaintenanceWindowOutput) + + GetMaintenanceWindowExecution(*ssm.GetMaintenanceWindowExecutionInput) (*ssm.GetMaintenanceWindowExecutionOutput, error) + GetMaintenanceWindowExecutionWithContext(aws.Context, *ssm.GetMaintenanceWindowExecutionInput, ...request.Option) (*ssm.GetMaintenanceWindowExecutionOutput, error) + GetMaintenanceWindowExecutionRequest(*ssm.GetMaintenanceWindowExecutionInput) (*request.Request, *ssm.GetMaintenanceWindowExecutionOutput) + + GetMaintenanceWindowExecutionTask(*ssm.GetMaintenanceWindowExecutionTaskInput) (*ssm.GetMaintenanceWindowExecutionTaskOutput, error) + GetMaintenanceWindowExecutionTaskWithContext(aws.Context, *ssm.GetMaintenanceWindowExecutionTaskInput, ...request.Option) (*ssm.GetMaintenanceWindowExecutionTaskOutput, error) + GetMaintenanceWindowExecutionTaskRequest(*ssm.GetMaintenanceWindowExecutionTaskInput) (*request.Request, *ssm.GetMaintenanceWindowExecutionTaskOutput) + + GetMaintenanceWindowExecutionTaskInvocation(*ssm.GetMaintenanceWindowExecutionTaskInvocationInput) (*ssm.GetMaintenanceWindowExecutionTaskInvocationOutput, error) + GetMaintenanceWindowExecutionTaskInvocationWithContext(aws.Context, *ssm.GetMaintenanceWindowExecutionTaskInvocationInput, ...request.Option) (*ssm.GetMaintenanceWindowExecutionTaskInvocationOutput, error) + GetMaintenanceWindowExecutionTaskInvocationRequest(*ssm.GetMaintenanceWindowExecutionTaskInvocationInput) (*request.Request, *ssm.GetMaintenanceWindowExecutionTaskInvocationOutput) + + GetMaintenanceWindowTask(*ssm.GetMaintenanceWindowTaskInput) 
(*ssm.GetMaintenanceWindowTaskOutput, error) + GetMaintenanceWindowTaskWithContext(aws.Context, *ssm.GetMaintenanceWindowTaskInput, ...request.Option) (*ssm.GetMaintenanceWindowTaskOutput, error) + GetMaintenanceWindowTaskRequest(*ssm.GetMaintenanceWindowTaskInput) (*request.Request, *ssm.GetMaintenanceWindowTaskOutput) + + GetParameter(*ssm.GetParameterInput) (*ssm.GetParameterOutput, error) + GetParameterWithContext(aws.Context, *ssm.GetParameterInput, ...request.Option) (*ssm.GetParameterOutput, error) + GetParameterRequest(*ssm.GetParameterInput) (*request.Request, *ssm.GetParameterOutput) + + GetParameterHistory(*ssm.GetParameterHistoryInput) (*ssm.GetParameterHistoryOutput, error) + GetParameterHistoryWithContext(aws.Context, *ssm.GetParameterHistoryInput, ...request.Option) (*ssm.GetParameterHistoryOutput, error) + GetParameterHistoryRequest(*ssm.GetParameterHistoryInput) (*request.Request, *ssm.GetParameterHistoryOutput) + + GetParameterHistoryPages(*ssm.GetParameterHistoryInput, func(*ssm.GetParameterHistoryOutput, bool) bool) error + GetParameterHistoryPagesWithContext(aws.Context, *ssm.GetParameterHistoryInput, func(*ssm.GetParameterHistoryOutput, bool) bool, ...request.Option) error + + GetParameters(*ssm.GetParametersInput) (*ssm.GetParametersOutput, error) + GetParametersWithContext(aws.Context, *ssm.GetParametersInput, ...request.Option) (*ssm.GetParametersOutput, error) + GetParametersRequest(*ssm.GetParametersInput) (*request.Request, *ssm.GetParametersOutput) + + GetParametersByPath(*ssm.GetParametersByPathInput) (*ssm.GetParametersByPathOutput, error) + GetParametersByPathWithContext(aws.Context, *ssm.GetParametersByPathInput, ...request.Option) (*ssm.GetParametersByPathOutput, error) + GetParametersByPathRequest(*ssm.GetParametersByPathInput) (*request.Request, *ssm.GetParametersByPathOutput) + + GetParametersByPathPages(*ssm.GetParametersByPathInput, func(*ssm.GetParametersByPathOutput, bool) bool) error + GetParametersByPathPagesWithContext(aws.Context, *ssm.GetParametersByPathInput, func(*ssm.GetParametersByPathOutput, bool) bool, ...request.Option) error + + GetPatchBaseline(*ssm.GetPatchBaselineInput) (*ssm.GetPatchBaselineOutput, error) + GetPatchBaselineWithContext(aws.Context, *ssm.GetPatchBaselineInput, ...request.Option) (*ssm.GetPatchBaselineOutput, error) + GetPatchBaselineRequest(*ssm.GetPatchBaselineInput) (*request.Request, *ssm.GetPatchBaselineOutput) + + GetPatchBaselineForPatchGroup(*ssm.GetPatchBaselineForPatchGroupInput) (*ssm.GetPatchBaselineForPatchGroupOutput, error) + GetPatchBaselineForPatchGroupWithContext(aws.Context, *ssm.GetPatchBaselineForPatchGroupInput, ...request.Option) (*ssm.GetPatchBaselineForPatchGroupOutput, error) + GetPatchBaselineForPatchGroupRequest(*ssm.GetPatchBaselineForPatchGroupInput) (*request.Request, *ssm.GetPatchBaselineForPatchGroupOutput) + + ListAssociationVersions(*ssm.ListAssociationVersionsInput) (*ssm.ListAssociationVersionsOutput, error) + ListAssociationVersionsWithContext(aws.Context, *ssm.ListAssociationVersionsInput, ...request.Option) (*ssm.ListAssociationVersionsOutput, error) + ListAssociationVersionsRequest(*ssm.ListAssociationVersionsInput) (*request.Request, *ssm.ListAssociationVersionsOutput) + + ListAssociations(*ssm.ListAssociationsInput) (*ssm.ListAssociationsOutput, error) + ListAssociationsWithContext(aws.Context, *ssm.ListAssociationsInput, ...request.Option) (*ssm.ListAssociationsOutput, error) + ListAssociationsRequest(*ssm.ListAssociationsInput) (*request.Request, *ssm.ListAssociationsOutput) 
+ + ListAssociationsPages(*ssm.ListAssociationsInput, func(*ssm.ListAssociationsOutput, bool) bool) error + ListAssociationsPagesWithContext(aws.Context, *ssm.ListAssociationsInput, func(*ssm.ListAssociationsOutput, bool) bool, ...request.Option) error + + ListCommandInvocations(*ssm.ListCommandInvocationsInput) (*ssm.ListCommandInvocationsOutput, error) + ListCommandInvocationsWithContext(aws.Context, *ssm.ListCommandInvocationsInput, ...request.Option) (*ssm.ListCommandInvocationsOutput, error) + ListCommandInvocationsRequest(*ssm.ListCommandInvocationsInput) (*request.Request, *ssm.ListCommandInvocationsOutput) + + ListCommandInvocationsPages(*ssm.ListCommandInvocationsInput, func(*ssm.ListCommandInvocationsOutput, bool) bool) error + ListCommandInvocationsPagesWithContext(aws.Context, *ssm.ListCommandInvocationsInput, func(*ssm.ListCommandInvocationsOutput, bool) bool, ...request.Option) error + + ListCommands(*ssm.ListCommandsInput) (*ssm.ListCommandsOutput, error) + ListCommandsWithContext(aws.Context, *ssm.ListCommandsInput, ...request.Option) (*ssm.ListCommandsOutput, error) + ListCommandsRequest(*ssm.ListCommandsInput) (*request.Request, *ssm.ListCommandsOutput) + + ListCommandsPages(*ssm.ListCommandsInput, func(*ssm.ListCommandsOutput, bool) bool) error + ListCommandsPagesWithContext(aws.Context, *ssm.ListCommandsInput, func(*ssm.ListCommandsOutput, bool) bool, ...request.Option) error + + ListComplianceItems(*ssm.ListComplianceItemsInput) (*ssm.ListComplianceItemsOutput, error) + ListComplianceItemsWithContext(aws.Context, *ssm.ListComplianceItemsInput, ...request.Option) (*ssm.ListComplianceItemsOutput, error) + ListComplianceItemsRequest(*ssm.ListComplianceItemsInput) (*request.Request, *ssm.ListComplianceItemsOutput) + + ListComplianceSummaries(*ssm.ListComplianceSummariesInput) (*ssm.ListComplianceSummariesOutput, error) + ListComplianceSummariesWithContext(aws.Context, *ssm.ListComplianceSummariesInput, ...request.Option) (*ssm.ListComplianceSummariesOutput, error) + ListComplianceSummariesRequest(*ssm.ListComplianceSummariesInput) (*request.Request, *ssm.ListComplianceSummariesOutput) + + ListDocumentVersions(*ssm.ListDocumentVersionsInput) (*ssm.ListDocumentVersionsOutput, error) + ListDocumentVersionsWithContext(aws.Context, *ssm.ListDocumentVersionsInput, ...request.Option) (*ssm.ListDocumentVersionsOutput, error) + ListDocumentVersionsRequest(*ssm.ListDocumentVersionsInput) (*request.Request, *ssm.ListDocumentVersionsOutput) + + ListDocuments(*ssm.ListDocumentsInput) (*ssm.ListDocumentsOutput, error) + ListDocumentsWithContext(aws.Context, *ssm.ListDocumentsInput, ...request.Option) (*ssm.ListDocumentsOutput, error) + ListDocumentsRequest(*ssm.ListDocumentsInput) (*request.Request, *ssm.ListDocumentsOutput) + + ListDocumentsPages(*ssm.ListDocumentsInput, func(*ssm.ListDocumentsOutput, bool) bool) error + ListDocumentsPagesWithContext(aws.Context, *ssm.ListDocumentsInput, func(*ssm.ListDocumentsOutput, bool) bool, ...request.Option) error + + ListInventoryEntries(*ssm.ListInventoryEntriesInput) (*ssm.ListInventoryEntriesOutput, error) + ListInventoryEntriesWithContext(aws.Context, *ssm.ListInventoryEntriesInput, ...request.Option) (*ssm.ListInventoryEntriesOutput, error) + ListInventoryEntriesRequest(*ssm.ListInventoryEntriesInput) (*request.Request, *ssm.ListInventoryEntriesOutput) + + ListResourceComplianceSummaries(*ssm.ListResourceComplianceSummariesInput) (*ssm.ListResourceComplianceSummariesOutput, error) + ListResourceComplianceSummariesWithContext(aws.Context, 
*ssm.ListResourceComplianceSummariesInput, ...request.Option) (*ssm.ListResourceComplianceSummariesOutput, error) + ListResourceComplianceSummariesRequest(*ssm.ListResourceComplianceSummariesInput) (*request.Request, *ssm.ListResourceComplianceSummariesOutput) + + ListResourceDataSync(*ssm.ListResourceDataSyncInput) (*ssm.ListResourceDataSyncOutput, error) + ListResourceDataSyncWithContext(aws.Context, *ssm.ListResourceDataSyncInput, ...request.Option) (*ssm.ListResourceDataSyncOutput, error) + ListResourceDataSyncRequest(*ssm.ListResourceDataSyncInput) (*request.Request, *ssm.ListResourceDataSyncOutput) + + ListTagsForResource(*ssm.ListTagsForResourceInput) (*ssm.ListTagsForResourceOutput, error) + ListTagsForResourceWithContext(aws.Context, *ssm.ListTagsForResourceInput, ...request.Option) (*ssm.ListTagsForResourceOutput, error) + ListTagsForResourceRequest(*ssm.ListTagsForResourceInput) (*request.Request, *ssm.ListTagsForResourceOutput) + + ModifyDocumentPermission(*ssm.ModifyDocumentPermissionInput) (*ssm.ModifyDocumentPermissionOutput, error) + ModifyDocumentPermissionWithContext(aws.Context, *ssm.ModifyDocumentPermissionInput, ...request.Option) (*ssm.ModifyDocumentPermissionOutput, error) + ModifyDocumentPermissionRequest(*ssm.ModifyDocumentPermissionInput) (*request.Request, *ssm.ModifyDocumentPermissionOutput) + + PutComplianceItems(*ssm.PutComplianceItemsInput) (*ssm.PutComplianceItemsOutput, error) + PutComplianceItemsWithContext(aws.Context, *ssm.PutComplianceItemsInput, ...request.Option) (*ssm.PutComplianceItemsOutput, error) + PutComplianceItemsRequest(*ssm.PutComplianceItemsInput) (*request.Request, *ssm.PutComplianceItemsOutput) + + PutInventory(*ssm.PutInventoryInput) (*ssm.PutInventoryOutput, error) + PutInventoryWithContext(aws.Context, *ssm.PutInventoryInput, ...request.Option) (*ssm.PutInventoryOutput, error) + PutInventoryRequest(*ssm.PutInventoryInput) (*request.Request, *ssm.PutInventoryOutput) + + PutParameter(*ssm.PutParameterInput) (*ssm.PutParameterOutput, error) + PutParameterWithContext(aws.Context, *ssm.PutParameterInput, ...request.Option) (*ssm.PutParameterOutput, error) + PutParameterRequest(*ssm.PutParameterInput) (*request.Request, *ssm.PutParameterOutput) + + RegisterDefaultPatchBaseline(*ssm.RegisterDefaultPatchBaselineInput) (*ssm.RegisterDefaultPatchBaselineOutput, error) + RegisterDefaultPatchBaselineWithContext(aws.Context, *ssm.RegisterDefaultPatchBaselineInput, ...request.Option) (*ssm.RegisterDefaultPatchBaselineOutput, error) + RegisterDefaultPatchBaselineRequest(*ssm.RegisterDefaultPatchBaselineInput) (*request.Request, *ssm.RegisterDefaultPatchBaselineOutput) + + RegisterPatchBaselineForPatchGroup(*ssm.RegisterPatchBaselineForPatchGroupInput) (*ssm.RegisterPatchBaselineForPatchGroupOutput, error) + RegisterPatchBaselineForPatchGroupWithContext(aws.Context, *ssm.RegisterPatchBaselineForPatchGroupInput, ...request.Option) (*ssm.RegisterPatchBaselineForPatchGroupOutput, error) + RegisterPatchBaselineForPatchGroupRequest(*ssm.RegisterPatchBaselineForPatchGroupInput) (*request.Request, *ssm.RegisterPatchBaselineForPatchGroupOutput) + + RegisterTargetWithMaintenanceWindow(*ssm.RegisterTargetWithMaintenanceWindowInput) (*ssm.RegisterTargetWithMaintenanceWindowOutput, error) + RegisterTargetWithMaintenanceWindowWithContext(aws.Context, *ssm.RegisterTargetWithMaintenanceWindowInput, ...request.Option) (*ssm.RegisterTargetWithMaintenanceWindowOutput, error) + RegisterTargetWithMaintenanceWindowRequest(*ssm.RegisterTargetWithMaintenanceWindowInput) 
(*request.Request, *ssm.RegisterTargetWithMaintenanceWindowOutput) + + RegisterTaskWithMaintenanceWindow(*ssm.RegisterTaskWithMaintenanceWindowInput) (*ssm.RegisterTaskWithMaintenanceWindowOutput, error) + RegisterTaskWithMaintenanceWindowWithContext(aws.Context, *ssm.RegisterTaskWithMaintenanceWindowInput, ...request.Option) (*ssm.RegisterTaskWithMaintenanceWindowOutput, error) + RegisterTaskWithMaintenanceWindowRequest(*ssm.RegisterTaskWithMaintenanceWindowInput) (*request.Request, *ssm.RegisterTaskWithMaintenanceWindowOutput) + + RemoveTagsFromResource(*ssm.RemoveTagsFromResourceInput) (*ssm.RemoveTagsFromResourceOutput, error) + RemoveTagsFromResourceWithContext(aws.Context, *ssm.RemoveTagsFromResourceInput, ...request.Option) (*ssm.RemoveTagsFromResourceOutput, error) + RemoveTagsFromResourceRequest(*ssm.RemoveTagsFromResourceInput) (*request.Request, *ssm.RemoveTagsFromResourceOutput) + + SendAutomationSignal(*ssm.SendAutomationSignalInput) (*ssm.SendAutomationSignalOutput, error) + SendAutomationSignalWithContext(aws.Context, *ssm.SendAutomationSignalInput, ...request.Option) (*ssm.SendAutomationSignalOutput, error) + SendAutomationSignalRequest(*ssm.SendAutomationSignalInput) (*request.Request, *ssm.SendAutomationSignalOutput) + + SendCommand(*ssm.SendCommandInput) (*ssm.SendCommandOutput, error) + SendCommandWithContext(aws.Context, *ssm.SendCommandInput, ...request.Option) (*ssm.SendCommandOutput, error) + SendCommandRequest(*ssm.SendCommandInput) (*request.Request, *ssm.SendCommandOutput) + + StartAutomationExecution(*ssm.StartAutomationExecutionInput) (*ssm.StartAutomationExecutionOutput, error) + StartAutomationExecutionWithContext(aws.Context, *ssm.StartAutomationExecutionInput, ...request.Option) (*ssm.StartAutomationExecutionOutput, error) + StartAutomationExecutionRequest(*ssm.StartAutomationExecutionInput) (*request.Request, *ssm.StartAutomationExecutionOutput) + + StopAutomationExecution(*ssm.StopAutomationExecutionInput) (*ssm.StopAutomationExecutionOutput, error) + StopAutomationExecutionWithContext(aws.Context, *ssm.StopAutomationExecutionInput, ...request.Option) (*ssm.StopAutomationExecutionOutput, error) + StopAutomationExecutionRequest(*ssm.StopAutomationExecutionInput) (*request.Request, *ssm.StopAutomationExecutionOutput) + + UpdateAssociation(*ssm.UpdateAssociationInput) (*ssm.UpdateAssociationOutput, error) + UpdateAssociationWithContext(aws.Context, *ssm.UpdateAssociationInput, ...request.Option) (*ssm.UpdateAssociationOutput, error) + UpdateAssociationRequest(*ssm.UpdateAssociationInput) (*request.Request, *ssm.UpdateAssociationOutput) + + UpdateAssociationStatus(*ssm.UpdateAssociationStatusInput) (*ssm.UpdateAssociationStatusOutput, error) + UpdateAssociationStatusWithContext(aws.Context, *ssm.UpdateAssociationStatusInput, ...request.Option) (*ssm.UpdateAssociationStatusOutput, error) + UpdateAssociationStatusRequest(*ssm.UpdateAssociationStatusInput) (*request.Request, *ssm.UpdateAssociationStatusOutput) + + UpdateDocument(*ssm.UpdateDocumentInput) (*ssm.UpdateDocumentOutput, error) + UpdateDocumentWithContext(aws.Context, *ssm.UpdateDocumentInput, ...request.Option) (*ssm.UpdateDocumentOutput, error) + UpdateDocumentRequest(*ssm.UpdateDocumentInput) (*request.Request, *ssm.UpdateDocumentOutput) + + UpdateDocumentDefaultVersion(*ssm.UpdateDocumentDefaultVersionInput) (*ssm.UpdateDocumentDefaultVersionOutput, error) + UpdateDocumentDefaultVersionWithContext(aws.Context, *ssm.UpdateDocumentDefaultVersionInput, ...request.Option) 
(*ssm.UpdateDocumentDefaultVersionOutput, error) + UpdateDocumentDefaultVersionRequest(*ssm.UpdateDocumentDefaultVersionInput) (*request.Request, *ssm.UpdateDocumentDefaultVersionOutput) + + UpdateMaintenanceWindow(*ssm.UpdateMaintenanceWindowInput) (*ssm.UpdateMaintenanceWindowOutput, error) + UpdateMaintenanceWindowWithContext(aws.Context, *ssm.UpdateMaintenanceWindowInput, ...request.Option) (*ssm.UpdateMaintenanceWindowOutput, error) + UpdateMaintenanceWindowRequest(*ssm.UpdateMaintenanceWindowInput) (*request.Request, *ssm.UpdateMaintenanceWindowOutput) + + UpdateMaintenanceWindowTarget(*ssm.UpdateMaintenanceWindowTargetInput) (*ssm.UpdateMaintenanceWindowTargetOutput, error) + UpdateMaintenanceWindowTargetWithContext(aws.Context, *ssm.UpdateMaintenanceWindowTargetInput, ...request.Option) (*ssm.UpdateMaintenanceWindowTargetOutput, error) + UpdateMaintenanceWindowTargetRequest(*ssm.UpdateMaintenanceWindowTargetInput) (*request.Request, *ssm.UpdateMaintenanceWindowTargetOutput) + + UpdateMaintenanceWindowTask(*ssm.UpdateMaintenanceWindowTaskInput) (*ssm.UpdateMaintenanceWindowTaskOutput, error) + UpdateMaintenanceWindowTaskWithContext(aws.Context, *ssm.UpdateMaintenanceWindowTaskInput, ...request.Option) (*ssm.UpdateMaintenanceWindowTaskOutput, error) + UpdateMaintenanceWindowTaskRequest(*ssm.UpdateMaintenanceWindowTaskInput) (*request.Request, *ssm.UpdateMaintenanceWindowTaskOutput) + + UpdateManagedInstanceRole(*ssm.UpdateManagedInstanceRoleInput) (*ssm.UpdateManagedInstanceRoleOutput, error) + UpdateManagedInstanceRoleWithContext(aws.Context, *ssm.UpdateManagedInstanceRoleInput, ...request.Option) (*ssm.UpdateManagedInstanceRoleOutput, error) + UpdateManagedInstanceRoleRequest(*ssm.UpdateManagedInstanceRoleInput) (*request.Request, *ssm.UpdateManagedInstanceRoleOutput) + + UpdatePatchBaseline(*ssm.UpdatePatchBaselineInput) (*ssm.UpdatePatchBaselineOutput, error) + UpdatePatchBaselineWithContext(aws.Context, *ssm.UpdatePatchBaselineInput, ...request.Option) (*ssm.UpdatePatchBaselineOutput, error) + UpdatePatchBaselineRequest(*ssm.UpdatePatchBaselineInput) (*request.Request, *ssm.UpdatePatchBaselineOutput) +} + +var _ SSMAPI = (*ssm.SSM)(nil) diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/api.go b/vendor/github.com/aws/aws-sdk-go/service/sts/api.go new file mode 100644 index 00000000..b46da12c --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/api.go @@ -0,0 +1,2398 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package sts + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opAssumeRole = "AssumeRole" + +// AssumeRoleRequest generates a "aws/request.Request" representing the +// client's request for the AssumeRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssumeRole for more information on using the AssumeRole +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssumeRoleRequest method. 
+// req, resp := client.AssumeRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole +func (c *STS) AssumeRoleRequest(input *AssumeRoleInput) (req *request.Request, output *AssumeRoleOutput) { + op := &request.Operation{ + Name: opAssumeRole, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssumeRoleInput{} + } + + output = &AssumeRoleOutput{} + req = c.newRequest(op, input, output) + return +} + +// AssumeRole API operation for AWS Security Token Service. +// +// Returns a set of temporary security credentials (consisting of an access +// key ID, a secret access key, and a security token) that you can use to access +// AWS resources that you might not normally have access to. Typically, you +// use AssumeRole for cross-account access or federation. For a comparison of +// AssumeRole with the other APIs that produce temporary credentials, see Requesting +// Temporary Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html) +// and Comparing the AWS STS APIs (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison) +// in the IAM User Guide. +// +// Important: You cannot call AssumeRole by using AWS root account credentials; +// access is denied. You must use credentials for an IAM user or an IAM role +// to call AssumeRole. +// +// For cross-account access, imagine that you own multiple accounts and need +// to access resources in each account. You could create long-term credentials +// in each account to access those resources. However, managing all those credentials +// and remembering which one can access which account can be time consuming. +// Instead, you can create one set of long-term credentials in one account and +// then use temporary security credentials to access all the other accounts +// by assuming roles in those accounts. For more information about roles, see +// IAM Roles (Delegation and Federation) (http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html) +// in the IAM User Guide. +// +// For federation, you can, for example, grant single sign-on access to the +// AWS Management Console. If you already have an identity and authentication +// system in your corporate network, you don't have to recreate user identities +// in AWS in order to grant those user identities access to AWS. Instead, after +// a user has been authenticated, you call AssumeRole (and specify the role +// with the appropriate permissions) to get temporary security credentials for +// that user. With those temporary security credentials, you construct a sign-in +// URL that users can use to access the console. For more information, see Common +// Scenarios for Temporary Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html#sts-introduction) +// in the IAM User Guide. +// +// By default, the temporary security credentials created by AssumeRole last +// for one hour. However, you can use the optional DurationSeconds parameter +// to specify the duration of your session. You can provide a value from 900 +// seconds (15 minutes) up to the maximum session duration setting for the role. +// This setting can have a value from 1 hour to 12 hours. 
To learn how to view +// the maximum value for your role, see View the Maximum Session Duration Setting +// for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) +// in the IAM User Guide. The maximum session duration limit applies when you +// use the AssumeRole* API operations or the assume-role* CLI operations but +// does not apply when you use those operations to create a console URL. For +// more information, see Using IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) +// in the IAM User Guide. +// +// The temporary security credentials created by AssumeRole can be used to make +// API calls to any AWS service with the following exception: you cannot call +// the STS service's GetFederationToken or GetSessionToken APIs. +// +// Optionally, you can pass an IAM access policy to this operation. If you choose +// not to pass a policy, the temporary security credentials that are returned +// by the operation have the permissions that are defined in the access policy +// of the role that is being assumed. If you pass a policy to this operation, +// the temporary security credentials that are returned by the operation have +// the permissions that are allowed by both the access policy of the role that +// is being assumed, and the policy that you pass. This gives you a way to further +// restrict the permissions for the resulting temporary security credentials. +// You cannot use the passed policy to grant permissions that are in excess +// of those allowed by the access policy of the role that is being assumed. +// For more information, see Permissions for AssumeRole, AssumeRoleWithSAML, +// and AssumeRoleWithWebIdentity (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html) +// in the IAM User Guide. +// +// To assume a role, your AWS account must be trusted by the role. The trust +// relationship is defined in the role's trust policy when the role is created. +// That trust policy states which accounts are allowed to delegate access to +// this account's role. +// +// The user who wants to access the role must also have permissions delegated +// from the role's administrator. If the user is in a different account than +// the role, then the user's administrator must attach a policy that allows +// the user to call AssumeRole on the ARN of the role in the other account. +// If the user is in the same account as the role, then you can either attach +// a policy to the user (identical to the previous different account user), +// or you can add the user as a principal directly in the role's trust policy. +// In this case, the trust policy acts as the only resource-based policy in +// IAM, and users in the same account as the role do not need explicit permission +// to assume the role. For more information about trust policies and resource-based +// policies, see IAM Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) +// in the IAM User Guide. +// +// Using MFA with AssumeRole +// +// You can optionally include multi-factor authentication (MFA) information +// when you call AssumeRole. This is useful for cross-account scenarios in which +// you want to make sure that the user who is assuming the role has been authenticated +// using an AWS MFA device. 
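+//
+// A minimal sketch of assuming a role with MFA from Go; the role ARN, session
+// name, MFA serial number, and token code below are placeholder values:
+//
+//    sess := session.New()
+//    svc := sts.New(sess)
+//
+//    out, err := svc.AssumeRole(&sts.AssumeRoleInput{
+//        RoleArn:         aws.String("arn:aws:iam::123456789012:role/example-role"),   // placeholder
+//        RoleSessionName: aws.String("example-session"),                               // placeholder
+//        DurationSeconds: aws.Int64(3600),
+//        SerialNumber:    aws.String("arn:aws:iam::123456789012:mfa/example-user"),    // placeholder
+//        TokenCode:       aws.String("123456"),                                        // placeholder TOTP
+//    })
+//    if err != nil {
+//        // handle error
+//    }
+//    _ = out.Credentials // temporary AccessKeyId, SecretAccessKey, SessionToken, Expiration
+//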
In that scenario, the trust policy of the role being +// assumed includes a condition that tests for MFA authentication; if the caller +// does not include valid MFA information, the request to assume the role is +// denied. The condition in a trust policy that tests for MFA authentication +// might look like the following example. +// +// "Condition": {"Bool": {"aws:MultiFactorAuthPresent": true}} +// +// For more information, see Configuring MFA-Protected API Access (http://docs.aws.amazon.com/IAM/latest/UserGuide/MFAProtectedAPI.html) +// in the IAM User Guide guide. +// +// To use MFA with AssumeRole, you pass values for the SerialNumber and TokenCode +// parameters. The SerialNumber value identifies the user's hardware or virtual +// MFA device. The TokenCode is the time-based one-time password (TOTP) that +// the MFA devices produces. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Security Token Service's +// API operation AssumeRole for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodePackedPolicyTooLargeException "PackedPolicyTooLarge" +// The request was rejected because the policy document was too large. The error +// message describes how big the policy document is, in packed form, as a percentage +// of what the API allows. +// +// * ErrCodeRegionDisabledException "RegionDisabledException" +// STS is not activated in the requested region for the account that is being +// asked to generate credentials. The account administrator must use the IAM +// console to activate STS in that region. For more information, see Activating +// and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the IAM User Guide. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole +func (c *STS) AssumeRole(input *AssumeRoleInput) (*AssumeRoleOutput, error) { + req, out := c.AssumeRoleRequest(input) + return out, req.Send() +} + +// AssumeRoleWithContext is the same as AssumeRole with the addition of +// the ability to pass a context and additional request options. +// +// See AssumeRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *STS) AssumeRoleWithContext(ctx aws.Context, input *AssumeRoleInput, opts ...request.Option) (*AssumeRoleOutput, error) { + req, out := c.AssumeRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAssumeRoleWithSAML = "AssumeRoleWithSAML" + +// AssumeRoleWithSAMLRequest generates a "aws/request.Request" representing the +// client's request for the AssumeRoleWithSAML operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See AssumeRoleWithSAML for more information on using the AssumeRoleWithSAML +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssumeRoleWithSAMLRequest method. +// req, resp := client.AssumeRoleWithSAMLRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAML +func (c *STS) AssumeRoleWithSAMLRequest(input *AssumeRoleWithSAMLInput) (req *request.Request, output *AssumeRoleWithSAMLOutput) { + op := &request.Operation{ + Name: opAssumeRoleWithSAML, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssumeRoleWithSAMLInput{} + } + + output = &AssumeRoleWithSAMLOutput{} + req = c.newRequest(op, input, output) + return +} + +// AssumeRoleWithSAML API operation for AWS Security Token Service. +// +// Returns a set of temporary security credentials for users who have been authenticated +// via a SAML authentication response. This operation provides a mechanism for +// tying an enterprise identity store or directory to role-based AWS access +// without user-specific credentials or configuration. For a comparison of AssumeRoleWithSAML +// with the other APIs that produce temporary credentials, see Requesting Temporary +// Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html) +// and Comparing the AWS STS APIs (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison) +// in the IAM User Guide. +// +// The temporary security credentials returned by this operation consist of +// an access key ID, a secret access key, and a security token. Applications +// can use these temporary security credentials to sign calls to AWS services. +// +// By default, the temporary security credentials created by AssumeRoleWithSAML +// last for one hour. However, you can use the optional DurationSeconds parameter +// to specify the duration of your session. Your role session lasts for the +// duration that you specify, or until the time specified in the SAML authentication +// response's SessionNotOnOrAfter value, whichever is shorter. You can provide +// a DurationSeconds value from 900 seconds (15 minutes) up to the maximum session +// duration setting for the role. This setting can have a value from 1 hour +// to 12 hours. To learn how to view the maximum value for your role, see View +// the Maximum Session Duration Setting for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) +// in the IAM User Guide. The maximum session duration limit applies when you +// use the AssumeRole* API operations or the assume-role* CLI operations but +// does not apply when you use those operations to create a console URL. For +// more information, see Using IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) +// in the IAM User Guide. +// +// The temporary security credentials created by AssumeRoleWithSAML can be used +// to make API calls to any AWS service with the following exception: you cannot +// call the STS service's GetFederationToken or GetSessionToken APIs. 
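+//
+// A minimal sketch of exchanging a SAML assertion for temporary credentials;
+// the ARNs are placeholders and samlAssertion is assumed to hold the
+// base64-encoded assertion returned by your IdP:
+//
+//    svc := sts.New(session.New())
+//
+//    out, err := svc.AssumeRoleWithSAML(&sts.AssumeRoleWithSAMLInput{
+//        RoleArn:         aws.String("arn:aws:iam::123456789012:role/example-saml-role"),    // placeholder
+//        PrincipalArn:    aws.String("arn:aws:iam::123456789012:saml-provider/example-idp"), // placeholder
+//        SAMLAssertion:   aws.String(samlAssertion),
+//        DurationSeconds: aws.Int64(3600),
+//    })
+//    if err != nil {
+//        // handle error
+//    }
+//    _ = out.Credentials
+//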
+// +// Optionally, you can pass an IAM access policy to this operation. If you choose +// not to pass a policy, the temporary security credentials that are returned +// by the operation have the permissions that are defined in the access policy +// of the role that is being assumed. If you pass a policy to this operation, +// the temporary security credentials that are returned by the operation have +// the permissions that are allowed by the intersection of both the access policy +// of the role that is being assumed, and the policy that you pass. This means +// that both policies must grant the permission for the action to be allowed. +// This gives you a way to further restrict the permissions for the resulting +// temporary security credentials. You cannot use the passed policy to grant +// permissions that are in excess of those allowed by the access policy of the +// role that is being assumed. For more information, see Permissions for AssumeRole, +// AssumeRoleWithSAML, and AssumeRoleWithWebIdentity (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html) +// in the IAM User Guide. +// +// Before your application can call AssumeRoleWithSAML, you must configure your +// SAML identity provider (IdP) to issue the claims required by AWS. Additionally, +// you must use AWS Identity and Access Management (IAM) to create a SAML provider +// entity in your AWS account that represents your identity provider, and create +// an IAM role that specifies this SAML provider in its trust policy. +// +// Calling AssumeRoleWithSAML does not require the use of AWS security credentials. +// The identity of the caller is validated by using keys in the metadata document +// that is uploaded for the SAML provider entity for your identity provider. +// +// Calling AssumeRoleWithSAML can result in an entry in your AWS CloudTrail +// logs. The entry includes the value in the NameID element of the SAML assertion. +// We recommend that you use a NameIDType that is not associated with any personally +// identifiable information (PII). For example, you could instead use the Persistent +// Identifier (urn:oasis:names:tc:SAML:2.0:nameid-format:persistent). +// +// For more information, see the following resources: +// +// * About SAML 2.0-based Federation (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html) +// in the IAM User Guide. +// +// * Creating SAML Identity Providers (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html) +// in the IAM User Guide. +// +// * Configuring a Relying Party and Claims (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_relying-party.html) +// in the IAM User Guide. +// +// * Creating a Role for SAML 2.0 Federation (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_saml.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Security Token Service's +// API operation AssumeRoleWithSAML for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. 
+// +// * ErrCodePackedPolicyTooLargeException "PackedPolicyTooLarge" +// The request was rejected because the policy document was too large. The error +// message describes how big the policy document is, in packed form, as a percentage +// of what the API allows. +// +// * ErrCodeIDPRejectedClaimException "IDPRejectedClaim" +// The identity provider (IdP) reported that authentication failed. This might +// be because the claim is invalid. +// +// If this error is returned for the AssumeRoleWithWebIdentity operation, it +// can also mean that the claim has expired or has been explicitly revoked. +// +// * ErrCodeInvalidIdentityTokenException "InvalidIdentityToken" +// The web identity token that was passed could not be validated by AWS. Get +// a new identity token from the identity provider and then retry the request. +// +// * ErrCodeExpiredTokenException "ExpiredTokenException" +// The web identity token that was passed is expired or is not valid. Get a +// new identity token from the identity provider and then retry the request. +// +// * ErrCodeRegionDisabledException "RegionDisabledException" +// STS is not activated in the requested region for the account that is being +// asked to generate credentials. The account administrator must use the IAM +// console to activate STS in that region. For more information, see Activating +// and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the IAM User Guide. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAML +func (c *STS) AssumeRoleWithSAML(input *AssumeRoleWithSAMLInput) (*AssumeRoleWithSAMLOutput, error) { + req, out := c.AssumeRoleWithSAMLRequest(input) + return out, req.Send() +} + +// AssumeRoleWithSAMLWithContext is the same as AssumeRoleWithSAML with the addition of +// the ability to pass a context and additional request options. +// +// See AssumeRoleWithSAML for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *STS) AssumeRoleWithSAMLWithContext(ctx aws.Context, input *AssumeRoleWithSAMLInput, opts ...request.Option) (*AssumeRoleWithSAMLOutput, error) { + req, out := c.AssumeRoleWithSAMLRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAssumeRoleWithWebIdentity = "AssumeRoleWithWebIdentity" + +// AssumeRoleWithWebIdentityRequest generates a "aws/request.Request" representing the +// client's request for the AssumeRoleWithWebIdentity operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssumeRoleWithWebIdentity for more information on using the AssumeRoleWithWebIdentity +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssumeRoleWithWebIdentityRequest method. 
+// req, resp := client.AssumeRoleWithWebIdentityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentity +func (c *STS) AssumeRoleWithWebIdentityRequest(input *AssumeRoleWithWebIdentityInput) (req *request.Request, output *AssumeRoleWithWebIdentityOutput) { + op := &request.Operation{ + Name: opAssumeRoleWithWebIdentity, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssumeRoleWithWebIdentityInput{} + } + + output = &AssumeRoleWithWebIdentityOutput{} + req = c.newRequest(op, input, output) + return +} + +// AssumeRoleWithWebIdentity API operation for AWS Security Token Service. +// +// Returns a set of temporary security credentials for users who have been authenticated +// in a mobile or web application with a web identity provider, such as Amazon +// Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible +// identity provider. +// +// For mobile applications, we recommend that you use Amazon Cognito. You can +// use Amazon Cognito with the AWS SDK for iOS (http://aws.amazon.com/sdkforios/) +// and the AWS SDK for Android (http://aws.amazon.com/sdkforandroid/) to uniquely +// identify a user and supply the user with a consistent identity throughout +// the lifetime of an application. +// +// To learn more about Amazon Cognito, see Amazon Cognito Overview (http://docs.aws.amazon.com/mobile/sdkforandroid/developerguide/cognito-auth.html#d0e840) +// in the AWS SDK for Android Developer Guide guide and Amazon Cognito Overview +// (http://docs.aws.amazon.com/mobile/sdkforios/developerguide/cognito-auth.html#d0e664) +// in the AWS SDK for iOS Developer Guide. +// +// Calling AssumeRoleWithWebIdentity does not require the use of AWS security +// credentials. Therefore, you can distribute an application (for example, on +// mobile devices) that requests temporary security credentials without including +// long-term AWS credentials in the application, and without deploying server-based +// proxy services that use long-term AWS credentials. Instead, the identity +// of the caller is validated by using a token from the web identity provider. +// For a comparison of AssumeRoleWithWebIdentity with the other APIs that produce +// temporary credentials, see Requesting Temporary Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html) +// and Comparing the AWS STS APIs (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison) +// in the IAM User Guide. +// +// The temporary security credentials returned by this API consist of an access +// key ID, a secret access key, and a security token. Applications can use these +// temporary security credentials to sign calls to AWS service APIs. +// +// By default, the temporary security credentials created by AssumeRoleWithWebIdentity +// last for one hour. However, you can use the optional DurationSeconds parameter +// to specify the duration of your session. You can provide a value from 900 +// seconds (15 minutes) up to the maximum session duration setting for the role. +// This setting can have a value from 1 hour to 12 hours. 
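+//
+// A minimal sketch of exchanging a web identity (OIDC) token for temporary
+// credentials; the role ARN and session name are placeholders and idToken is
+// assumed to hold the token issued by the identity provider:
+//
+//    svc := sts.New(session.New())
+//
+//    out, err := svc.AssumeRoleWithWebIdentity(&sts.AssumeRoleWithWebIdentityInput{
+//        RoleArn:          aws.String("arn:aws:iam::123456789012:role/example-web-role"), // placeholder
+//        RoleSessionName:  aws.String("example-session"),                                 // placeholder
+//        WebIdentityToken: aws.String(idToken),
+//        DurationSeconds:  aws.Int64(3600),
+//    })
+//    if err != nil {
+//        // handle error
+//    }
+//    _ = out.Credentials
+//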
To learn how to view +// the maximum value for your role, see View the Maximum Session Duration Setting +// for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) +// in the IAM User Guide. The maximum session duration limit applies when you +// use the AssumeRole* API operations or the assume-role* CLI operations but +// does not apply when you use those operations to create a console URL. For +// more information, see Using IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) +// in the IAM User Guide. +// +// The temporary security credentials created by AssumeRoleWithWebIdentity can +// be used to make API calls to any AWS service with the following exception: +// you cannot call the STS service's GetFederationToken or GetSessionToken APIs. +// +// Optionally, you can pass an IAM access policy to this operation. If you choose +// not to pass a policy, the temporary security credentials that are returned +// by the operation have the permissions that are defined in the access policy +// of the role that is being assumed. If you pass a policy to this operation, +// the temporary security credentials that are returned by the operation have +// the permissions that are allowed by both the access policy of the role that +// is being assumed, and the policy that you pass. This gives you a way to further +// restrict the permissions for the resulting temporary security credentials. +// You cannot use the passed policy to grant permissions that are in excess +// of those allowed by the access policy of the role that is being assumed. +// For more information, see Permissions for AssumeRole, AssumeRoleWithSAML, +// and AssumeRoleWithWebIdentity (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html) +// in the IAM User Guide. +// +// Before your application can call AssumeRoleWithWebIdentity, you must have +// an identity token from a supported identity provider and create a role that +// the application can assume. The role that your application assumes must trust +// the identity provider that is associated with the identity token. In other +// words, the identity provider must be specified in the role's trust policy. +// +// Calling AssumeRoleWithWebIdentity can result in an entry in your AWS CloudTrail +// logs. The entry includes the Subject (http://openid.net/specs/openid-connect-core-1_0.html#Claims) +// of the provided Web Identity Token. We recommend that you avoid using any +// personally identifiable information (PII) in this field. For example, you +// could instead use a GUID or a pairwise identifier, as suggested in the OIDC +// specification (http://openid.net/specs/openid-connect-core-1_0.html#SubjectIDTypes). +// +// For more information about how to use web identity federation and the AssumeRoleWithWebIdentity +// API, see the following resources: +// +// * Using Web Identity Federation APIs for Mobile Apps (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_manual.html) +// and Federation Through a Web-based Identity Provider (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithwebidentity). +// +// +// * Web Identity Federation Playground (https://web-identity-federation-playground.s3.amazonaws.com/index.html). 
+// This interactive website lets you walk through the process of authenticating +// via Login with Amazon, Facebook, or Google, getting temporary security +// credentials, and then using those credentials to make a request to AWS. +// +// +// * AWS SDK for iOS (http://aws.amazon.com/sdkforios/) and AWS SDK for Android +// (http://aws.amazon.com/sdkforandroid/). These toolkits contain sample +// apps that show how to invoke the identity providers, and then how to use +// the information from these providers to get and use temporary security +// credentials. +// +// * Web Identity Federation with Mobile Applications (http://aws.amazon.com/articles/web-identity-federation-with-mobile-applications). +// This article discusses web identity federation and shows an example of +// how to use web identity federation to get access to content in Amazon +// S3. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Security Token Service's +// API operation AssumeRoleWithWebIdentity for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodePackedPolicyTooLargeException "PackedPolicyTooLarge" +// The request was rejected because the policy document was too large. The error +// message describes how big the policy document is, in packed form, as a percentage +// of what the API allows. +// +// * ErrCodeIDPRejectedClaimException "IDPRejectedClaim" +// The identity provider (IdP) reported that authentication failed. This might +// be because the claim is invalid. +// +// If this error is returned for the AssumeRoleWithWebIdentity operation, it +// can also mean that the claim has expired or has been explicitly revoked. +// +// * ErrCodeIDPCommunicationErrorException "IDPCommunicationError" +// The request could not be fulfilled because the non-AWS identity provider +// (IDP) that was asked to verify the incoming identity token could not be reached. +// This is often a transient error caused by network conditions. Retry the request +// a limited number of times so that you don't exceed the request rate. If the +// error persists, the non-AWS identity provider might be down or not responding. +// +// * ErrCodeInvalidIdentityTokenException "InvalidIdentityToken" +// The web identity token that was passed could not be validated by AWS. Get +// a new identity token from the identity provider and then retry the request. +// +// * ErrCodeExpiredTokenException "ExpiredTokenException" +// The web identity token that was passed is expired or is not valid. Get a +// new identity token from the identity provider and then retry the request. +// +// * ErrCodeRegionDisabledException "RegionDisabledException" +// STS is not activated in the requested region for the account that is being +// asked to generate credentials. The account administrator must use the IAM +// console to activate STS in that region. For more information, see Activating +// and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the IAM User Guide. 
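+//
+// // Illustrative sketch (not part of the generated documentation): handling
+// // AssumeRoleWithWebIdentity failures with awserr type assertions. Assumes an
+// // STS client `svc` and a populated *AssumeRoleWithWebIdentityInput `params`.
+// out, err := svc.AssumeRoleWithWebIdentity(params)
+// if err != nil {
+// if aerr, ok := err.(awserr.Error); ok {
+// switch aerr.Code() {
+// case sts.ErrCodeExpiredTokenException, sts.ErrCodeInvalidIdentityTokenException:
+// // Obtain a fresh token from the identity provider and retry.
+// default:
+// fmt.Println(aerr.Code(), aerr.Message())
+// }
+// }
+// } else {
+// fmt.Println(out.Credentials)
+// }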
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentity +func (c *STS) AssumeRoleWithWebIdentity(input *AssumeRoleWithWebIdentityInput) (*AssumeRoleWithWebIdentityOutput, error) { + req, out := c.AssumeRoleWithWebIdentityRequest(input) + return out, req.Send() +} + +// AssumeRoleWithWebIdentityWithContext is the same as AssumeRoleWithWebIdentity with the addition of +// the ability to pass a context and additional request options. +// +// See AssumeRoleWithWebIdentity for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *STS) AssumeRoleWithWebIdentityWithContext(ctx aws.Context, input *AssumeRoleWithWebIdentityInput, opts ...request.Option) (*AssumeRoleWithWebIdentityOutput, error) { + req, out := c.AssumeRoleWithWebIdentityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDecodeAuthorizationMessage = "DecodeAuthorizationMessage" + +// DecodeAuthorizationMessageRequest generates a "aws/request.Request" representing the +// client's request for the DecodeAuthorizationMessage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DecodeAuthorizationMessage for more information on using the DecodeAuthorizationMessage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DecodeAuthorizationMessageRequest method. +// req, resp := client.DecodeAuthorizationMessageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessage +func (c *STS) DecodeAuthorizationMessageRequest(input *DecodeAuthorizationMessageInput) (req *request.Request, output *DecodeAuthorizationMessageOutput) { + op := &request.Operation{ + Name: opDecodeAuthorizationMessage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DecodeAuthorizationMessageInput{} + } + + output = &DecodeAuthorizationMessageOutput{} + req = c.newRequest(op, input, output) + return +} + +// DecodeAuthorizationMessage API operation for AWS Security Token Service. +// +// Decodes additional information about the authorization status of a request +// from an encoded message returned in response to an AWS request. +// +// For example, if a user is not authorized to perform an action that he or +// she has requested, the request returns a Client.UnauthorizedOperation response +// (an HTTP 403 response). Some AWS actions additionally return an encoded message +// that can provide details about this authorization failure. +// +// Only certain AWS actions return an encoded authorization message. The documentation +// for an individual action indicates whether that action returns an encoded +// message in addition to returning an HTTP code. 
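+//
+// // Illustrative sketch (not part of the generated documentation): decoding an
+// // encoded authorization failure message. Assumes an STS client `svc` and a
+// // string `encodedMsg` captured from a Client.UnauthorizedOperation response.
+// out, err := svc.DecodeAuthorizationMessage(&sts.DecodeAuthorizationMessageInput{
+// EncodedMessage: aws.String(encodedMsg),
+// })
+// if err == nil {
+// fmt.Println(aws.StringValue(out.DecodedMessage))
+// }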
+// +// The message is encoded because the details of the authorization status can +// constitute privileged information that the user who requested the action +// should not see. To decode an authorization status message, a user must be +// granted permissions via an IAM policy to request the DecodeAuthorizationMessage +// (sts:DecodeAuthorizationMessage) action. +// +// The decoded message includes the following type of information: +// +// * Whether the request was denied due to an explicit deny or due to the +// absence of an explicit allow. For more information, see Determining Whether +// a Request is Allowed or Denied (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow) +// in the IAM User Guide. +// +// * The principal who made the request. +// +// * The requested action. +// +// * The requested resource. +// +// * The values of condition keys in the context of the user's request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Security Token Service's +// API operation DecodeAuthorizationMessage for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidAuthorizationMessageException "InvalidAuthorizationMessageException" +// The error returned if the message passed to DecodeAuthorizationMessage was +// invalid. This can happen if the token contains invalid characters, such as +// linebreaks. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessage +func (c *STS) DecodeAuthorizationMessage(input *DecodeAuthorizationMessageInput) (*DecodeAuthorizationMessageOutput, error) { + req, out := c.DecodeAuthorizationMessageRequest(input) + return out, req.Send() +} + +// DecodeAuthorizationMessageWithContext is the same as DecodeAuthorizationMessage with the addition of +// the ability to pass a context and additional request options. +// +// See DecodeAuthorizationMessage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *STS) DecodeAuthorizationMessageWithContext(ctx aws.Context, input *DecodeAuthorizationMessageInput, opts ...request.Option) (*DecodeAuthorizationMessageOutput, error) { + req, out := c.DecodeAuthorizationMessageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCallerIdentity = "GetCallerIdentity" + +// GetCallerIdentityRequest generates a "aws/request.Request" representing the +// client's request for the GetCallerIdentity operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCallerIdentity for more information on using the GetCallerIdentity +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the GetCallerIdentityRequest method. +// req, resp := client.GetCallerIdentityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentity +func (c *STS) GetCallerIdentityRequest(input *GetCallerIdentityInput) (req *request.Request, output *GetCallerIdentityOutput) { + op := &request.Operation{ + Name: opGetCallerIdentity, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetCallerIdentityInput{} + } + + output = &GetCallerIdentityOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCallerIdentity API operation for AWS Security Token Service. +// +// Returns details about the IAM identity whose credentials are used to call +// the API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Security Token Service's +// API operation GetCallerIdentity for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentity +func (c *STS) GetCallerIdentity(input *GetCallerIdentityInput) (*GetCallerIdentityOutput, error) { + req, out := c.GetCallerIdentityRequest(input) + return out, req.Send() +} + +// GetCallerIdentityWithContext is the same as GetCallerIdentity with the addition of +// the ability to pass a context and additional request options. +// +// See GetCallerIdentity for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *STS) GetCallerIdentityWithContext(ctx aws.Context, input *GetCallerIdentityInput, opts ...request.Option) (*GetCallerIdentityOutput, error) { + req, out := c.GetCallerIdentityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetFederationToken = "GetFederationToken" + +// GetFederationTokenRequest generates a "aws/request.Request" representing the +// client's request for the GetFederationToken operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetFederationToken for more information on using the GetFederationToken +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetFederationTokenRequest method. 
+// req, resp := client.GetFederationTokenRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationToken
+func (c *STS) GetFederationTokenRequest(input *GetFederationTokenInput) (req *request.Request, output *GetFederationTokenOutput) {
+ op := &request.Operation{
+ Name: opGetFederationToken,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &GetFederationTokenInput{}
+ }
+
+ output = &GetFederationTokenOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// GetFederationToken API operation for AWS Security Token Service.
+//
+// Returns a set of temporary security credentials (consisting of an access
+// key ID, a secret access key, and a security token) for a federated user.
+// A typical use is in a proxy application that gets temporary security credentials
+// on behalf of distributed applications inside a corporate network. Because
+// you must call the GetFederationToken action using the long-term security
+// credentials of an IAM user, this call is appropriate in contexts where those
+// credentials can be safely stored, usually in a server-based application.
+// For a comparison of GetFederationToken with the other APIs that produce temporary
+// credentials, see Requesting Temporary Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html)
+// and Comparing the AWS STS APIs (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison)
+// in the IAM User Guide.
+//
+// If you are creating a mobile-based or browser-based app that can authenticate
+// users using a web identity provider like Login with Amazon, Facebook, Google,
+// or an OpenID Connect-compatible identity provider, we recommend that you
+// use Amazon Cognito (http://aws.amazon.com/cognito/) or AssumeRoleWithWebIdentity.
+// For more information, see Federation Through a Web-based Identity Provider
+// (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithwebidentity).
+//
+// The GetFederationToken action must be called by using the long-term AWS security
+// credentials of an IAM user. You can also call GetFederationToken using the
+// security credentials of an AWS root account, but we do not recommend it.
+// Instead, we recommend that you create an IAM user for the purpose of the
+// proxy application and then attach a policy to the IAM user that limits federated
+// users to only the actions and resources that they need access to. For more
+// information, see IAM Best Practices (http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
+// in the IAM User Guide.
+//
+// The temporary security credentials that are obtained by using the long-term
+// credentials of an IAM user are valid for the specified duration, from 900
+// seconds (15 minutes) up to a maximum of 129600 seconds (36 hours). The default
+// is 43200 seconds (12 hours). Temporary credentials that are obtained by using
+// AWS root account credentials have a maximum duration of 3600 seconds (1 hour).
+//
+// The temporary security credentials created by GetFederationToken can be used
+// to make API calls to any AWS service with the following exceptions:
+//
+// * You cannot use these credentials to call any IAM APIs.
+//
+// * You cannot call any STS APIs except GetCallerIdentity. 
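+//
+// // Illustrative sketch (not part of the generated documentation): a proxy
+// // application vending scoped-down federated credentials. Assumes an STS client
+// // `svc` and a JSON policy document in the string `policyJSON`.
+// out, err := svc.GetFederationToken(&sts.GetFederationTokenInput{
+// Name: aws.String("app-user"),
+// Policy: aws.String(policyJSON),
+// DurationSeconds: aws.Int64(3600),
+// })
+// if err == nil {
+// fmt.Println(out.Credentials)
+// }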
+// +// Permissions +// +// The permissions for the temporary security credentials returned by GetFederationToken +// are determined by a combination of the following: +// +// * The policy or policies that are attached to the IAM user whose credentials +// are used to call GetFederationToken. +// +// * The policy that is passed as a parameter in the call. +// +// The passed policy is attached to the temporary security credentials that +// result from the GetFederationToken API call--that is, to the federated user. +// When the federated user makes an AWS request, AWS evaluates the policy attached +// to the federated user in combination with the policy or policies attached +// to the IAM user whose credentials were used to call GetFederationToken. AWS +// allows the federated user's request only when both the federated user and +// the IAM user are explicitly allowed to perform the requested action. The +// passed policy cannot grant more permissions than those that are defined in +// the IAM user policy. +// +// A typical use case is that the permissions of the IAM user whose credentials +// are used to call GetFederationToken are designed to allow access to all the +// actions and resources that any federated user will need. Then, for individual +// users, you pass a policy to the operation that scopes down the permissions +// to a level that's appropriate to that individual user, using a policy that +// allows only a subset of permissions that are granted to the IAM user. +// +// If you do not pass a policy, the resulting temporary security credentials +// have no effective permissions. The only exception is when the temporary security +// credentials are used to access a resource that has a resource-based policy +// that specifically allows the federated user to access the resource. +// +// For more information about how permissions work, see Permissions for GetFederationToken +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_getfederationtoken.html). +// For information about using GetFederationToken to create temporary security +// credentials, see GetFederationToken—Federation Through a Custom Identity +// Broker (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_getfederationtoken). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Security Token Service's +// API operation GetFederationToken for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodePackedPolicyTooLargeException "PackedPolicyTooLarge" +// The request was rejected because the policy document was too large. The error +// message describes how big the policy document is, in packed form, as a percentage +// of what the API allows. +// +// * ErrCodeRegionDisabledException "RegionDisabledException" +// STS is not activated in the requested region for the account that is being +// asked to generate credentials. The account administrator must use the IAM +// console to activate STS in that region. 
For more information, see Activating +// and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the IAM User Guide. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationToken +func (c *STS) GetFederationToken(input *GetFederationTokenInput) (*GetFederationTokenOutput, error) { + req, out := c.GetFederationTokenRequest(input) + return out, req.Send() +} + +// GetFederationTokenWithContext is the same as GetFederationToken with the addition of +// the ability to pass a context and additional request options. +// +// See GetFederationToken for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *STS) GetFederationTokenWithContext(ctx aws.Context, input *GetFederationTokenInput, opts ...request.Option) (*GetFederationTokenOutput, error) { + req, out := c.GetFederationTokenRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSessionToken = "GetSessionToken" + +// GetSessionTokenRequest generates a "aws/request.Request" representing the +// client's request for the GetSessionToken operation. The "output" return +// value will be populated with the request's response once the request completes +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSessionToken for more information on using the GetSessionToken +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSessionTokenRequest method. +// req, resp := client.GetSessionTokenRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionToken +func (c *STS) GetSessionTokenRequest(input *GetSessionTokenInput) (req *request.Request, output *GetSessionTokenOutput) { + op := &request.Operation{ + Name: opGetSessionToken, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetSessionTokenInput{} + } + + output = &GetSessionTokenOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSessionToken API operation for AWS Security Token Service. +// +// Returns a set of temporary credentials for an AWS account or IAM user. The +// credentials consist of an access key ID, a secret access key, and a security +// token. Typically, you use GetSessionToken if you want to use MFA to protect +// programmatic calls to specific AWS APIs like Amazon EC2 StopInstances. MFA-enabled +// IAM users would need to call GetSessionToken and submit an MFA code that +// is associated with their MFA device. Using the temporary security credentials +// that are returned from the call, IAM users can then make programmatic calls +// to APIs that require MFA authentication. If you do not supply a correct MFA +// code, then the API returns an access denied error. 
For a comparison of GetSessionToken
+// with the other APIs that produce temporary credentials, see Requesting Temporary
+// Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html)
+// and Comparing the AWS STS APIs (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison)
+// in the IAM User Guide.
+//
+// The GetSessionToken action must be called by using the long-term AWS security
+// credentials of the AWS account or an IAM user. Credentials that are created
+// by IAM users are valid for the duration that you specify, from 900 seconds
+// (15 minutes) up to a maximum of 129600 seconds (36 hours), with a default
+// of 43200 seconds (12 hours); credentials that are created by using account
+// credentials can range from 900 seconds (15 minutes) up to a maximum of 3600
+// seconds (1 hour), with a default of 1 hour.
+//
+// The temporary security credentials created by GetSessionToken can be used
+// to make API calls to any AWS service with the following exceptions:
+//
+// * You cannot call any IAM APIs unless MFA authentication information is
+// included in the request.
+//
+// * You cannot call any STS API except AssumeRole or GetCallerIdentity.
+//
+// We recommend that you do not call GetSessionToken with root account credentials.
+// Instead, follow our best practices (http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#create-iam-users)
+// by creating one or more IAM users, giving them the necessary permissions,
+// and using IAM users for everyday interaction with AWS.
+//
+// The permissions associated with the temporary security credentials returned
+// by GetSessionToken are based on the permissions associated with the account
+// or IAM user whose credentials are used to call the action. If GetSessionToken
+// is called using root account credentials, the temporary credentials have
+// root account permissions. Similarly, if GetSessionToken is called using the
+// credentials of an IAM user, the temporary credentials have the same permissions
+// as the IAM user.
+//
+// For more information about using GetSessionToken to create temporary credentials,
+// go to Temporary Credentials for Users in Untrusted Environments (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_getsessiontoken)
+// in the IAM User Guide.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Security Token Service's
+// API operation GetSessionToken for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeRegionDisabledException "RegionDisabledException"
+// STS is not activated in the requested region for the account that is being
+// asked to generate credentials. The account administrator must use the IAM
+// console to activate STS in that region. For more information, see Activating
+// and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html)
+// in the IAM User Guide. 
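+//
+// // Illustrative sketch (not part of the generated documentation): requesting
+// // MFA-protected session credentials. Assumes an STS client `svc`; the MFA
+// // serial number and token code values are placeholders.
+// out, err := svc.GetSessionToken(&sts.GetSessionTokenInput{
+// DurationSeconds: aws.Int64(3600),
+// SerialNumber: aws.String("arn:aws:iam::123456789012:mfa/user"),
+// TokenCode: aws.String("123456"),
+// })
+// if err == nil {
+// fmt.Println(out.Credentials)
+// }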
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionToken
+func (c *STS) GetSessionToken(input *GetSessionTokenInput) (*GetSessionTokenOutput, error) {
+ req, out := c.GetSessionTokenRequest(input)
+ return out, req.Send()
+}
+
+// GetSessionTokenWithContext is the same as GetSessionToken with the addition of
+// the ability to pass a context and additional request options.
+//
+// See GetSessionToken for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *STS) GetSessionTokenWithContext(ctx aws.Context, input *GetSessionTokenInput, opts ...request.Option) (*GetSessionTokenOutput, error) {
+ req, out := c.GetSessionTokenRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type AssumeRoleInput struct {
+ _ struct{} `type:"structure"`
+
+ // The duration, in seconds, of the role session. The value can range from 900
+ // seconds (15 minutes) up to the maximum session duration setting for the role.
+ // This setting can have a value from 1 hour to 12 hours. If you specify a value
+ // higher than this setting, the operation fails. For example, if you specify
+ // a session duration of 12 hours, but your administrator set the maximum session
+ // duration to 6 hours, your operation fails. To learn how to view the maximum
+ // value for your role, see View the Maximum Session Duration Setting for a
+ // Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session)
+ // in the IAM User Guide.
+ //
+ // By default, the value is set to 3600 seconds.
+ //
+ // The DurationSeconds parameter is separate from the duration of a console
+ // session that you might request using the returned credentials. The request
+ // to the federation endpoint for a console sign-in token takes a SessionDuration
+ // parameter that specifies the maximum length of the console session. For more
+ // information, see Creating a URL that Enables Federated Users to Access the
+ // AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html)
+ // in the IAM User Guide.
+ DurationSeconds *int64 `min:"900" type:"integer"`
+
+ // A unique identifier that is used by third parties when assuming roles in
+ // their customers' accounts. For each role that the third party can assume,
+ // they should instruct their customers to ensure the role's trust policy checks
+ // for the external ID that the third party generated. Each time the third party
+ // assumes the role, they should pass the customer's external ID. The external
+ // ID is useful in order to help third parties bind a role to the customer who
+ // created it. For more information about the external ID, see How to Use an
+ // External ID When Granting Access to Your AWS Resources to a Third Party (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html)
+ // in the IAM User Guide.
+ //
+ // The regex used to validate this parameter is a string of characters consisting
+ // of upper- and lower-case alphanumeric characters with no spaces. 
You can + // also include underscores or any of the following characters: =,.@:/- + ExternalId *string `min:"2" type:"string"` + + // An IAM policy in JSON format. + // + // This parameter is optional. If you pass a policy, the temporary security + // credentials that are returned by the operation have the permissions that + // are allowed by both (the intersection of) the access policy of the role that + // is being assumed, and the policy that you pass. This gives you a way to further + // restrict the permissions for the resulting temporary security credentials. + // You cannot use the passed policy to grant permissions that are in excess + // of those allowed by the access policy of the role that is being assumed. + // For more information, see Permissions for AssumeRole, AssumeRoleWithSAML, + // and AssumeRoleWithWebIdentity (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html) + // in the IAM User Guide. + // + // The format for this parameter, as described by its regex pattern, is a string + // of characters up to 2048 characters in length. The characters can be any + // ASCII character from the space character to the end of the valid character + // list (\u0020-\u00FF). It can also include the tab (\u0009), linefeed (\u000A), + // and carriage return (\u000D) characters. + // + // The policy plain text must be 2048 bytes or shorter. However, an internal + // conversion compresses it into a packed binary format with a separate limit. + // The PackedPolicySize response element indicates by percentage how close to + // the upper size limit the policy is, with 100% equaling the maximum allowed + // size. + Policy *string `min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the role to assume. + // + // RoleArn is a required field + RoleArn *string `min:"20" type:"string" required:"true"` + + // An identifier for the assumed role session. + // + // Use the role session name to uniquely identify a session when the same role + // is assumed by different principals or for different reasons. In cross-account + // scenarios, the role session name is visible to, and can be logged by the + // account that owns the role. The role session name is also used in the ARN + // of the assumed role principal. This means that subsequent cross-account API + // requests using the temporary security credentials will expose the role session + // name to the external account in their CloudTrail logs. + // + // The regex used to validate this parameter is a string of characters consisting + // of upper- and lower-case alphanumeric characters with no spaces. You can + // also include underscores or any of the following characters: =,.@- + // + // RoleSessionName is a required field + RoleSessionName *string `min:"2" type:"string" required:"true"` + + // The identification number of the MFA device that is associated with the user + // who is making the AssumeRole call. Specify this value if the trust policy + // of the role being assumed includes a condition that requires MFA authentication. + // The value is either the serial number for a hardware device (such as GAHT12345678) + // or an Amazon Resource Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user). + // + // The regex used to validate this parameter is a string of characters consisting + // of upper- and lower-case alphanumeric characters with no spaces. 
You can + // also include underscores or any of the following characters: =,.@- + SerialNumber *string `min:"9" type:"string"` + + // The value provided by the MFA device, if the trust policy of the role being + // assumed requires MFA (that is, if the policy includes a condition that tests + // for MFA). If the role being assumed requires MFA and if the TokenCode value + // is missing or expired, the AssumeRole call returns an "access denied" error. + // + // The format for this parameter, as described by its regex pattern, is a sequence + // of six numeric digits. + TokenCode *string `min:"6" type:"string"` +} + +// String returns the string representation +func (s AssumeRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssumeRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssumeRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssumeRoleInput"} + if s.DurationSeconds != nil && *s.DurationSeconds < 900 { + invalidParams.Add(request.NewErrParamMinValue("DurationSeconds", 900)) + } + if s.ExternalId != nil && len(*s.ExternalId) < 2 { + invalidParams.Add(request.NewErrParamMinLen("ExternalId", 2)) + } + if s.Policy != nil && len(*s.Policy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Policy", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.RoleSessionName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleSessionName")) + } + if s.RoleSessionName != nil && len(*s.RoleSessionName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RoleSessionName", 2)) + } + if s.SerialNumber != nil && len(*s.SerialNumber) < 9 { + invalidParams.Add(request.NewErrParamMinLen("SerialNumber", 9)) + } + if s.TokenCode != nil && len(*s.TokenCode) < 6 { + invalidParams.Add(request.NewErrParamMinLen("TokenCode", 6)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDurationSeconds sets the DurationSeconds field's value. +func (s *AssumeRoleInput) SetDurationSeconds(v int64) *AssumeRoleInput { + s.DurationSeconds = &v + return s +} + +// SetExternalId sets the ExternalId field's value. +func (s *AssumeRoleInput) SetExternalId(v string) *AssumeRoleInput { + s.ExternalId = &v + return s +} + +// SetPolicy sets the Policy field's value. +func (s *AssumeRoleInput) SetPolicy(v string) *AssumeRoleInput { + s.Policy = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *AssumeRoleInput) SetRoleArn(v string) *AssumeRoleInput { + s.RoleArn = &v + return s +} + +// SetRoleSessionName sets the RoleSessionName field's value. +func (s *AssumeRoleInput) SetRoleSessionName(v string) *AssumeRoleInput { + s.RoleSessionName = &v + return s +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *AssumeRoleInput) SetSerialNumber(v string) *AssumeRoleInput { + s.SerialNumber = &v + return s +} + +// SetTokenCode sets the TokenCode field's value. +func (s *AssumeRoleInput) SetTokenCode(v string) *AssumeRoleInput { + s.TokenCode = &v + return s +} + +// Contains the response to a successful AssumeRole request, including temporary +// AWS credentials that can be used to make AWS requests. 
+type AssumeRoleOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) and the assumed role ID, which are identifiers + // that you can use to refer to the resulting temporary security credentials. + // For example, you can reference these credentials as a principal in a resource-based + // policy by using the ARN or assumed role ID. The ARN and ID include the RoleSessionName + // that you specified when you called AssumeRole. + AssumedRoleUser *AssumedRoleUser `type:"structure"` + + // The temporary security credentials, which include an access key ID, a secret + // access key, and a security (or session) token. + // + // Note: The size of the security token that STS APIs return is not fixed. We + // strongly recommend that you make no assumptions about the maximum size. As + // of this writing, the typical size is less than 4096 bytes, but that can vary. + // Also, future updates to AWS might require larger sizes. + Credentials *Credentials `type:"structure"` + + // A percentage value that indicates the size of the policy in packed form. + // The service rejects any policy with a packed size greater than 100 percent, + // which means the policy exceeded the allowed space. + PackedPolicySize *int64 `type:"integer"` +} + +// String returns the string representation +func (s AssumeRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssumeRoleOutput) GoString() string { + return s.String() +} + +// SetAssumedRoleUser sets the AssumedRoleUser field's value. +func (s *AssumeRoleOutput) SetAssumedRoleUser(v *AssumedRoleUser) *AssumeRoleOutput { + s.AssumedRoleUser = v + return s +} + +// SetCredentials sets the Credentials field's value. +func (s *AssumeRoleOutput) SetCredentials(v *Credentials) *AssumeRoleOutput { + s.Credentials = v + return s +} + +// SetPackedPolicySize sets the PackedPolicySize field's value. +func (s *AssumeRoleOutput) SetPackedPolicySize(v int64) *AssumeRoleOutput { + s.PackedPolicySize = &v + return s +} + +type AssumeRoleWithSAMLInput struct { + _ struct{} `type:"structure"` + + // The duration, in seconds, of the role session. Your role session lasts for + // the duration that you specify for the DurationSeconds parameter, or until + // the time specified in the SAML authentication response's SessionNotOnOrAfter + // value, whichever is shorter. You can provide a DurationSeconds value from + // 900 seconds (15 minutes) up to the maximum session duration setting for the + // role. This setting can have a value from 1 hour to 12 hours. If you specify + // a value higher than this setting, the operation fails. For example, if you + // specify a session duration of 12 hours, but your administrator set the maximum + // session duration to 6 hours, your operation fails. To learn how to view the + // maximum value for your role, see View the Maximum Session Duration Setting + // for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) + // in the IAM User Guide. + // + // By default, the value is set to 3600 seconds. + // + // The DurationSeconds parameter is separate from the duration of a console + // session that you might request using the returned credentials. The request + // to the federation endpoint for a console sign-in token takes a SessionDuration + // parameter that specifies the maximum length of the console session. 
For more
+ // information, see Creating a URL that Enables Federated Users to Access the
+ // AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html)
+ // in the IAM User Guide.
+ DurationSeconds *int64 `min:"900" type:"integer"`
+
+ // An IAM policy in JSON format.
+ //
+ // The policy parameter is optional. If you pass a policy, the temporary security
+ // credentials that are returned by the operation have the permissions that
+ // are allowed by both the access policy of the role that is being assumed,
+ // and the policy that you pass. This gives you a way to further restrict the
+ // permissions for the resulting temporary security credentials. You cannot
+ // use the passed policy to grant permissions that are in excess of those allowed
+ // by the access policy of the role that is being assumed. For more information,
+ // see Permissions for AssumeRole, AssumeRoleWithSAML, and AssumeRoleWithWebIdentity
+ // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html)
+ // in the IAM User Guide.
+ //
+ // The format for this parameter, as described by its regex pattern, is a string
+ // of characters up to 2048 characters in length. The characters can be any
+ // ASCII character from the space character to the end of the valid character
+ // list (\u0020-\u00FF). It can also include the tab (\u0009), linefeed (\u000A),
+ // and carriage return (\u000D) characters.
+ //
+ // The policy plain text must be 2048 bytes or shorter. However, an internal
+ // conversion compresses it into a packed binary format with a separate limit.
+ // The PackedPolicySize response element indicates by percentage how close to
+ // the upper size limit the policy is, with 100% equaling the maximum allowed
+ // size.
+ Policy *string `min:"1" type:"string"`
+
+ // The Amazon Resource Name (ARN) of the SAML provider in IAM that describes
+ // the IdP.
+ //
+ // PrincipalArn is a required field
+ PrincipalArn *string `min:"20" type:"string" required:"true"`
+
+ // The Amazon Resource Name (ARN) of the role that the caller is assuming.
+ //
+ // RoleArn is a required field
+ RoleArn *string `min:"20" type:"string" required:"true"`
+
+ // The base-64 encoded SAML authentication response provided by the IdP.
+ //
+ // For more information, see Configuring a Relying Party and Adding Claims (http://docs.aws.amazon.com/IAM/latest/UserGuide/create-role-saml-IdP-tasks.html)
+ // in the Using IAM guide.
+ //
+ // SAMLAssertion is a required field
+ SAMLAssertion *string `min:"4" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s AssumeRoleWithSAMLInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AssumeRoleWithSAMLInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AssumeRoleWithSAMLInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssumeRoleWithSAMLInput"} + if s.DurationSeconds != nil && *s.DurationSeconds < 900 { + invalidParams.Add(request.NewErrParamMinValue("DurationSeconds", 900)) + } + if s.Policy != nil && len(*s.Policy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Policy", 1)) + } + if s.PrincipalArn == nil { + invalidParams.Add(request.NewErrParamRequired("PrincipalArn")) + } + if s.PrincipalArn != nil && len(*s.PrincipalArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PrincipalArn", 20)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.SAMLAssertion == nil { + invalidParams.Add(request.NewErrParamRequired("SAMLAssertion")) + } + if s.SAMLAssertion != nil && len(*s.SAMLAssertion) < 4 { + invalidParams.Add(request.NewErrParamMinLen("SAMLAssertion", 4)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDurationSeconds sets the DurationSeconds field's value. +func (s *AssumeRoleWithSAMLInput) SetDurationSeconds(v int64) *AssumeRoleWithSAMLInput { + s.DurationSeconds = &v + return s +} + +// SetPolicy sets the Policy field's value. +func (s *AssumeRoleWithSAMLInput) SetPolicy(v string) *AssumeRoleWithSAMLInput { + s.Policy = &v + return s +} + +// SetPrincipalArn sets the PrincipalArn field's value. +func (s *AssumeRoleWithSAMLInput) SetPrincipalArn(v string) *AssumeRoleWithSAMLInput { + s.PrincipalArn = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *AssumeRoleWithSAMLInput) SetRoleArn(v string) *AssumeRoleWithSAMLInput { + s.RoleArn = &v + return s +} + +// SetSAMLAssertion sets the SAMLAssertion field's value. +func (s *AssumeRoleWithSAMLInput) SetSAMLAssertion(v string) *AssumeRoleWithSAMLInput { + s.SAMLAssertion = &v + return s +} + +// Contains the response to a successful AssumeRoleWithSAML request, including +// temporary AWS credentials that can be used to make AWS requests. +type AssumeRoleWithSAMLOutput struct { + _ struct{} `type:"structure"` + + // The identifiers for the temporary security credentials that the operation + // returns. + AssumedRoleUser *AssumedRoleUser `type:"structure"` + + // The value of the Recipient attribute of the SubjectConfirmationData element + // of the SAML assertion. + Audience *string `type:"string"` + + // The temporary security credentials, which include an access key ID, a secret + // access key, and a security (or session) token. + // + // Note: The size of the security token that STS APIs return is not fixed. We + // strongly recommend that you make no assumptions about the maximum size. As + // of this writing, the typical size is less than 4096 bytes, but that can vary. + // Also, future updates to AWS might require larger sizes. + Credentials *Credentials `type:"structure"` + + // The value of the Issuer element of the SAML assertion. + Issuer *string `type:"string"` + + // A hash value based on the concatenation of the Issuer response value, the + // AWS account ID, and the friendly name (the last part of the ARN) of the SAML + // provider in IAM. The combination of NameQualifier and Subject can be used + // to uniquely identify a federated user. 
+ // + // The following pseudocode shows how the hash value is calculated: + // + // BASE64 ( SHA1 ( "https://example.com/saml" + "123456789012" + "/MySAMLIdP" + // ) ) + NameQualifier *string `type:"string"` + + // A percentage value that indicates the size of the policy in packed form. + // The service rejects any policy with a packed size greater than 100 percent, + // which means the policy exceeded the allowed space. + PackedPolicySize *int64 `type:"integer"` + + // The value of the NameID element in the Subject element of the SAML assertion. + Subject *string `type:"string"` + + // The format of the name ID, as defined by the Format attribute in the NameID + // element of the SAML assertion. Typical examples of the format are transient + // or persistent. + // + // If the format includes the prefix urn:oasis:names:tc:SAML:2.0:nameid-format, + // that prefix is removed. For example, urn:oasis:names:tc:SAML:2.0:nameid-format:transient + // is returned as transient. If the format includes any other prefix, the format + // is returned with no modifications. + SubjectType *string `type:"string"` +} + +// String returns the string representation +func (s AssumeRoleWithSAMLOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssumeRoleWithSAMLOutput) GoString() string { + return s.String() +} + +// SetAssumedRoleUser sets the AssumedRoleUser field's value. +func (s *AssumeRoleWithSAMLOutput) SetAssumedRoleUser(v *AssumedRoleUser) *AssumeRoleWithSAMLOutput { + s.AssumedRoleUser = v + return s +} + +// SetAudience sets the Audience field's value. +func (s *AssumeRoleWithSAMLOutput) SetAudience(v string) *AssumeRoleWithSAMLOutput { + s.Audience = &v + return s +} + +// SetCredentials sets the Credentials field's value. +func (s *AssumeRoleWithSAMLOutput) SetCredentials(v *Credentials) *AssumeRoleWithSAMLOutput { + s.Credentials = v + return s +} + +// SetIssuer sets the Issuer field's value. +func (s *AssumeRoleWithSAMLOutput) SetIssuer(v string) *AssumeRoleWithSAMLOutput { + s.Issuer = &v + return s +} + +// SetNameQualifier sets the NameQualifier field's value. +func (s *AssumeRoleWithSAMLOutput) SetNameQualifier(v string) *AssumeRoleWithSAMLOutput { + s.NameQualifier = &v + return s +} + +// SetPackedPolicySize sets the PackedPolicySize field's value. +func (s *AssumeRoleWithSAMLOutput) SetPackedPolicySize(v int64) *AssumeRoleWithSAMLOutput { + s.PackedPolicySize = &v + return s +} + +// SetSubject sets the Subject field's value. +func (s *AssumeRoleWithSAMLOutput) SetSubject(v string) *AssumeRoleWithSAMLOutput { + s.Subject = &v + return s +} + +// SetSubjectType sets the SubjectType field's value. +func (s *AssumeRoleWithSAMLOutput) SetSubjectType(v string) *AssumeRoleWithSAMLOutput { + s.SubjectType = &v + return s +} + +type AssumeRoleWithWebIdentityInput struct { + _ struct{} `type:"structure"` + + // The duration, in seconds, of the role session. The value can range from 900 + // seconds (15 minutes) up to the maximum session duration setting for the role. + // This setting can have a value from 1 hour to 12 hours. If you specify a value + // higher than this setting, the operation fails. For example, if you specify + // a session duration of 12 hours, but your administrator set the maximum session + // duration to 6 hours, your operation fails. 
To learn how to view the maximum + // value for your role, see View the Maximum Session Duration Setting for a + // Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) + // in the IAM User Guide. + // + // By default, the value is set to 3600 seconds. + // + // The DurationSeconds parameter is separate from the duration of a console + // session that you might request using the returned credentials. The request + // to the federation endpoint for a console sign-in token takes a SessionDuration + // parameter that specifies the maximum length of the console session. For more + // information, see Creating a URL that Enables Federated Users to Access the + // AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html) + // in the IAM User Guide. + DurationSeconds *int64 `min:"900" type:"integer"` + + // An IAM policy in JSON format. + // + // The policy parameter is optional. If you pass a policy, the temporary security + // credentials that are returned by the operation have the permissions that + // are allowed by both the access policy of the role that is being assumed, + // and the policy that you pass. This gives you a way to further restrict the + // permissions for the resulting temporary security credentials. You cannot + // use the passed policy to grant permissions that are in excess of those allowed + // by the access policy of the role that is being assumed. For more information, + // see Permissions for AssumeRoleWithWebIdentity (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html) + // in the IAM User Guide. + // + // The format for this parameter, as described by its regex pattern, is a string + // of characters up to 2048 characters in length. The characters can be any + // ASCII character from the space character to the end of the valid character + // list (\u0020-\u00FF). It can also include the tab (\u0009), linefeed (\u000A), + // and carriage return (\u000D) characters. + // + // The policy plain text must be 2048 bytes or shorter. However, an internal + // conversion compresses it into a packed binary format with a separate limit. + // The PackedPolicySize response element indicates by percentage how close to + // the upper size limit the policy is, with 100% equaling the maximum allowed + // size. + Policy *string `min:"1" type:"string"` + + // The fully qualified host component of the domain name of the identity provider. + // + // Specify this value only for OAuth 2.0 access tokens. Currently www.amazon.com + // and graph.facebook.com are the only supported identity providers for OAuth + // 2.0 access tokens. Do not include URL schemes and port numbers. + // + // Do not specify this value for OpenID Connect ID tokens. + ProviderId *string `min:"4" type:"string"` + + // The Amazon Resource Name (ARN) of the role that the caller is assuming. + // + // RoleArn is a required field + RoleArn *string `min:"20" type:"string" required:"true"` + + // An identifier for the assumed role session. Typically, you pass the name + // or identifier that is associated with the user who is using your application. + // That way, the temporary security credentials that your application will use + // are associated with that user. This session name is included as part of the + // ARN and assumed role ID in the AssumedRoleUser response element. 
+ // + // The regex used to validate this parameter is a string of characters consisting + // of upper- and lower-case alphanumeric characters with no spaces. You can + // also include underscores or any of the following characters: =,.@- + // + // RoleSessionName is a required field + RoleSessionName *string `min:"2" type:"string" required:"true"` + + // The OAuth 2.0 access token or OpenID Connect ID token that is provided by + // the identity provider. Your application must get this token by authenticating + // the user who is using your application with a web identity provider before + // the application makes an AssumeRoleWithWebIdentity call. + // + // WebIdentityToken is a required field + WebIdentityToken *string `min:"4" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssumeRoleWithWebIdentityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssumeRoleWithWebIdentityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssumeRoleWithWebIdentityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssumeRoleWithWebIdentityInput"} + if s.DurationSeconds != nil && *s.DurationSeconds < 900 { + invalidParams.Add(request.NewErrParamMinValue("DurationSeconds", 900)) + } + if s.Policy != nil && len(*s.Policy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Policy", 1)) + } + if s.ProviderId != nil && len(*s.ProviderId) < 4 { + invalidParams.Add(request.NewErrParamMinLen("ProviderId", 4)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.RoleSessionName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleSessionName")) + } + if s.RoleSessionName != nil && len(*s.RoleSessionName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RoleSessionName", 2)) + } + if s.WebIdentityToken == nil { + invalidParams.Add(request.NewErrParamRequired("WebIdentityToken")) + } + if s.WebIdentityToken != nil && len(*s.WebIdentityToken) < 4 { + invalidParams.Add(request.NewErrParamMinLen("WebIdentityToken", 4)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDurationSeconds sets the DurationSeconds field's value. +func (s *AssumeRoleWithWebIdentityInput) SetDurationSeconds(v int64) *AssumeRoleWithWebIdentityInput { + s.DurationSeconds = &v + return s +} + +// SetPolicy sets the Policy field's value. +func (s *AssumeRoleWithWebIdentityInput) SetPolicy(v string) *AssumeRoleWithWebIdentityInput { + s.Policy = &v + return s +} + +// SetProviderId sets the ProviderId field's value. +func (s *AssumeRoleWithWebIdentityInput) SetProviderId(v string) *AssumeRoleWithWebIdentityInput { + s.ProviderId = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *AssumeRoleWithWebIdentityInput) SetRoleArn(v string) *AssumeRoleWithWebIdentityInput { + s.RoleArn = &v + return s +} + +// SetRoleSessionName sets the RoleSessionName field's value. +func (s *AssumeRoleWithWebIdentityInput) SetRoleSessionName(v string) *AssumeRoleWithWebIdentityInput { + s.RoleSessionName = &v + return s +} + +// SetWebIdentityToken sets the WebIdentityToken field's value. 
+func (s *AssumeRoleWithWebIdentityInput) SetWebIdentityToken(v string) *AssumeRoleWithWebIdentityInput { + s.WebIdentityToken = &v + return s +} + +// Contains the response to a successful AssumeRoleWithWebIdentity request, +// including temporary AWS credentials that can be used to make AWS requests. +type AssumeRoleWithWebIdentityOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) and the assumed role ID, which are identifiers + // that you can use to refer to the resulting temporary security credentials. + // For example, you can reference these credentials as a principal in a resource-based + // policy by using the ARN or assumed role ID. The ARN and ID include the RoleSessionName + // that you specified when you called AssumeRole. + AssumedRoleUser *AssumedRoleUser `type:"structure"` + + // The intended audience (also known as client ID) of the web identity token. + // This is traditionally the client identifier issued to the application that + // requested the web identity token. + Audience *string `type:"string"` + + // The temporary security credentials, which include an access key ID, a secret + // access key, and a security token. + // + // Note: The size of the security token that STS APIs return is not fixed. We + // strongly recommend that you make no assumptions about the maximum size. As + // of this writing, the typical size is less than 4096 bytes, but that can vary. + // Also, future updates to AWS might require larger sizes. + Credentials *Credentials `type:"structure"` + + // A percentage value that indicates the size of the policy in packed form. + // The service rejects any policy with a packed size greater than 100 percent, + // which means the policy exceeded the allowed space. + PackedPolicySize *int64 `type:"integer"` + + // The issuing authority of the web identity token presented. For OpenID Connect + // ID Tokens this contains the value of the iss field. For OAuth 2.0 access + // tokens, this contains the value of the ProviderId parameter that was passed + // in the AssumeRoleWithWebIdentity request. + Provider *string `type:"string"` + + // The unique user identifier that is returned by the identity provider. This + // identifier is associated with the WebIdentityToken that was submitted with + // the AssumeRoleWithWebIdentity call. The identifier is typically unique to + // the user and the application that acquired the WebIdentityToken (pairwise + // identifier). For OpenID Connect ID tokens, this field contains the value + // returned by the identity provider as the token's sub (Subject) claim. + SubjectFromWebIdentityToken *string `min:"6" type:"string"` +} + +// String returns the string representation +func (s AssumeRoleWithWebIdentityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssumeRoleWithWebIdentityOutput) GoString() string { + return s.String() +} + +// SetAssumedRoleUser sets the AssumedRoleUser field's value. +func (s *AssumeRoleWithWebIdentityOutput) SetAssumedRoleUser(v *AssumedRoleUser) *AssumeRoleWithWebIdentityOutput { + s.AssumedRoleUser = v + return s +} + +// SetAudience sets the Audience field's value. +func (s *AssumeRoleWithWebIdentityOutput) SetAudience(v string) *AssumeRoleWithWebIdentityOutput { + s.Audience = &v + return s +} + +// SetCredentials sets the Credentials field's value. 
+func (s *AssumeRoleWithWebIdentityOutput) SetCredentials(v *Credentials) *AssumeRoleWithWebIdentityOutput { + s.Credentials = v + return s +} + +// SetPackedPolicySize sets the PackedPolicySize field's value. +func (s *AssumeRoleWithWebIdentityOutput) SetPackedPolicySize(v int64) *AssumeRoleWithWebIdentityOutput { + s.PackedPolicySize = &v + return s +} + +// SetProvider sets the Provider field's value. +func (s *AssumeRoleWithWebIdentityOutput) SetProvider(v string) *AssumeRoleWithWebIdentityOutput { + s.Provider = &v + return s +} + +// SetSubjectFromWebIdentityToken sets the SubjectFromWebIdentityToken field's value. +func (s *AssumeRoleWithWebIdentityOutput) SetSubjectFromWebIdentityToken(v string) *AssumeRoleWithWebIdentityOutput { + s.SubjectFromWebIdentityToken = &v + return s +} + +// The identifiers for the temporary security credentials that the operation +// returns. +type AssumedRoleUser struct { + _ struct{} `type:"structure"` + + // The ARN of the temporary security credentials that are returned from the + // AssumeRole action. For more information about ARNs and how to use them in + // policies, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) + // in Using IAM. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // A unique identifier that contains the role ID and the role session name of + // the role that is being assumed. The role ID is generated by AWS when the + // role is created. + // + // AssumedRoleId is a required field + AssumedRoleId *string `min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssumedRoleUser) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssumedRoleUser) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *AssumedRoleUser) SetArn(v string) *AssumedRoleUser { + s.Arn = &v + return s +} + +// SetAssumedRoleId sets the AssumedRoleId field's value. +func (s *AssumedRoleUser) SetAssumedRoleId(v string) *AssumedRoleUser { + s.AssumedRoleId = &v + return s +} + +// AWS credentials for API authentication. +type Credentials struct { + _ struct{} `type:"structure"` + + // The access key ID that identifies the temporary security credentials. + // + // AccessKeyId is a required field + AccessKeyId *string `min:"16" type:"string" required:"true"` + + // The date on which the current credentials expire. + // + // Expiration is a required field + Expiration *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + + // The secret access key that can be used to sign requests. + // + // SecretAccessKey is a required field + SecretAccessKey *string `type:"string" required:"true"` + + // The token that users must pass to the service API to use the temporary credentials. + // + // SessionToken is a required field + SessionToken *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Credentials) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Credentials) GoString() string { + return s.String() +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *Credentials) SetAccessKeyId(v string) *Credentials { + s.AccessKeyId = &v + return s +} + +// SetExpiration sets the Expiration field's value. 
+func (s *Credentials) SetExpiration(v time.Time) *Credentials { + s.Expiration = &v + return s +} + +// SetSecretAccessKey sets the SecretAccessKey field's value. +func (s *Credentials) SetSecretAccessKey(v string) *Credentials { + s.SecretAccessKey = &v + return s +} + +// SetSessionToken sets the SessionToken field's value. +func (s *Credentials) SetSessionToken(v string) *Credentials { + s.SessionToken = &v + return s +} + +type DecodeAuthorizationMessageInput struct { + _ struct{} `type:"structure"` + + // The encoded message that was returned with the response. + // + // EncodedMessage is a required field + EncodedMessage *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DecodeAuthorizationMessageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DecodeAuthorizationMessageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DecodeAuthorizationMessageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DecodeAuthorizationMessageInput"} + if s.EncodedMessage == nil { + invalidParams.Add(request.NewErrParamRequired("EncodedMessage")) + } + if s.EncodedMessage != nil && len(*s.EncodedMessage) < 1 { + invalidParams.Add(request.NewErrParamMinLen("EncodedMessage", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncodedMessage sets the EncodedMessage field's value. +func (s *DecodeAuthorizationMessageInput) SetEncodedMessage(v string) *DecodeAuthorizationMessageInput { + s.EncodedMessage = &v + return s +} + +// A document that contains additional information about the authorization status +// of a request from an encoded message that is returned in response to an AWS +// request. +type DecodeAuthorizationMessageOutput struct { + _ struct{} `type:"structure"` + + // An XML document that contains the decoded message. + DecodedMessage *string `type:"string"` +} + +// String returns the string representation +func (s DecodeAuthorizationMessageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DecodeAuthorizationMessageOutput) GoString() string { + return s.String() +} + +// SetDecodedMessage sets the DecodedMessage field's value. +func (s *DecodeAuthorizationMessageOutput) SetDecodedMessage(v string) *DecodeAuthorizationMessageOutput { + s.DecodedMessage = &v + return s +} + +// Identifiers for the federated user that is associated with the credentials. +type FederatedUser struct { + _ struct{} `type:"structure"` + + // The ARN that specifies the federated user that is associated with the credentials. + // For more information about ARNs and how to use them in policies, see IAM + // Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) + // in Using IAM. + // + // Arn is a required field + Arn *string `min:"20" type:"string" required:"true"` + + // The string that identifies the federated user associated with the credentials, + // similar to the unique ID of an IAM user. 
+ // + // FederatedUserId is a required field + FederatedUserId *string `min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s FederatedUser) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FederatedUser) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *FederatedUser) SetArn(v string) *FederatedUser { + s.Arn = &v + return s +} + +// SetFederatedUserId sets the FederatedUserId field's value. +func (s *FederatedUser) SetFederatedUserId(v string) *FederatedUser { + s.FederatedUserId = &v + return s +} + +type GetCallerIdentityInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetCallerIdentityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCallerIdentityInput) GoString() string { + return s.String() +} + +// Contains the response to a successful GetCallerIdentity request, including +// information about the entity making the request. +type GetCallerIdentityOutput struct { + _ struct{} `type:"structure"` + + // The AWS account ID number of the account that owns or contains the calling + // entity. + Account *string `type:"string"` + + // The AWS ARN associated with the calling entity. + Arn *string `min:"20" type:"string"` + + // The unique identifier of the calling entity. The exact value depends on the + // type of entity making the call. The values returned are those listed in the + // aws:userid column in the Principal table (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html#principaltable) + // found on the Policy Variables reference page in the IAM User Guide. + UserId *string `type:"string"` +} + +// String returns the string representation +func (s GetCallerIdentityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCallerIdentityOutput) GoString() string { + return s.String() +} + +// SetAccount sets the Account field's value. +func (s *GetCallerIdentityOutput) SetAccount(v string) *GetCallerIdentityOutput { + s.Account = &v + return s +} + +// SetArn sets the Arn field's value. +func (s *GetCallerIdentityOutput) SetArn(v string) *GetCallerIdentityOutput { + s.Arn = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *GetCallerIdentityOutput) SetUserId(v string) *GetCallerIdentityOutput { + s.UserId = &v + return s +} + +type GetFederationTokenInput struct { + _ struct{} `type:"structure"` + + // The duration, in seconds, that the session should last. Acceptable durations + // for federation sessions range from 900 seconds (15 minutes) to 129600 seconds + // (36 hours), with 43200 seconds (12 hours) as the default. Sessions obtained + // using AWS account (root) credentials are restricted to a maximum of 3600 + // seconds (one hour). If the specified duration is longer than one hour, the + // session obtained by using AWS account (root) credentials defaults to one + // hour. + DurationSeconds *int64 `min:"900" type:"integer"` + + // The name of the federated user. The name is used as an identifier for the + // temporary security credentials (such as Bob). For example, you can reference + // the federated user name in a resource-based policy, such as in an Amazon + // S3 bucket policy. 
+ // + // The regex used to validate this parameter is a string of characters consisting + // of upper- and lower-case alphanumeric characters with no spaces. You can + // also include underscores or any of the following characters: =,.@- + // + // Name is a required field + Name *string `min:"2" type:"string" required:"true"` + + // An IAM policy in JSON format that is passed with the GetFederationToken call + // and evaluated along with the policy or policies that are attached to the + // IAM user whose credentials are used to call GetFederationToken. The passed + // policy is used to scope down the permissions that are available to the IAM + // user, by allowing only a subset of the permissions that are granted to the + // IAM user. The passed policy cannot grant more permissions than those granted + // to the IAM user. The final permissions for the federated user are the most + // restrictive set based on the intersection of the passed policy and the IAM + // user policy. + // + // If you do not pass a policy, the resulting temporary security credentials + // have no effective permissions. The only exception is when the temporary security + // credentials are used to access a resource that has a resource-based policy + // that specifically allows the federated user to access the resource. + // + // The format for this parameter, as described by its regex pattern, is a string + // of characters up to 2048 characters in length. The characters can be any + // ASCII character from the space character to the end of the valid character + // list (\u0020-\u00FF). It can also include the tab (\u0009), linefeed (\u000A), + // and carriage return (\u000D) characters. + // + // The policy plain text must be 2048 bytes or shorter. However, an internal + // conversion compresses it into a packed binary format with a separate limit. + // The PackedPolicySize response element indicates by percentage how close to + // the upper size limit the policy is, with 100% equaling the maximum allowed + // size. + // + // For more information about how permissions work, see Permissions for GetFederationToken + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_getfederationtoken.html). + Policy *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetFederationTokenInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetFederationTokenInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetFederationTokenInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFederationTokenInput"} + if s.DurationSeconds != nil && *s.DurationSeconds < 900 { + invalidParams.Add(request.NewErrParamMinValue("DurationSeconds", 900)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 2 { + invalidParams.Add(request.NewErrParamMinLen("Name", 2)) + } + if s.Policy != nil && len(*s.Policy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Policy", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDurationSeconds sets the DurationSeconds field's value. +func (s *GetFederationTokenInput) SetDurationSeconds(v int64) *GetFederationTokenInput { + s.DurationSeconds = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *GetFederationTokenInput) SetName(v string) *GetFederationTokenInput { + s.Name = &v + return s +} + +// SetPolicy sets the Policy field's value. +func (s *GetFederationTokenInput) SetPolicy(v string) *GetFederationTokenInput { + s.Policy = &v + return s +} + +// Contains the response to a successful GetFederationToken request, including +// temporary AWS credentials that can be used to make AWS requests. +type GetFederationTokenOutput struct { + _ struct{} `type:"structure"` + + // The temporary security credentials, which include an access key ID, a secret + // access key, and a security (or session) token. + // + // Note: The size of the security token that STS APIs return is not fixed. We + // strongly recommend that you make no assumptions about the maximum size. As + // of this writing, the typical size is less than 4096 bytes, but that can vary. + // Also, future updates to AWS might require larger sizes. + Credentials *Credentials `type:"structure"` + + // Identifiers for the federated user associated with the credentials (such + // as arn:aws:sts::123456789012:federated-user/Bob or 123456789012:Bob). You + // can use the federated user's ARN in your resource-based policies, such as + // an Amazon S3 bucket policy. + FederatedUser *FederatedUser `type:"structure"` + + // A percentage value indicating the size of the policy in packed form. The + // service rejects policies for which the packed size is greater than 100 percent + // of the allowed value. + PackedPolicySize *int64 `type:"integer"` +} + +// String returns the string representation +func (s GetFederationTokenOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetFederationTokenOutput) GoString() string { + return s.String() +} + +// SetCredentials sets the Credentials field's value. +func (s *GetFederationTokenOutput) SetCredentials(v *Credentials) *GetFederationTokenOutput { + s.Credentials = v + return s +} + +// SetFederatedUser sets the FederatedUser field's value. +func (s *GetFederationTokenOutput) SetFederatedUser(v *FederatedUser) *GetFederationTokenOutput { + s.FederatedUser = v + return s +} + +// SetPackedPolicySize sets the PackedPolicySize field's value. +func (s *GetFederationTokenOutput) SetPackedPolicySize(v int64) *GetFederationTokenOutput { + s.PackedPolicySize = &v + return s +} + +type GetSessionTokenInput struct { + _ struct{} `type:"structure"` + + // The duration, in seconds, that the credentials should remain valid. Acceptable + // durations for IAM user sessions range from 900 seconds (15 minutes) to 129600 + // seconds (36 hours), with 43200 seconds (12 hours) as the default. Sessions + // for AWS account owners are restricted to a maximum of 3600 seconds (one hour). + // If the duration is longer than one hour, the session for AWS account owners + // defaults to one hour. + DurationSeconds *int64 `min:"900" type:"integer"` + + // The identification number of the MFA device that is associated with the IAM + // user who is making the GetSessionToken call. Specify this value if the IAM + // user has a policy that requires MFA authentication. The value is either the + // serial number for a hardware device (such as GAHT12345678) or an Amazon Resource + // Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user). + // You can find the device for an IAM user by going to the AWS Management Console + // and viewing the user's security credentials. 
+    //
+    // The regex used to validate this parameter is a string of characters consisting
+    // of upper- and lower-case alphanumeric characters with no spaces. You can
+    // also include underscores or any of the following characters: =,.@:/-
+    SerialNumber *string `min:"9" type:"string"`
+
+    // The value provided by the MFA device, if MFA is required. If any policy requires
+    // the IAM user to submit an MFA code, specify this value. If MFA authentication
+    // is required, and the user does not provide a code when requesting a set of
+    // temporary security credentials, the user will receive an "access denied"
+    // response when requesting resources that require MFA authentication.
+    //
+    // The format for this parameter, as described by its regex pattern, is a sequence
+    // of six numeric digits.
+    TokenCode *string `min:"6" type:"string"`
+}
+
+// String returns the string representation
+func (s GetSessionTokenInput) String() string {
+    return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetSessionTokenInput) GoString() string {
+    return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *GetSessionTokenInput) Validate() error {
+    invalidParams := request.ErrInvalidParams{Context: "GetSessionTokenInput"}
+    if s.DurationSeconds != nil && *s.DurationSeconds < 900 {
+        invalidParams.Add(request.NewErrParamMinValue("DurationSeconds", 900))
+    }
+    if s.SerialNumber != nil && len(*s.SerialNumber) < 9 {
+        invalidParams.Add(request.NewErrParamMinLen("SerialNumber", 9))
+    }
+    if s.TokenCode != nil && len(*s.TokenCode) < 6 {
+        invalidParams.Add(request.NewErrParamMinLen("TokenCode", 6))
+    }
+
+    if invalidParams.Len() > 0 {
+        return invalidParams
+    }
+    return nil
+}
+
+// SetDurationSeconds sets the DurationSeconds field's value.
+func (s *GetSessionTokenInput) SetDurationSeconds(v int64) *GetSessionTokenInput {
+    s.DurationSeconds = &v
+    return s
+}
+
+// SetSerialNumber sets the SerialNumber field's value.
+func (s *GetSessionTokenInput) SetSerialNumber(v string) *GetSessionTokenInput {
+    s.SerialNumber = &v
+    return s
+}
+
+// SetTokenCode sets the TokenCode field's value.
+func (s *GetSessionTokenInput) SetTokenCode(v string) *GetSessionTokenInput {
+    s.TokenCode = &v
+    return s
+}
+
+// Contains the response to a successful GetSessionToken request, including
+// temporary AWS credentials that can be used to make AWS requests.
+type GetSessionTokenOutput struct {
+    _ struct{} `type:"structure"`
+
+    // The temporary security credentials, which include an access key ID, a secret
+    // access key, and a security (or session) token.
+    //
+    // Note: The size of the security token that STS APIs return is not fixed. We
+    // strongly recommend that you make no assumptions about the maximum size. As
+    // of this writing, the typical size is less than 4096 bytes, but that can vary.
+    // Also, future updates to AWS might require larger sizes.
+    Credentials *Credentials `type:"structure"`
+}
+
+// String returns the string representation
+func (s GetSessionTokenOutput) String() string {
+    return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetSessionTokenOutput) GoString() string {
+    return s.String()
+}
+
+// SetCredentials sets the Credentials field's value.
+func (s *GetSessionTokenOutput) SetCredentials(v *Credentials) *GetSessionTokenOutput { + s.Credentials = v + return s +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/sts/customizations.go new file mode 100644 index 00000000..4010cc7f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/customizations.go @@ -0,0 +1,12 @@ +package sts + +import "github.com/aws/aws-sdk-go/aws/request" + +func init() { + initRequest = func(r *request.Request) { + switch r.Operation.Name { + case opAssumeRoleWithSAML, opAssumeRoleWithWebIdentity: + r.Handlers.Sign.Clear() // these operations are unsigned + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go b/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go new file mode 100644 index 00000000..ef681ab0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go @@ -0,0 +1,72 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package sts provides the client and types for making API +// requests to AWS Security Token Service. +// +// The AWS Security Token Service (STS) is a web service that enables you to +// request temporary, limited-privilege credentials for AWS Identity and Access +// Management (IAM) users or for users that you authenticate (federated users). +// This guide provides descriptions of the STS API. For more detailed information +// about using this service, go to Temporary Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html). +// +// As an alternative to using the API, you can use one of the AWS SDKs, which +// consist of libraries and sample code for various programming languages and +// platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient +// way to create programmatic access to STS. For example, the SDKs take care +// of cryptographically signing requests, managing errors, and retrying requests +// automatically. For information about the AWS SDKs, including how to download +// and install them, see the Tools for Amazon Web Services page (http://aws.amazon.com/tools/). +// +// For information about setting up signatures and authorization through the +// API, go to Signing AWS API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) +// in the AWS General Reference. For general information about the Query API, +// go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in Using IAM. For information about using security tokens with other AWS +// products, go to AWS Services That Work with IAM (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) +// in the IAM User Guide. +// +// If you're new to AWS and need additional technical information about a specific +// AWS product, you can find the product's technical documentation at http://aws.amazon.com/documentation/ +// (http://aws.amazon.com/documentation/). +// +// Endpoints +// +// The AWS Security Token Service (STS) has a default endpoint of https://sts.amazonaws.com +// that maps to the US East (N. Virginia) region. Additional regions are available +// and are activated by default. For more information, see Activating and Deactivating +// AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the IAM User Guide. 
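For orientation, here is a minimal usage sketch of the generated client above: it creates a session, pins the client to an explicit region via aws.Config, and makes an AssumeRoleWithWebIdentity call (which, per customizations.go above, is sent unsigned). The role ARN, session name, token value, and region are placeholders for illustration, not values taken from this change.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Build a session and an STS client bound to a specific regional endpoint.
	sess := session.Must(session.NewSession())
	svc := sts.New(sess, aws.NewConfig().WithRegion("us-west-2"))

	// AssumeRoleWithWebIdentity exchanges an OIDC/OAuth token for temporary
	// credentials; all identifiers below are illustrative placeholders.
	out, err := svc.AssumeRoleWithWebIdentity(&sts.AssumeRoleWithWebIdentityInput{
		RoleArn:          aws.String("arn:aws:iam::123456789012:role/ExampleWebIdentityRole"),
		RoleSessionName:  aws.String("example-app-session"),
		WebIdentityToken: aws.String("token-from-identity-provider"),
		DurationSeconds:  aws.Int64(3600),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("access key:", aws.StringValue(out.Credentials.AccessKeyId))
}

As the DurationSeconds documentation above notes, the call fails if the requested duration exceeds the maximum session duration configured on the role.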
+// +// For information about STS endpoints, see Regions and Endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html#sts_region) +// in the AWS General Reference. +// +// Recording API requests +// +// STS supports AWS CloudTrail, which is a service that records AWS calls for +// your AWS account and delivers log files to an Amazon S3 bucket. By using +// information collected by CloudTrail, you can determine what requests were +// successfully made to STS, who made the request, when it was made, and so +// on. To learn more about CloudTrail, including how to turn it on and find +// your log files, see the AWS CloudTrail User Guide (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html). +// +// See https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15 for more information on this service. +// +// See sts package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/sts/ +// +// Using the Client +// +// To contact AWS Security Token Service with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Security Token Service client STS for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/sts/#New +package sts diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/errors.go b/vendor/github.com/aws/aws-sdk-go/service/sts/errors.go new file mode 100644 index 00000000..e24884ef --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/errors.go @@ -0,0 +1,73 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package sts + +const ( + + // ErrCodeExpiredTokenException for service response error code + // "ExpiredTokenException". + // + // The web identity token that was passed is expired or is not valid. Get a + // new identity token from the identity provider and then retry the request. + ErrCodeExpiredTokenException = "ExpiredTokenException" + + // ErrCodeIDPCommunicationErrorException for service response error code + // "IDPCommunicationError". + // + // The request could not be fulfilled because the non-AWS identity provider + // (IDP) that was asked to verify the incoming identity token could not be reached. + // This is often a transient error caused by network conditions. Retry the request + // a limited number of times so that you don't exceed the request rate. If the + // error persists, the non-AWS identity provider might be down or not responding. + ErrCodeIDPCommunicationErrorException = "IDPCommunicationError" + + // ErrCodeIDPRejectedClaimException for service response error code + // "IDPRejectedClaim". + // + // The identity provider (IdP) reported that authentication failed. This might + // be because the claim is invalid. + // + // If this error is returned for the AssumeRoleWithWebIdentity operation, it + // can also mean that the claim has expired or has been explicitly revoked. + ErrCodeIDPRejectedClaimException = "IDPRejectedClaim" + + // ErrCodeInvalidAuthorizationMessageException for service response error code + // "InvalidAuthorizationMessageException". 
+    //
+    // The error returned if the message passed to DecodeAuthorizationMessage was
+    // invalid. This can happen if the token contains invalid characters, such as
+    // linebreaks.
+    ErrCodeInvalidAuthorizationMessageException = "InvalidAuthorizationMessageException"
+
+    // ErrCodeInvalidIdentityTokenException for service response error code
+    // "InvalidIdentityToken".
+    //
+    // The web identity token that was passed could not be validated by AWS. Get
+    // a new identity token from the identity provider and then retry the request.
+    ErrCodeInvalidIdentityTokenException = "InvalidIdentityToken"
+
+    // ErrCodeMalformedPolicyDocumentException for service response error code
+    // "MalformedPolicyDocument".
+    //
+    // The request was rejected because the policy document was malformed. The error
+    // message describes the specific error.
+    ErrCodeMalformedPolicyDocumentException = "MalformedPolicyDocument"
+
+    // ErrCodePackedPolicyTooLargeException for service response error code
+    // "PackedPolicyTooLarge".
+    //
+    // The request was rejected because the policy document was too large. The error
+    // message describes how big the policy document is, in packed form, as a percentage
+    // of what the API allows.
+    ErrCodePackedPolicyTooLargeException = "PackedPolicyTooLarge"
+
+    // ErrCodeRegionDisabledException for service response error code
+    // "RegionDisabledException".
+    //
+    // STS is not activated in the requested region for the account that is being
+    // asked to generate credentials. The account administrator must use the IAM
+    // console to activate STS in that region. For more information, see Activating
+    // and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html)
+    // in the IAM User Guide.
+    ErrCodeRegionDisabledException = "RegionDisabledException"
+)
diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/service.go b/vendor/github.com/aws/aws-sdk-go/service/sts/service.go
new file mode 100644
index 00000000..1ee5839e
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/sts/service.go
@@ -0,0 +1,93 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+package sts
+
+import (
+    "github.com/aws/aws-sdk-go/aws"
+    "github.com/aws/aws-sdk-go/aws/client"
+    "github.com/aws/aws-sdk-go/aws/client/metadata"
+    "github.com/aws/aws-sdk-go/aws/request"
+    "github.com/aws/aws-sdk-go/aws/signer/v4"
+    "github.com/aws/aws-sdk-go/private/protocol/query"
+)
+
+// STS provides the API operation methods for making requests to
+// AWS Security Token Service. See this package's package overview docs
+// for details on the service.
+//
+// STS methods are safe to use concurrently. It is not safe to
+// modify any of the struct's properties though.
+type STS struct {
+    *client.Client
+}
+
+// Used for custom client initialization logic
+var initClient func(*client.Client)
+
+// Used for custom request initialization logic
+var initRequest func(*request.Request)
+
+// Service information constants
+const (
+    ServiceName = "sts"       // Service endpoint prefix API calls made to.
+    EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata.
+)
+
+// New creates a new instance of the STS client with a session.
+// If additional configuration is needed for the client instance use the optional
+// aws.Config parameter to add your extra config.
+//
+// Example:
+//     // Create a STS client from just a session.
+//     svc := sts.New(mySession)
+//
+//     // Create a STS client with additional configuration
+//     svc := sts.New(mySession, aws.NewConfig().WithRegion("us-west-2"))
+func New(p client.ConfigProvider, cfgs ...*aws.Config) *STS {
+    c := p.ClientConfig(EndpointsID, cfgs...)
+    return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName)
+}
+
+// newClient creates, initializes and returns a new service client instance.
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *STS {
+    svc := &STS{
+        Client: client.New(
+            cfg,
+            metadata.ClientInfo{
+                ServiceName:   ServiceName,
+                SigningName:   signingName,
+                SigningRegion: signingRegion,
+                Endpoint:      endpoint,
+                APIVersion:    "2011-06-15",
+            },
+            handlers,
+        ),
+    }
+
+    // Handlers
+    svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler)
+    svc.Handlers.Build.PushBackNamed(query.BuildHandler)
+    svc.Handlers.Unmarshal.PushBackNamed(query.UnmarshalHandler)
+    svc.Handlers.UnmarshalMeta.PushBackNamed(query.UnmarshalMetaHandler)
+    svc.Handlers.UnmarshalError.PushBackNamed(query.UnmarshalErrorHandler)
+
+    // Run custom client initialization if present
+    if initClient != nil {
+        initClient(svc.Client)
+    }
+
+    return svc
+}
+
+// newRequest creates a new request for a STS operation and runs any
+// custom request initialization.
+func (c *STS) newRequest(op *request.Operation, params, data interface{}) *request.Request {
+    req := c.NewRequest(op, params, data)
+
+    // Run custom request initialization if present
+    if initRequest != nil {
+        initRequest(req)
+    }
+
+    return req
+}
diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/stsiface/interface.go b/vendor/github.com/aws/aws-sdk-go/service/sts/stsiface/interface.go
new file mode 100644
index 00000000..1eba20b0
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/sts/stsiface/interface.go
@@ -0,0 +1,92 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+// Package stsiface provides an interface to enable mocking the AWS Security Token Service service client
+// for testing your code.
+//
+// It is important to note that this interface will have breaking changes
+// when the service model is updated and adds new API operations, paginators,
+// and waiters.
+package stsiface
+
+import (
+    "github.com/aws/aws-sdk-go/aws"
+    "github.com/aws/aws-sdk-go/aws/request"
+    "github.com/aws/aws-sdk-go/service/sts"
+)
+
+// STSAPI provides an interface to enable mocking the
+// sts.STS service client's API operation,
+// paginators, and waiters. This makes unit testing your code that calls out
+// to the SDK's service client's calls easier.
+//
+// The best way to use this interface is so the SDK's service client's calls
+// can be stubbed out for unit testing your code with the SDK without needing
+// to inject custom request handlers into the SDK's request pipeline.
+//
+//     // myFunc uses an SDK service client to make a request to
+//     // AWS Security Token Service.
+//     func myFunc(svc stsiface.STSAPI) bool {
+//         // Make svc.AssumeRole request
+//     }
+//
+//     func main() {
+//         sess := session.New()
+//         svc := sts.New(sess)
+//
+//         myFunc(svc)
+//     }
+//
+// In your _test.go file:
+//
+//     // Define a mock struct to be used in your unit tests of myFunc.
+//     type mockSTSClient struct {
+//         stsiface.STSAPI
+//     }
+//     func (m *mockSTSClient) AssumeRole(input *sts.AssumeRoleInput) (*sts.AssumeRoleOutput, error) {
+//         // mock response/functionality
+//     }
+//
+//     func TestMyFunc(t *testing.T) {
+//         // Setup Test
+//         mockSvc := &mockSTSClient{}
+//
+//         myFunc(mockSvc)
+//
+//         // Verify myFunc's functionality
+//     }
+//
+// It is important to note that this interface will have breaking changes
+// when the service model is updated and adds new API operations, paginators,
+// and waiters. It's suggested to use the pattern above for testing, or to use
+// tooling to generate mocks to satisfy the interfaces.
+type STSAPI interface {
+    AssumeRole(*sts.AssumeRoleInput) (*sts.AssumeRoleOutput, error)
+    AssumeRoleWithContext(aws.Context, *sts.AssumeRoleInput, ...request.Option) (*sts.AssumeRoleOutput, error)
+    AssumeRoleRequest(*sts.AssumeRoleInput) (*request.Request, *sts.AssumeRoleOutput)
+
+    AssumeRoleWithSAML(*sts.AssumeRoleWithSAMLInput) (*sts.AssumeRoleWithSAMLOutput, error)
+    AssumeRoleWithSAMLWithContext(aws.Context, *sts.AssumeRoleWithSAMLInput, ...request.Option) (*sts.AssumeRoleWithSAMLOutput, error)
+    AssumeRoleWithSAMLRequest(*sts.AssumeRoleWithSAMLInput) (*request.Request, *sts.AssumeRoleWithSAMLOutput)
+
+    AssumeRoleWithWebIdentity(*sts.AssumeRoleWithWebIdentityInput) (*sts.AssumeRoleWithWebIdentityOutput, error)
+    AssumeRoleWithWebIdentityWithContext(aws.Context, *sts.AssumeRoleWithWebIdentityInput, ...request.Option) (*sts.AssumeRoleWithWebIdentityOutput, error)
+    AssumeRoleWithWebIdentityRequest(*sts.AssumeRoleWithWebIdentityInput) (*request.Request, *sts.AssumeRoleWithWebIdentityOutput)
+
+    DecodeAuthorizationMessage(*sts.DecodeAuthorizationMessageInput) (*sts.DecodeAuthorizationMessageOutput, error)
+    DecodeAuthorizationMessageWithContext(aws.Context, *sts.DecodeAuthorizationMessageInput, ...request.Option) (*sts.DecodeAuthorizationMessageOutput, error)
+    DecodeAuthorizationMessageRequest(*sts.DecodeAuthorizationMessageInput) (*request.Request, *sts.DecodeAuthorizationMessageOutput)
+
+    GetCallerIdentity(*sts.GetCallerIdentityInput) (*sts.GetCallerIdentityOutput, error)
+    GetCallerIdentityWithContext(aws.Context, *sts.GetCallerIdentityInput, ...request.Option) (*sts.GetCallerIdentityOutput, error)
+    GetCallerIdentityRequest(*sts.GetCallerIdentityInput) (*request.Request, *sts.GetCallerIdentityOutput)
+
+    GetFederationToken(*sts.GetFederationTokenInput) (*sts.GetFederationTokenOutput, error)
+    GetFederationTokenWithContext(aws.Context, *sts.GetFederationTokenInput, ...request.Option) (*sts.GetFederationTokenOutput, error)
+    GetFederationTokenRequest(*sts.GetFederationTokenInput) (*request.Request, *sts.GetFederationTokenOutput)
+
+    GetSessionToken(*sts.GetSessionTokenInput) (*sts.GetSessionTokenOutput, error)
+    GetSessionTokenWithContext(aws.Context, *sts.GetSessionTokenInput, ...request.Option) (*sts.GetSessionTokenOutput, error)
+    GetSessionTokenRequest(*sts.GetSessionTokenInput) (*request.Request, *sts.GetSessionTokenOutput)
+}
+
+var _ STSAPI = (*sts.STS)(nil)
diff --git a/vendor/github.com/beorn7/perks/LICENSE b/vendor/github.com/beorn7/perks/LICENSE
new file mode 100644
index 00000000..339177be
--- /dev/null
+++ b/vendor/github.com/beorn7/perks/LICENSE
@@ -0,0 +1,20 @@
+Copyright (C) 2013 Blake Mizerany
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the
rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/beorn7/perks/quantile/stream.go b/vendor/github.com/beorn7/perks/quantile/stream.go new file mode 100644 index 00000000..d7d14f8e --- /dev/null +++ b/vendor/github.com/beorn7/perks/quantile/stream.go @@ -0,0 +1,316 @@ +// Package quantile computes approximate quantiles over an unbounded data +// stream within low memory and CPU bounds. +// +// A small amount of accuracy is traded to achieve the above properties. +// +// Multiple streams can be merged before calling Query to generate a single set +// of results. This is meaningful when the streams represent the same type of +// data. See Merge and Samples. +// +// For more detailed information about the algorithm used, see: +// +// Effective Computation of Biased Quantiles over Data Streams +// +// http://www.cs.rutgers.edu/~muthu/bquant.pdf +package quantile + +import ( + "math" + "sort" +) + +// Sample holds an observed value and meta information for compression. JSON +// tags have been added for convenience. +type Sample struct { + Value float64 `json:",string"` + Width float64 `json:",string"` + Delta float64 `json:",string"` +} + +// Samples represents a slice of samples. It implements sort.Interface. +type Samples []Sample + +func (a Samples) Len() int { return len(a) } +func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value } +func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] } + +type invariant func(s *stream, r float64) float64 + +// NewLowBiased returns an initialized Stream for low-biased quantiles +// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but +// error guarantees can still be given even for the lower ranks of the data +// distribution. +// +// The provided epsilon is a relative error, i.e. the true quantile of a value +// returned by a query is guaranteed to be within (1±Epsilon)*Quantile. +// +// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error +// properties. +func NewLowBiased(epsilon float64) *Stream { + ƒ := func(s *stream, r float64) float64 { + return 2 * epsilon * r + } + return newStream(ƒ) +} + +// NewHighBiased returns an initialized Stream for high-biased quantiles +// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but +// error guarantees can still be given even for the higher ranks of the data +// distribution. +// +// The provided epsilon is a relative error, i.e. the true quantile of a value +// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile). +// +// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error +// properties. 
+func NewHighBiased(epsilon float64) *Stream { + ƒ := func(s *stream, r float64) float64 { + return 2 * epsilon * (s.n - r) + } + return newStream(ƒ) +} + +// NewTargeted returns an initialized Stream concerned with a particular set of +// quantile values that are supplied a priori. Knowing these a priori reduces +// space and computation time. The targets map maps the desired quantiles to +// their absolute errors, i.e. the true quantile of a value returned by a query +// is guaranteed to be within (Quantile±Epsilon). +// +// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties. +func NewTargeted(targetMap map[float64]float64) *Stream { + // Convert map to slice to avoid slow iterations on a map. + // ƒ is called on the hot path, so converting the map to a slice + // beforehand results in significant CPU savings. + targets := targetMapToSlice(targetMap) + + ƒ := func(s *stream, r float64) float64 { + var m = math.MaxFloat64 + var f float64 + for _, t := range targets { + if t.quantile*s.n <= r { + f = (2 * t.epsilon * r) / t.quantile + } else { + f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile) + } + if f < m { + m = f + } + } + return m + } + return newStream(ƒ) +} + +type target struct { + quantile float64 + epsilon float64 +} + +func targetMapToSlice(targetMap map[float64]float64) []target { + targets := make([]target, 0, len(targetMap)) + + for quantile, epsilon := range targetMap { + t := target{ + quantile: quantile, + epsilon: epsilon, + } + targets = append(targets, t) + } + + return targets +} + +// Stream computes quantiles for a stream of float64s. It is not thread-safe by +// design. Take care when using across multiple goroutines. +type Stream struct { + *stream + b Samples + sorted bool +} + +func newStream(ƒ invariant) *Stream { + x := &stream{ƒ: ƒ} + return &Stream{x, make(Samples, 0, 500), true} +} + +// Insert inserts v into the stream. +func (s *Stream) Insert(v float64) { + s.insert(Sample{Value: v, Width: 1}) +} + +func (s *Stream) insert(sample Sample) { + s.b = append(s.b, sample) + s.sorted = false + if len(s.b) == cap(s.b) { + s.flush() + } +} + +// Query returns the computed qth percentiles value. If s was created with +// NewTargeted, and q is not in the set of quantiles provided a priori, Query +// will return an unspecified result. +func (s *Stream) Query(q float64) float64 { + if !s.flushed() { + // Fast path when there hasn't been enough data for a flush; + // this also yields better accuracy for small sets of data. + l := len(s.b) + if l == 0 { + return 0 + } + i := int(math.Ceil(float64(l) * q)) + if i > 0 { + i -= 1 + } + s.maybeSort() + return s.b[i].Value + } + s.flush() + return s.stream.query(q) +} + +// Merge merges samples into the underlying streams samples. This is handy when +// merging multiple streams from separate threads, database shards, etc. +// +// ATTENTION: This method is broken and does not yield correct results. The +// underlying algorithm is not capable of merging streams correctly. +func (s *Stream) Merge(samples Samples) { + sort.Sort(samples) + s.stream.merge(samples) +} + +// Reset reinitializes and clears the list reusing the samples buffer memory. +func (s *Stream) Reset() { + s.stream.reset() + s.b = s.b[:0] +} + +// Samples returns stream samples held by s. +func (s *Stream) Samples() Samples { + if !s.flushed() { + return s.b + } + s.flush() + return s.stream.samples() +} + +// Count returns the total number of samples observed in the stream +// since initialization. 
+func (s *Stream) Count() int { + return len(s.b) + s.stream.count() +} + +func (s *Stream) flush() { + s.maybeSort() + s.stream.merge(s.b) + s.b = s.b[:0] +} + +func (s *Stream) maybeSort() { + if !s.sorted { + s.sorted = true + sort.Sort(s.b) + } +} + +func (s *Stream) flushed() bool { + return len(s.stream.l) > 0 +} + +type stream struct { + n float64 + l []Sample + ƒ invariant +} + +func (s *stream) reset() { + s.l = s.l[:0] + s.n = 0 +} + +func (s *stream) insert(v float64) { + s.merge(Samples{{v, 1, 0}}) +} + +func (s *stream) merge(samples Samples) { + // TODO(beorn7): This tries to merge not only individual samples, but + // whole summaries. The paper doesn't mention merging summaries at + // all. Unittests show that the merging is inaccurate. Find out how to + // do merges properly. + var r float64 + i := 0 + for _, sample := range samples { + for ; i < len(s.l); i++ { + c := s.l[i] + if c.Value > sample.Value { + // Insert at position i. + s.l = append(s.l, Sample{}) + copy(s.l[i+1:], s.l[i:]) + s.l[i] = Sample{ + sample.Value, + sample.Width, + math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1), + // TODO(beorn7): How to calculate delta correctly? + } + i++ + goto inserted + } + r += c.Width + } + s.l = append(s.l, Sample{sample.Value, sample.Width, 0}) + i++ + inserted: + s.n += sample.Width + r += sample.Width + } + s.compress() +} + +func (s *stream) count() int { + return int(s.n) +} + +func (s *stream) query(q float64) float64 { + t := math.Ceil(q * s.n) + t += math.Ceil(s.ƒ(s, t) / 2) + p := s.l[0] + var r float64 + for _, c := range s.l[1:] { + r += p.Width + if r+c.Width+c.Delta > t { + return p.Value + } + p = c + } + return p.Value +} + +func (s *stream) compress() { + if len(s.l) < 2 { + return + } + x := s.l[len(s.l)-1] + xi := len(s.l) - 1 + r := s.n - 1 - x.Width + + for i := len(s.l) - 2; i >= 0; i-- { + c := s.l[i] + if c.Width+x.Width+x.Delta <= s.ƒ(s, r) { + x.Width += c.Width + s.l[xi] = x + // Remove element at i. + copy(s.l[i:], s.l[i+1:]) + s.l = s.l[:len(s.l)-1] + xi -= 1 + } else { + x = c + xi = i + } + r -= c.Width + } +} + +func (s *stream) samples() Samples { + samples := make(Samples, len(s.l)) + copy(samples, s.l) + return samples +} diff --git a/vendor/github.com/davecgh/go-spew/LICENSE b/vendor/github.com/davecgh/go-spew/LICENSE new file mode 100644 index 00000000..c8364161 --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/LICENSE @@ -0,0 +1,15 @@ +ISC License + +Copyright (c) 2012-2016 Dave Collins + +Permission to use, copy, modify, and distribute this software for any +purpose with or without fee is hereby granted, provided that the above +copyright notice and this permission notice appear in all copies. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES +WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF +OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
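To make the quantile API added above concrete, here is a small, self-contained usage sketch: it builds a targeted stream, inserts arbitrary illustrative observations, and queries the tracked quantiles. The target quantiles, error bounds, and input values are examples only, not taken from this change.

package main

import (
	"fmt"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the median and the 99th percentile with the given absolute errors.
	q := quantile.NewTargeted(map[float64]float64{
		0.50: 0.05,
		0.99: 0.001,
	})

	// Feed the stream; the values here are arbitrary illustration data.
	for i := 0; i < 10000; i++ {
		q.Insert(float64(i % 250))
	}

	fmt.Println("p50:", q.Query(0.50))
	fmt.Println("p99:", q.Query(0.99))
	fmt.Println("observations:", q.Count())
}

Note that, as the package's own comments state, Query for a quantile that was not supplied to NewTargeted returns an unspecified result, and Merge is documented as inaccurate.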
diff --git a/vendor/github.com/davecgh/go-spew/spew/bypass.go b/vendor/github.com/davecgh/go-spew/spew/bypass.go new file mode 100644 index 00000000..8a4a6589 --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/bypass.go @@ -0,0 +1,152 @@ +// Copyright (c) 2015-2016 Dave Collins +// +// Permission to use, copy, modify, and distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES +// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF +// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF +// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +// NOTE: Due to the following build constraints, this file will only be compiled +// when the code is not running on Google App Engine, compiled by GopherJS, and +// "-tags safe" is not added to the go build command line. The "disableunsafe" +// tag is deprecated and thus should not be used. +// +build !js,!appengine,!safe,!disableunsafe + +package spew + +import ( + "reflect" + "unsafe" +) + +const ( + // UnsafeDisabled is a build-time constant which specifies whether or + // not access to the unsafe package is available. + UnsafeDisabled = false + + // ptrSize is the size of a pointer on the current arch. + ptrSize = unsafe.Sizeof((*byte)(nil)) +) + +var ( + // offsetPtr, offsetScalar, and offsetFlag are the offsets for the + // internal reflect.Value fields. These values are valid before golang + // commit ecccf07e7f9d which changed the format. The are also valid + // after commit 82f48826c6c7 which changed the format again to mirror + // the original format. Code in the init function updates these offsets + // as necessary. + offsetPtr = uintptr(ptrSize) + offsetScalar = uintptr(0) + offsetFlag = uintptr(ptrSize * 2) + + // flagKindWidth and flagKindShift indicate various bits that the + // reflect package uses internally to track kind information. + // + // flagRO indicates whether or not the value field of a reflect.Value is + // read-only. + // + // flagIndir indicates whether the value field of a reflect.Value is + // the actual data or a pointer to the data. + // + // These values are valid before golang commit 90a7c3c86944 which + // changed their positions. Code in the init function updates these + // flags as necessary. + flagKindWidth = uintptr(5) + flagKindShift = uintptr(flagKindWidth - 1) + flagRO = uintptr(1 << 0) + flagIndir = uintptr(1 << 1) +) + +func init() { + // Older versions of reflect.Value stored small integers directly in the + // ptr field (which is named val in the older versions). Versions + // between commits ecccf07e7f9d and 82f48826c6c7 added a new field named + // scalar for this purpose which unfortunately came before the flag + // field, so the offset of the flag field is different for those + // versions. + // + // This code constructs a new reflect.Value from a known small integer + // and checks if the size of the reflect.Value struct indicates it has + // the scalar field. When it does, the offsets are updated accordingly. 
+	vv := reflect.ValueOf(0xf00)
+	if unsafe.Sizeof(vv) == (ptrSize * 4) {
+		offsetScalar = ptrSize * 2
+		offsetFlag = ptrSize * 3
+	}
+
+	// Commit 90a7c3c86944 changed the flag positions such that the low
+	// order bits are the kind. This code extracts the kind from the flags
+	// field and ensures it's the correct type. When it's not, the flag
+	// order has been changed to the newer format, so the flags are updated
+	// accordingly.
+	upf := unsafe.Pointer(uintptr(unsafe.Pointer(&vv)) + offsetFlag)
+	upfv := *(*uintptr)(upf)
+	flagKindMask := uintptr((1<<flagKindWidth - 1) << flagKindShift)
+	if (upfv&flagKindMask)>>flagKindShift != uintptr(reflect.Int) {
+		flagKindShift = 0
+		flagRO = 1 << 5
+		flagIndir = 1 << 6
+
+		// Commit adf9b30e5594 modified the flags to separate the
+		// flagRO flag into two bits which specifies whether or not the
+		// field is embedded. This causes flagIndir to move over a bit
+		// and means that flagRO is the combination of either of the
+		// original flagRO bit and the new bit.
+		//
+		// This code detects the change by extracting what used to be
+		// the indirect bit to ensure it's set. When it's not, the flag
+		// order has been changed to the newer format, so the flags are
+		// updated accordingly.
+		if upfv&flagIndir == 0 {
+			flagRO = 3 << 5
+			flagIndir = 1 << 7
+		}
+	}
+}
+
+// unsafeReflectValue converts the passed reflect.Value into a one that bypasses
+// the typical safety restrictions preventing access to unaddressable and
+// unexported data. It works by digging the raw pointer to the underlying
+// value out of the protected value and generating a new unprotected (unsafe)
+// reflect.Value to it.
+//
+// This allows us to check for implementations of the Stringer and error
+// interfaces to be used for pretty printing ordinarily unaddressable and
+// inaccessible values such as unexported struct fields.
+func unsafeReflectValue(v reflect.Value) (rv reflect.Value) {
+	indirects := 1
+	vt := v.Type()
+	upv := unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetPtr)
+	rvf := *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetFlag))
+	if rvf&flagIndir != 0 {
+		vt = reflect.PtrTo(v.Type())
+		indirects++
+	} else if offsetScalar != 0 {
+		// The value is in the scalar field when it's not one of the
+		// reference types.
+		switch vt.Kind() {
+		case reflect.Uintptr:
+		case reflect.Chan:
+		case reflect.Func:
+		case reflect.Map:
+		case reflect.Ptr:
+		case reflect.UnsafePointer:
+		default:
+			upv = unsafe.Pointer(uintptr(unsafe.Pointer(&v)) +
+				offsetScalar)
+		}
+	}
+
+	pv := reflect.NewAt(vt, upv)
+	rv = pv
+	for i := 0; i < indirects; i++ {
+		rv = rv.Elem()
+	}
+	return rv
+}
diff --git a/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go
new file mode 100644
index 00000000..1fe3cf3d
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go
@@ -0,0 +1,38 @@
+// Copyright (c) 2015-2016 Dave Collins
+//
+// Permission to use, copy, modify, and distribute this software for any
+// purpose with or without fee is hereby granted, provided that the above
+// copyright notice and this permission notice appear in all copies.
+//
+// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+// MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF +// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +// NOTE: Due to the following build constraints, this file will only be compiled +// when the code is running on Google App Engine, compiled by GopherJS, or +// "-tags safe" is added to the go build command line. The "disableunsafe" +// tag is deprecated and thus should not be used. +// +build js appengine safe disableunsafe + +package spew + +import "reflect" + +const ( + // UnsafeDisabled is a build-time constant which specifies whether or + // not access to the unsafe package is available. + UnsafeDisabled = true +) + +// unsafeReflectValue typically converts the passed reflect.Value into a one +// that bypasses the typical safety restrictions preventing access to +// unaddressable and unexported data. However, doing this relies on access to +// the unsafe package. This is a stub version which simply returns the passed +// reflect.Value when the unsafe package is not available. +func unsafeReflectValue(v reflect.Value) reflect.Value { + return v +} diff --git a/vendor/github.com/davecgh/go-spew/spew/common.go b/vendor/github.com/davecgh/go-spew/spew/common.go new file mode 100644 index 00000000..7c519ff4 --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/common.go @@ -0,0 +1,341 @@ +/* + * Copyright (c) 2013-2016 Dave Collins + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +package spew + +import ( + "bytes" + "fmt" + "io" + "reflect" + "sort" + "strconv" +) + +// Some constants in the form of bytes to avoid string overhead. This mirrors +// the technique used in the fmt package. 
+var (
+	panicBytes            = []byte("(PANIC=")
+	plusBytes             = []byte("+")
+	iBytes                = []byte("i")
+	trueBytes             = []byte("true")
+	falseBytes            = []byte("false")
+	interfaceBytes        = []byte("(interface {})")
+	commaNewlineBytes     = []byte(",\n")
+	newlineBytes          = []byte("\n")
+	openBraceBytes        = []byte("{")
+	openBraceNewlineBytes = []byte("{\n")
+	closeBraceBytes       = []byte("}")
+	asteriskBytes         = []byte("*")
+	colonBytes            = []byte(":")
+	colonSpaceBytes       = []byte(": ")
+	openParenBytes        = []byte("(")
+	closeParenBytes       = []byte(")")
+	spaceBytes            = []byte(" ")
+	pointerChainBytes     = []byte("->")
+	nilAngleBytes         = []byte("<nil>")
+	maxNewlineBytes       = []byte("<max depth reached>\n")
+	maxShortBytes         = []byte("<max>")
+	circularBytes         = []byte("<already shown>")
+	circularShortBytes    = []byte("<shown>")
+	invalidAngleBytes     = []byte("<invalid>")
+	openBracketBytes      = []byte("[")
+	closeBracketBytes     = []byte("]")
+	percentBytes          = []byte("%")
+	precisionBytes        = []byte(".")
+	openAngleBytes        = []byte("<")
+	closeAngleBytes       = []byte(">")
+	openMapBytes          = []byte("map[")
+	closeMapBytes         = []byte("]")
+	lenEqualsBytes        = []byte("len=")
+	capEqualsBytes        = []byte("cap=")
+)
+
+// hexDigits is used to map a decimal value to a hex digit.
+var hexDigits = "0123456789abcdef"
+
+// catchPanic handles any panics that might occur during the handleMethods
+// calls.
+func catchPanic(w io.Writer, v reflect.Value) {
+	if err := recover(); err != nil {
+		w.Write(panicBytes)
+		fmt.Fprintf(w, "%v", err)
+		w.Write(closeParenBytes)
+	}
+}
+
+// handleMethods attempts to call the Error and String methods on the underlying
+// type the passed reflect.Value represents and outputes the result to Writer w.
+//
+// It handles panics in any called methods by catching and displaying the error
+// as the formatted value.
+func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
+	// We need an interface to check if the type implements the error or
+	// Stringer interface. However, the reflect package won't give us an
+	// interface on certain things like unexported struct fields in order
+	// to enforce visibility rules. We use unsafe, when it's available,
+	// to bypass these restrictions since this package does not mutate the
+	// values.
+	if !v.CanInterface() {
+		if UnsafeDisabled {
+			return false
+		}
+
+		v = unsafeReflectValue(v)
+	}
+
+	// Choose whether or not to do error and Stringer interface lookups against
+	// the base type or a pointer to the base type depending on settings.
+	// Technically calling one of these methods with a pointer receiver can
+	// mutate the value, however, types which choose to satisify an error or
+	// Stringer interface with a pointer receiver should not be mutating their
+	// state inside these interface methods.
+	if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
+		v = unsafeReflectValue(v)
+	}
+	if v.CanAddr() {
+		v = v.Addr()
+	}
+
+	// Is it an error or Stringer?
+	switch iface := v.Interface().(type) {
+	case error:
+		defer catchPanic(w, v)
+		if cs.ContinueOnMethod {
+			w.Write(openParenBytes)
+			w.Write([]byte(iface.Error()))
+			w.Write(closeParenBytes)
+			w.Write(spaceBytes)
+			return false
+		}
+
+		w.Write([]byte(iface.Error()))
+		return true
+
+	case fmt.Stringer:
+		defer catchPanic(w, v)
+		if cs.ContinueOnMethod {
+			w.Write(openParenBytes)
+			w.Write([]byte(iface.String()))
+			w.Write(closeParenBytes)
+			w.Write(spaceBytes)
+			return false
+		}
+		w.Write([]byte(iface.String()))
+		return true
+	}
+	return false
+}
+
+// printBool outputs a boolean value as true or false to Writer w.
+func printBool(w io.Writer, val bool) { + if val { + w.Write(trueBytes) + } else { + w.Write(falseBytes) + } +} + +// printInt outputs a signed integer value to Writer w. +func printInt(w io.Writer, val int64, base int) { + w.Write([]byte(strconv.FormatInt(val, base))) +} + +// printUint outputs an unsigned integer value to Writer w. +func printUint(w io.Writer, val uint64, base int) { + w.Write([]byte(strconv.FormatUint(val, base))) +} + +// printFloat outputs a floating point value using the specified precision, +// which is expected to be 32 or 64bit, to Writer w. +func printFloat(w io.Writer, val float64, precision int) { + w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision))) +} + +// printComplex outputs a complex value using the specified float precision +// for the real and imaginary parts to Writer w. +func printComplex(w io.Writer, c complex128, floatPrecision int) { + r := real(c) + w.Write(openParenBytes) + w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision))) + i := imag(c) + if i >= 0 { + w.Write(plusBytes) + } + w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision))) + w.Write(iBytes) + w.Write(closeParenBytes) +} + +// printHexPtr outputs a uintptr formatted as hexidecimal with a leading '0x' +// prefix to Writer w. +func printHexPtr(w io.Writer, p uintptr) { + // Null pointer. + num := uint64(p) + if num == 0 { + w.Write(nilAngleBytes) + return + } + + // Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix + buf := make([]byte, 18) + + // It's simpler to construct the hex string right to left. + base := uint64(16) + i := len(buf) - 1 + for num >= base { + buf[i] = hexDigits[num%base] + num /= base + i-- + } + buf[i] = hexDigits[num] + + // Add '0x' prefix. + i-- + buf[i] = 'x' + i-- + buf[i] = '0' + + // Strip unused leading bytes. + buf = buf[i:] + w.Write(buf) +} + +// valuesSorter implements sort.Interface to allow a slice of reflect.Value +// elements to be sorted. +type valuesSorter struct { + values []reflect.Value + strings []string // either nil or same len and values + cs *ConfigState +} + +// newValuesSorter initializes a valuesSorter instance, which holds a set of +// surrogate keys on which the data should be sorted. It uses flags in +// ConfigState to decide if and how to populate those surrogate keys. +func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface { + vs := &valuesSorter{values: values, cs: cs} + if canSortSimply(vs.values[0].Kind()) { + return vs + } + if !cs.DisableMethods { + vs.strings = make([]string, len(values)) + for i := range vs.values { + b := bytes.Buffer{} + if !handleMethods(cs, &b, vs.values[i]) { + vs.strings = nil + break + } + vs.strings[i] = b.String() + } + } + if vs.strings == nil && cs.SpewKeys { + vs.strings = make([]string, len(values)) + for i := range vs.values { + vs.strings[i] = Sprintf("%#v", vs.values[i].Interface()) + } + } + return vs +} + +// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted +// directly, or whether it should be considered for sorting by surrogate keys +// (if the ConfigState allows it). +func canSortSimply(kind reflect.Kind) bool { + // This switch parallels valueSortLess, except for the default case. 
+ switch kind { + case reflect.Bool: + return true + case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int: + return true + case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint: + return true + case reflect.Float32, reflect.Float64: + return true + case reflect.String: + return true + case reflect.Uintptr: + return true + case reflect.Array: + return true + } + return false +} + +// Len returns the number of values in the slice. It is part of the +// sort.Interface implementation. +func (s *valuesSorter) Len() int { + return len(s.values) +} + +// Swap swaps the values at the passed indices. It is part of the +// sort.Interface implementation. +func (s *valuesSorter) Swap(i, j int) { + s.values[i], s.values[j] = s.values[j], s.values[i] + if s.strings != nil { + s.strings[i], s.strings[j] = s.strings[j], s.strings[i] + } +} + +// valueSortLess returns whether the first value should sort before the second +// value. It is used by valueSorter.Less as part of the sort.Interface +// implementation. +func valueSortLess(a, b reflect.Value) bool { + switch a.Kind() { + case reflect.Bool: + return !a.Bool() && b.Bool() + case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int: + return a.Int() < b.Int() + case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint: + return a.Uint() < b.Uint() + case reflect.Float32, reflect.Float64: + return a.Float() < b.Float() + case reflect.String: + return a.String() < b.String() + case reflect.Uintptr: + return a.Uint() < b.Uint() + case reflect.Array: + // Compare the contents of both arrays. + l := a.Len() + for i := 0; i < l; i++ { + av := a.Index(i) + bv := b.Index(i) + if av.Interface() == bv.Interface() { + continue + } + return valueSortLess(av, bv) + } + } + return a.String() < b.String() +} + +// Less returns whether the value at index i should sort before the +// value at index j. It is part of the sort.Interface implementation. +func (s *valuesSorter) Less(i, j int) bool { + if s.strings == nil { + return valueSortLess(s.values[i], s.values[j]) + } + return s.strings[i] < s.strings[j] +} + +// sortValues is a sort function that handles both native types and any type that +// can be converted to error or Stringer. Other inputs are sorted according to +// their Value.String() value to ensure display stability. +func sortValues(values []reflect.Value, cs *ConfigState) { + if len(values) == 0 { + return + } + sort.Sort(newValuesSorter(values, cs)) +} diff --git a/vendor/github.com/davecgh/go-spew/spew/config.go b/vendor/github.com/davecgh/go-spew/spew/config.go new file mode 100644 index 00000000..2e3d22f3 --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/config.go @@ -0,0 +1,306 @@ +/* + * Copyright (c) 2013-2016 Dave Collins + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +package spew + +import ( + "bytes" + "fmt" + "io" + "os" +) + +// ConfigState houses the configuration options used by spew to format and +// display values. There is a global instance, Config, that is used to control +// all top-level Formatter and Dump functionality. Each ConfigState instance +// provides methods equivalent to the top-level functions. +// +// The zero value for ConfigState provides no indentation. You would typically +// want to set it to a space or a tab. +// +// Alternatively, you can use NewDefaultConfig to get a ConfigState instance +// with default settings. See the documentation of NewDefaultConfig for default +// values. +type ConfigState struct { + // Indent specifies the string to use for each indentation level. The + // global config instance that all top-level functions use set this to a + // single space by default. If you would like more indentation, you might + // set this to a tab with "\t" or perhaps two spaces with " ". + Indent string + + // MaxDepth controls the maximum number of levels to descend into nested + // data structures. The default, 0, means there is no limit. + // + // NOTE: Circular data structures are properly detected, so it is not + // necessary to set this value unless you specifically want to limit deeply + // nested data structures. + MaxDepth int + + // DisableMethods specifies whether or not error and Stringer interfaces are + // invoked for types that implement them. + DisableMethods bool + + // DisablePointerMethods specifies whether or not to check for and invoke + // error and Stringer interfaces on types which only accept a pointer + // receiver when the current type is not a pointer. + // + // NOTE: This might be an unsafe action since calling one of these methods + // with a pointer receiver could technically mutate the value, however, + // in practice, types which choose to satisify an error or Stringer + // interface with a pointer receiver should not be mutating their state + // inside these interface methods. As a result, this option relies on + // access to the unsafe package, so it will not have any effect when + // running in environments without access to the unsafe package such as + // Google App Engine or with the "safe" build tag specified. + DisablePointerMethods bool + + // DisablePointerAddresses specifies whether to disable the printing of + // pointer addresses. This is useful when diffing data structures in tests. + DisablePointerAddresses bool + + // DisableCapacities specifies whether to disable the printing of capacities + // for arrays, slices, maps and channels. This is useful when diffing + // data structures in tests. + DisableCapacities bool + + // ContinueOnMethod specifies whether or not recursion should continue once + // a custom error or Stringer interface is invoked. The default, false, + // means it will print the results of invoking the custom error or Stringer + // interface and return immediately instead of continuing to recurse into + // the internals of the data type. + // + // NOTE: This flag does not have any effect if method invocation is disabled + // via the DisableMethods or DisablePointerMethods options. 
+ ContinueOnMethod bool + + // SortKeys specifies map keys should be sorted before being printed. Use + // this to have a more deterministic, diffable output. Note that only + // native types (bool, int, uint, floats, uintptr and string) and types + // that support the error or Stringer interfaces (if methods are + // enabled) are supported, with other types sorted according to the + // reflect.Value.String() output which guarantees display stability. + SortKeys bool + + // SpewKeys specifies that, as a last resort attempt, map keys should + // be spewed to strings and sorted by those strings. This is only + // considered if SortKeys is true. + SpewKeys bool +} + +// Config is the active configuration of the top-level functions. +// The configuration can be changed by modifying the contents of spew.Config. +var Config = ConfigState{Indent: " "} + +// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the formatted string as a value that satisfies error. See NewFormatter +// for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) { + return fmt.Errorf(format, c.convertArgs(a)...) +} + +// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) { + return fmt.Fprint(w, c.convertArgs(a)...) +} + +// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) { + return fmt.Fprintf(w, format, c.convertArgs(a)...) +} + +// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it +// passed with a Formatter interface returned by c.NewFormatter. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) { + return fmt.Fprintln(w, c.convertArgs(a)...) +} + +// Print is a wrapper for fmt.Print that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Print(c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Print(a ...interface{}) (n int, err error) { + return fmt.Print(c.convertArgs(a)...) +} + +// Printf is a wrapper for fmt.Printf that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. 
It returns +// the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) { + return fmt.Printf(format, c.convertArgs(a)...) +} + +// Println is a wrapper for fmt.Println that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Println(c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Println(a ...interface{}) (n int, err error) { + return fmt.Println(c.convertArgs(a)...) +} + +// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the resulting string. See NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Sprint(a ...interface{}) string { + return fmt.Sprint(c.convertArgs(a)...) +} + +// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were +// passed with a Formatter interface returned by c.NewFormatter. It returns +// the resulting string. See NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Sprintf(format string, a ...interface{}) string { + return fmt.Sprintf(format, c.convertArgs(a)...) +} + +// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it +// were passed with a Formatter interface returned by c.NewFormatter. It +// returns the resulting string. See NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b)) +func (c *ConfigState) Sprintln(a ...interface{}) string { + return fmt.Sprintln(c.convertArgs(a)...) +} + +/* +NewFormatter returns a custom formatter that satisfies the fmt.Formatter +interface. As a result, it integrates cleanly with standard fmt package +printing functions. The formatter is useful for inline printing of smaller data +types similar to the standard %v format specifier. + +The custom formatter only responds to the %v (most compact), %+v (adds pointer +addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb +combinations. Any other verbs such as %x and %q will be sent to the the +standard fmt package for formatting. In addition, the custom formatter ignores +the width and precision arguments (however they will still work on the format +specifiers not handled by the custom formatter). + +Typically this function shouldn't be called directly. It is much easier to make +use of the custom formatter by calling one of the convenience functions such as +c.Printf, c.Println, or c.Printf. +*/ +func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter { + return newFormatter(c, v) +} + +// Fdump formats and displays the passed arguments to io.Writer w. It formats +// exactly the same as Dump. +func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) { + fdump(c, w, a...) 
+} + +/* +Dump displays the passed parameters to standard out with newlines, customizable +indentation, and additional debug information such as complete types and all +pointer addresses used to indirect to the final value. It provides the +following features over the built-in printing facilities provided by the fmt +package: + + * Pointers are dereferenced and followed + * Circular data structures are detected and handled properly + * Custom Stringer/error interfaces are optionally invoked, including + on unexported types + * Custom types which only implement the Stringer/error interfaces via + a pointer receiver are optionally invoked when passing non-pointer + variables + * Byte arrays and slices are dumped like the hexdump -C command which + includes offsets, byte values in hex, and ASCII output + +The configuration options are controlled by modifying the public members +of c. See ConfigState for options documentation. + +See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to +get the formatted result as a string. +*/ +func (c *ConfigState) Dump(a ...interface{}) { + fdump(c, os.Stdout, a...) +} + +// Sdump returns a string with the passed arguments formatted exactly the same +// as Dump. +func (c *ConfigState) Sdump(a ...interface{}) string { + var buf bytes.Buffer + fdump(c, &buf, a...) + return buf.String() +} + +// convertArgs accepts a slice of arguments and returns a slice of the same +// length with each argument converted to a spew Formatter interface using +// the ConfigState associated with s. +func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) { + formatters = make([]interface{}, len(args)) + for index, arg := range args { + formatters[index] = newFormatter(c, arg) + } + return formatters +} + +// NewDefaultConfig returns a ConfigState with the following default settings. +// +// Indent: " " +// MaxDepth: 0 +// DisableMethods: false +// DisablePointerMethods: false +// ContinueOnMethod: false +// SortKeys: false +func NewDefaultConfig() *ConfigState { + return &ConfigState{Indent: " "} +} diff --git a/vendor/github.com/davecgh/go-spew/spew/doc.go b/vendor/github.com/davecgh/go-spew/spew/doc.go new file mode 100644 index 00000000..aacaac6f --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/doc.go @@ -0,0 +1,211 @@ +/* + * Copyright (c) 2013-2016 Dave Collins + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +/* +Package spew implements a deep pretty printer for Go data structures to aid in +debugging. 
+ +A quick overview of the additional features spew provides over the built-in +printing facilities for Go data types are as follows: + + * Pointers are dereferenced and followed + * Circular data structures are detected and handled properly + * Custom Stringer/error interfaces are optionally invoked, including + on unexported types + * Custom types which only implement the Stringer/error interfaces via + a pointer receiver are optionally invoked when passing non-pointer + variables + * Byte arrays and slices are dumped like the hexdump -C command which + includes offsets, byte values in hex, and ASCII output (only when using + Dump style) + +There are two different approaches spew allows for dumping Go data structures: + + * Dump style which prints with newlines, customizable indentation, + and additional debug information such as types and all pointer addresses + used to indirect to the final value + * A custom Formatter interface that integrates cleanly with the standard fmt + package and replaces %v, %+v, %#v, and %#+v to provide inline printing + similar to the default %v while providing the additional functionality + outlined above and passing unsupported format verbs such as %x and %q + along to fmt + +Quick Start + +This section demonstrates how to quickly get started with spew. See the +sections below for further details on formatting and configuration options. + +To dump a variable with full newlines, indentation, type, and pointer +information use Dump, Fdump, or Sdump: + spew.Dump(myVar1, myVar2, ...) + spew.Fdump(someWriter, myVar1, myVar2, ...) + str := spew.Sdump(myVar1, myVar2, ...) + +Alternatively, if you would prefer to use format strings with a compacted inline +printing style, use the convenience wrappers Printf, Fprintf, etc with +%v (most compact), %+v (adds pointer addresses), %#v (adds types), or +%#+v (adds types and pointer addresses): + spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2) + spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4) + spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2) + spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4) + +Configuration Options + +Configuration of spew is handled by fields in the ConfigState type. For +convenience, all of the top-level functions use a global state available +via the spew.Config global. + +It is also possible to create a ConfigState instance that provides methods +equivalent to the top-level functions. This allows concurrent configuration +options. See the ConfigState documentation for more details. + +The following configuration options are available: + * Indent + String to use for each indentation level for Dump functions. + It is a single space by default. A popular alternative is "\t". + + * MaxDepth + Maximum number of levels to descend into nested data structures. + There is no limit by default. + + * DisableMethods + Disables invocation of error and Stringer interface methods. + Method invocation is enabled by default. + + * DisablePointerMethods + Disables invocation of error and Stringer interface methods on types + which only accept pointer receivers from non-pointer variables. + Pointer method invocation is enabled by default. + + * DisablePointerAddresses + DisablePointerAddresses specifies whether to disable the printing of + pointer addresses. This is useful when diffing data structures in tests. + + * DisableCapacities + DisableCapacities specifies whether to disable the printing of + capacities for arrays, slices, maps and channels. 
This is useful when + diffing data structures in tests. + + * ContinueOnMethod + Enables recursion into types after invoking error and Stringer interface + methods. Recursion after method invocation is disabled by default. + + * SortKeys + Specifies map keys should be sorted before being printed. Use + this to have a more deterministic, diffable output. Note that + only native types (bool, int, uint, floats, uintptr and string) + and types which implement error or Stringer interfaces are + supported with other types sorted according to the + reflect.Value.String() output which guarantees display + stability. Natural map order is used by default. + + * SpewKeys + Specifies that, as a last resort attempt, map keys should be + spewed to strings and sorted by those strings. This is only + considered if SortKeys is true. + +Dump Usage + +Simply call spew.Dump with a list of variables you want to dump: + + spew.Dump(myVar1, myVar2, ...) + +You may also call spew.Fdump if you would prefer to output to an arbitrary +io.Writer. For example, to dump to standard error: + + spew.Fdump(os.Stderr, myVar1, myVar2, ...) + +A third option is to call spew.Sdump to get the formatted output as a string: + + str := spew.Sdump(myVar1, myVar2, ...) + +Sample Dump Output + +See the Dump example for details on the setup of the types and variables being +shown here. + + (main.Foo) { + unexportedField: (*main.Bar)(0xf84002e210)({ + flag: (main.Flag) flagTwo, + data: (uintptr) + }), + ExportedField: (map[interface {}]interface {}) (len=1) { + (string) (len=3) "one": (bool) true + } + } + +Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C +command as shown. + ([]uint8) (len=32 cap=32) { + 00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... | + 00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0| + 00000020 31 32 |12| + } + +Custom Formatter + +Spew provides a custom formatter that implements the fmt.Formatter interface +so that it integrates cleanly with standard fmt package printing functions. The +formatter is useful for inline printing of smaller data types similar to the +standard %v format specifier. + +The custom formatter only responds to the %v (most compact), %+v (adds pointer +addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb +combinations. Any other verbs such as %x and %q will be sent to the the +standard fmt package for formatting. In addition, the custom formatter ignores +the width and precision arguments (however they will still work on the format +specifiers not handled by the custom formatter). + +Custom Formatter Usage + +The simplest way to make use of the spew custom formatter is to call one of the +convenience functions such as spew.Printf, spew.Println, or spew.Printf. The +functions have syntax you are most likely already familiar with: + + spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2) + spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4) + spew.Println(myVar, myVar2) + spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2) + spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4) + +See the Index for the full list convenience functions. 
+ +Sample Formatter Output + +Double pointer to a uint8: + %v: <**>5 + %+v: <**>(0xf8400420d0->0xf8400420c8)5 + %#v: (**uint8)5 + %#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5 + +Pointer to circular struct with a uint8 field and a pointer to itself: + %v: <*>{1 <*>} + %+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)} + %#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)} + %#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)} + +See the Printf example for details on the setup of variables being shown +here. + +Errors + +Since it is possible for custom Stringer/error interfaces to panic, spew +detects them and handles them internally by printing the panic information +inline with the output. Since spew is intended to provide deep pretty printing +capabilities on structures, it intentionally does not return any errors. +*/ +package spew diff --git a/vendor/github.com/davecgh/go-spew/spew/dump.go b/vendor/github.com/davecgh/go-spew/spew/dump.go new file mode 100644 index 00000000..df1d582a --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/dump.go @@ -0,0 +1,509 @@ +/* + * Copyright (c) 2013-2016 Dave Collins + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +package spew + +import ( + "bytes" + "encoding/hex" + "fmt" + "io" + "os" + "reflect" + "regexp" + "strconv" + "strings" +) + +var ( + // uint8Type is a reflect.Type representing a uint8. It is used to + // convert cgo types to uint8 slices for hexdumping. + uint8Type = reflect.TypeOf(uint8(0)) + + // cCharRE is a regular expression that matches a cgo char. + // It is used to detect character arrays to hexdump them. + cCharRE = regexp.MustCompile("^.*\\._Ctype_char$") + + // cUnsignedCharRE is a regular expression that matches a cgo unsigned + // char. It is used to detect unsigned character arrays to hexdump + // them. + cUnsignedCharRE = regexp.MustCompile("^.*\\._Ctype_unsignedchar$") + + // cUint8tCharRE is a regular expression that matches a cgo uint8_t. + // It is used to detect uint8_t arrays to hexdump them. + cUint8tCharRE = regexp.MustCompile("^.*\\._Ctype_uint8_t$") +) + +// dumpState contains information about the state of a dump operation. +type dumpState struct { + w io.Writer + depth int + pointers map[uintptr]int + ignoreNextType bool + ignoreNextIndent bool + cs *ConfigState +} + +// indent performs indentation according to the depth level and cs.Indent +// option. +func (d *dumpState) indent() { + if d.ignoreNextIndent { + d.ignoreNextIndent = false + return + } + d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth)) +} + +// unpackValue returns values inside of non-nil interfaces when possible. +// This is useful for data types like structs, arrays, slices, and maps which +// can contain varying types packed inside an interface. 
+func (d *dumpState) unpackValue(v reflect.Value) reflect.Value { + if v.Kind() == reflect.Interface && !v.IsNil() { + v = v.Elem() + } + return v +} + +// dumpPtr handles formatting of pointers by indirecting them as necessary. +func (d *dumpState) dumpPtr(v reflect.Value) { + // Remove pointers at or below the current depth from map used to detect + // circular refs. + for k, depth := range d.pointers { + if depth >= d.depth { + delete(d.pointers, k) + } + } + + // Keep list of all dereferenced pointers to show later. + pointerChain := make([]uintptr, 0) + + // Figure out how many levels of indirection there are by dereferencing + // pointers and unpacking interfaces down the chain while detecting circular + // references. + nilFound := false + cycleFound := false + indirects := 0 + ve := v + for ve.Kind() == reflect.Ptr { + if ve.IsNil() { + nilFound = true + break + } + indirects++ + addr := ve.Pointer() + pointerChain = append(pointerChain, addr) + if pd, ok := d.pointers[addr]; ok && pd < d.depth { + cycleFound = true + indirects-- + break + } + d.pointers[addr] = d.depth + + ve = ve.Elem() + if ve.Kind() == reflect.Interface { + if ve.IsNil() { + nilFound = true + break + } + ve = ve.Elem() + } + } + + // Display type information. + d.w.Write(openParenBytes) + d.w.Write(bytes.Repeat(asteriskBytes, indirects)) + d.w.Write([]byte(ve.Type().String())) + d.w.Write(closeParenBytes) + + // Display pointer information. + if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 { + d.w.Write(openParenBytes) + for i, addr := range pointerChain { + if i > 0 { + d.w.Write(pointerChainBytes) + } + printHexPtr(d.w, addr) + } + d.w.Write(closeParenBytes) + } + + // Display dereferenced value. + d.w.Write(openParenBytes) + switch { + case nilFound == true: + d.w.Write(nilAngleBytes) + + case cycleFound == true: + d.w.Write(circularBytes) + + default: + d.ignoreNextType = true + d.dump(ve) + } + d.w.Write(closeParenBytes) +} + +// dumpSlice handles formatting of arrays and slices. Byte (uint8 under +// reflection) arrays and slices are dumped in hexdump -C fashion. +func (d *dumpState) dumpSlice(v reflect.Value) { + // Determine whether this type should be hex dumped or not. Also, + // for types which should be hexdumped, try to use the underlying data + // first, then fall back to trying to convert them to a uint8 slice. + var buf []uint8 + doConvert := false + doHexDump := false + numEntries := v.Len() + if numEntries > 0 { + vt := v.Index(0).Type() + vts := vt.String() + switch { + // C types that need to be converted. + case cCharRE.MatchString(vts): + fallthrough + case cUnsignedCharRE.MatchString(vts): + fallthrough + case cUint8tCharRE.MatchString(vts): + doConvert = true + + // Try to use existing uint8 slices and fall back to converting + // and copying if that fails. + case vt.Kind() == reflect.Uint8: + // We need an addressable interface to convert the type + // to a byte slice. However, the reflect package won't + // give us an interface on certain things like + // unexported struct fields in order to enforce + // visibility rules. We use unsafe, when available, to + // bypass these restrictions since this package does not + // mutate the values. + vs := v + if !vs.CanInterface() || !vs.CanAddr() { + vs = unsafeReflectValue(vs) + } + if !UnsafeDisabled { + vs = vs.Slice(0, numEntries) + + // Use the existing uint8 slice if it can be + // type asserted. 
+ iface := vs.Interface() + if slice, ok := iface.([]uint8); ok { + buf = slice + doHexDump = true + break + } + } + + // The underlying data needs to be converted if it can't + // be type asserted to a uint8 slice. + doConvert = true + } + + // Copy and convert the underlying type if needed. + if doConvert && vt.ConvertibleTo(uint8Type) { + // Convert and copy each element into a uint8 byte + // slice. + buf = make([]uint8, numEntries) + for i := 0; i < numEntries; i++ { + vv := v.Index(i) + buf[i] = uint8(vv.Convert(uint8Type).Uint()) + } + doHexDump = true + } + } + + // Hexdump the entire slice as needed. + if doHexDump { + indent := strings.Repeat(d.cs.Indent, d.depth) + str := indent + hex.Dump(buf) + str = strings.Replace(str, "\n", "\n"+indent, -1) + str = strings.TrimRight(str, d.cs.Indent) + d.w.Write([]byte(str)) + return + } + + // Recursively call dump for each item. + for i := 0; i < numEntries; i++ { + d.dump(d.unpackValue(v.Index(i))) + if i < (numEntries - 1) { + d.w.Write(commaNewlineBytes) + } else { + d.w.Write(newlineBytes) + } + } +} + +// dump is the main workhorse for dumping a value. It uses the passed reflect +// value to figure out what kind of object we are dealing with and formats it +// appropriately. It is a recursive function, however circular data structures +// are detected and handled properly. +func (d *dumpState) dump(v reflect.Value) { + // Handle invalid reflect values immediately. + kind := v.Kind() + if kind == reflect.Invalid { + d.w.Write(invalidAngleBytes) + return + } + + // Handle pointers specially. + if kind == reflect.Ptr { + d.indent() + d.dumpPtr(v) + return + } + + // Print type information unless already handled elsewhere. + if !d.ignoreNextType { + d.indent() + d.w.Write(openParenBytes) + d.w.Write([]byte(v.Type().String())) + d.w.Write(closeParenBytes) + d.w.Write(spaceBytes) + } + d.ignoreNextType = false + + // Display length and capacity if the built-in len and cap functions + // work with the value's kind and the len/cap itself is non-zero. + valueLen, valueCap := 0, 0 + switch v.Kind() { + case reflect.Array, reflect.Slice, reflect.Chan: + valueLen, valueCap = v.Len(), v.Cap() + case reflect.Map, reflect.String: + valueLen = v.Len() + } + if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 { + d.w.Write(openParenBytes) + if valueLen != 0 { + d.w.Write(lenEqualsBytes) + printInt(d.w, int64(valueLen), 10) + } + if !d.cs.DisableCapacities && valueCap != 0 { + if valueLen != 0 { + d.w.Write(spaceBytes) + } + d.w.Write(capEqualsBytes) + printInt(d.w, int64(valueCap), 10) + } + d.w.Write(closeParenBytes) + d.w.Write(spaceBytes) + } + + // Call Stringer/error interfaces if they exist and the handle methods flag + // is enabled + if !d.cs.DisableMethods { + if (kind != reflect.Invalid) && (kind != reflect.Interface) { + if handled := handleMethods(d.cs, d.w, v); handled { + return + } + } + } + + switch kind { + case reflect.Invalid: + // Do nothing. We should never get here since invalid has already + // been handled above. 
+ + case reflect.Bool: + printBool(d.w, v.Bool()) + + case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int: + printInt(d.w, v.Int(), 10) + + case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint: + printUint(d.w, v.Uint(), 10) + + case reflect.Float32: + printFloat(d.w, v.Float(), 32) + + case reflect.Float64: + printFloat(d.w, v.Float(), 64) + + case reflect.Complex64: + printComplex(d.w, v.Complex(), 32) + + case reflect.Complex128: + printComplex(d.w, v.Complex(), 64) + + case reflect.Slice: + if v.IsNil() { + d.w.Write(nilAngleBytes) + break + } + fallthrough + + case reflect.Array: + d.w.Write(openBraceNewlineBytes) + d.depth++ + if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) { + d.indent() + d.w.Write(maxNewlineBytes) + } else { + d.dumpSlice(v) + } + d.depth-- + d.indent() + d.w.Write(closeBraceBytes) + + case reflect.String: + d.w.Write([]byte(strconv.Quote(v.String()))) + + case reflect.Interface: + // The only time we should get here is for nil interfaces due to + // unpackValue calls. + if v.IsNil() { + d.w.Write(nilAngleBytes) + } + + case reflect.Ptr: + // Do nothing. We should never get here since pointers have already + // been handled above. + + case reflect.Map: + // nil maps should be indicated as different than empty maps + if v.IsNil() { + d.w.Write(nilAngleBytes) + break + } + + d.w.Write(openBraceNewlineBytes) + d.depth++ + if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) { + d.indent() + d.w.Write(maxNewlineBytes) + } else { + numEntries := v.Len() + keys := v.MapKeys() + if d.cs.SortKeys { + sortValues(keys, d.cs) + } + for i, key := range keys { + d.dump(d.unpackValue(key)) + d.w.Write(colonSpaceBytes) + d.ignoreNextIndent = true + d.dump(d.unpackValue(v.MapIndex(key))) + if i < (numEntries - 1) { + d.w.Write(commaNewlineBytes) + } else { + d.w.Write(newlineBytes) + } + } + } + d.depth-- + d.indent() + d.w.Write(closeBraceBytes) + + case reflect.Struct: + d.w.Write(openBraceNewlineBytes) + d.depth++ + if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) { + d.indent() + d.w.Write(maxNewlineBytes) + } else { + vt := v.Type() + numFields := v.NumField() + for i := 0; i < numFields; i++ { + d.indent() + vtf := vt.Field(i) + d.w.Write([]byte(vtf.Name)) + d.w.Write(colonSpaceBytes) + d.ignoreNextIndent = true + d.dump(d.unpackValue(v.Field(i))) + if i < (numFields - 1) { + d.w.Write(commaNewlineBytes) + } else { + d.w.Write(newlineBytes) + } + } + } + d.depth-- + d.indent() + d.w.Write(closeBraceBytes) + + case reflect.Uintptr: + printHexPtr(d.w, uintptr(v.Uint())) + + case reflect.UnsafePointer, reflect.Chan, reflect.Func: + printHexPtr(d.w, v.Pointer()) + + // There were not any other types at the time this code was written, but + // fall back to letting the default fmt package handle it in case any new + // types are added. + default: + if v.CanInterface() { + fmt.Fprintf(d.w, "%v", v.Interface()) + } else { + fmt.Fprintf(d.w, "%v", v.String()) + } + } +} + +// fdump is a helper function to consolidate the logic from the various public +// methods which take varying writers and config states. +func fdump(cs *ConfigState, w io.Writer, a ...interface{}) { + for _, arg := range a { + if arg == nil { + w.Write(interfaceBytes) + w.Write(spaceBytes) + w.Write(nilAngleBytes) + w.Write(newlineBytes) + continue + } + + d := dumpState{w: w, cs: cs} + d.pointers = make(map[uintptr]int) + d.dump(reflect.ValueOf(arg)) + d.w.Write(newlineBytes) + } +} + +// Fdump formats and displays the passed arguments to io.Writer w. 
It formats +// exactly the same as Dump. +func Fdump(w io.Writer, a ...interface{}) { + fdump(&Config, w, a...) +} + +// Sdump returns a string with the passed arguments formatted exactly the same +// as Dump. +func Sdump(a ...interface{}) string { + var buf bytes.Buffer + fdump(&Config, &buf, a...) + return buf.String() +} + +/* +Dump displays the passed parameters to standard out with newlines, customizable +indentation, and additional debug information such as complete types and all +pointer addresses used to indirect to the final value. It provides the +following features over the built-in printing facilities provided by the fmt +package: + + * Pointers are dereferenced and followed + * Circular data structures are detected and handled properly + * Custom Stringer/error interfaces are optionally invoked, including + on unexported types + * Custom types which only implement the Stringer/error interfaces via + a pointer receiver are optionally invoked when passing non-pointer + variables + * Byte arrays and slices are dumped like the hexdump -C command which + includes offsets, byte values in hex, and ASCII output + +The configuration options are controlled by an exported package global, +spew.Config. See ConfigState for options documentation. + +See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to +get the formatted result as a string. +*/ +func Dump(a ...interface{}) { + fdump(&Config, os.Stdout, a...) +} diff --git a/vendor/github.com/davecgh/go-spew/spew/format.go b/vendor/github.com/davecgh/go-spew/spew/format.go new file mode 100644 index 00000000..c49875ba --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/format.go @@ -0,0 +1,419 @@ +/* + * Copyright (c) 2013-2016 Dave Collins + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +package spew + +import ( + "bytes" + "fmt" + "reflect" + "strconv" + "strings" +) + +// supportedFlags is a list of all the character flags supported by fmt package. +const supportedFlags = "0-+# " + +// formatState implements the fmt.Formatter interface and contains information +// about the state of a formatting operation. The NewFormatter function can +// be used to get a new Formatter which can be used directly as arguments +// in standard fmt package printing calls. +type formatState struct { + value interface{} + fs fmt.State + depth int + pointers map[uintptr]int + ignoreNextType bool + cs *ConfigState +} + +// buildDefaultFormat recreates the original format string without precision +// and width information to pass in to fmt.Sprintf in the case of an +// unrecognized type. Unless new types are added to the language, this +// function won't ever be called. 
+func (f *formatState) buildDefaultFormat() (format string) { + buf := bytes.NewBuffer(percentBytes) + + for _, flag := range supportedFlags { + if f.fs.Flag(int(flag)) { + buf.WriteRune(flag) + } + } + + buf.WriteRune('v') + + format = buf.String() + return format +} + +// constructOrigFormat recreates the original format string including precision +// and width information to pass along to the standard fmt package. This allows +// automatic deferral of all format strings this package doesn't support. +func (f *formatState) constructOrigFormat(verb rune) (format string) { + buf := bytes.NewBuffer(percentBytes) + + for _, flag := range supportedFlags { + if f.fs.Flag(int(flag)) { + buf.WriteRune(flag) + } + } + + if width, ok := f.fs.Width(); ok { + buf.WriteString(strconv.Itoa(width)) + } + + if precision, ok := f.fs.Precision(); ok { + buf.Write(precisionBytes) + buf.WriteString(strconv.Itoa(precision)) + } + + buf.WriteRune(verb) + + format = buf.String() + return format +} + +// unpackValue returns values inside of non-nil interfaces when possible and +// ensures that types for values which have been unpacked from an interface +// are displayed when the show types flag is also set. +// This is useful for data types like structs, arrays, slices, and maps which +// can contain varying types packed inside an interface. +func (f *formatState) unpackValue(v reflect.Value) reflect.Value { + if v.Kind() == reflect.Interface { + f.ignoreNextType = false + if !v.IsNil() { + v = v.Elem() + } + } + return v +} + +// formatPtr handles formatting of pointers by indirecting them as necessary. +func (f *formatState) formatPtr(v reflect.Value) { + // Display nil if top level pointer is nil. + showTypes := f.fs.Flag('#') + if v.IsNil() && (!showTypes || f.ignoreNextType) { + f.fs.Write(nilAngleBytes) + return + } + + // Remove pointers at or below the current depth from map used to detect + // circular refs. + for k, depth := range f.pointers { + if depth >= f.depth { + delete(f.pointers, k) + } + } + + // Keep list of all dereferenced pointers to possibly show later. + pointerChain := make([]uintptr, 0) + + // Figure out how many levels of indirection there are by derferencing + // pointers and unpacking interfaces down the chain while detecting circular + // references. + nilFound := false + cycleFound := false + indirects := 0 + ve := v + for ve.Kind() == reflect.Ptr { + if ve.IsNil() { + nilFound = true + break + } + indirects++ + addr := ve.Pointer() + pointerChain = append(pointerChain, addr) + if pd, ok := f.pointers[addr]; ok && pd < f.depth { + cycleFound = true + indirects-- + break + } + f.pointers[addr] = f.depth + + ve = ve.Elem() + if ve.Kind() == reflect.Interface { + if ve.IsNil() { + nilFound = true + break + } + ve = ve.Elem() + } + } + + // Display type or indirection level depending on flags. + if showTypes && !f.ignoreNextType { + f.fs.Write(openParenBytes) + f.fs.Write(bytes.Repeat(asteriskBytes, indirects)) + f.fs.Write([]byte(ve.Type().String())) + f.fs.Write(closeParenBytes) + } else { + if nilFound || cycleFound { + indirects += strings.Count(ve.Type().String(), "*") + } + f.fs.Write(openAngleBytes) + f.fs.Write([]byte(strings.Repeat("*", indirects))) + f.fs.Write(closeAngleBytes) + } + + // Display pointer information depending on flags. 
+ if f.fs.Flag('+') && (len(pointerChain) > 0) { + f.fs.Write(openParenBytes) + for i, addr := range pointerChain { + if i > 0 { + f.fs.Write(pointerChainBytes) + } + printHexPtr(f.fs, addr) + } + f.fs.Write(closeParenBytes) + } + + // Display dereferenced value. + switch { + case nilFound == true: + f.fs.Write(nilAngleBytes) + + case cycleFound == true: + f.fs.Write(circularShortBytes) + + default: + f.ignoreNextType = true + f.format(ve) + } +} + +// format is the main workhorse for providing the Formatter interface. It +// uses the passed reflect value to figure out what kind of object we are +// dealing with and formats it appropriately. It is a recursive function, +// however circular data structures are detected and handled properly. +func (f *formatState) format(v reflect.Value) { + // Handle invalid reflect values immediately. + kind := v.Kind() + if kind == reflect.Invalid { + f.fs.Write(invalidAngleBytes) + return + } + + // Handle pointers specially. + if kind == reflect.Ptr { + f.formatPtr(v) + return + } + + // Print type information unless already handled elsewhere. + if !f.ignoreNextType && f.fs.Flag('#') { + f.fs.Write(openParenBytes) + f.fs.Write([]byte(v.Type().String())) + f.fs.Write(closeParenBytes) + } + f.ignoreNextType = false + + // Call Stringer/error interfaces if they exist and the handle methods + // flag is enabled. + if !f.cs.DisableMethods { + if (kind != reflect.Invalid) && (kind != reflect.Interface) { + if handled := handleMethods(f.cs, f.fs, v); handled { + return + } + } + } + + switch kind { + case reflect.Invalid: + // Do nothing. We should never get here since invalid has already + // been handled above. + + case reflect.Bool: + printBool(f.fs, v.Bool()) + + case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int: + printInt(f.fs, v.Int(), 10) + + case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint: + printUint(f.fs, v.Uint(), 10) + + case reflect.Float32: + printFloat(f.fs, v.Float(), 32) + + case reflect.Float64: + printFloat(f.fs, v.Float(), 64) + + case reflect.Complex64: + printComplex(f.fs, v.Complex(), 32) + + case reflect.Complex128: + printComplex(f.fs, v.Complex(), 64) + + case reflect.Slice: + if v.IsNil() { + f.fs.Write(nilAngleBytes) + break + } + fallthrough + + case reflect.Array: + f.fs.Write(openBracketBytes) + f.depth++ + if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) { + f.fs.Write(maxShortBytes) + } else { + numEntries := v.Len() + for i := 0; i < numEntries; i++ { + if i > 0 { + f.fs.Write(spaceBytes) + } + f.ignoreNextType = true + f.format(f.unpackValue(v.Index(i))) + } + } + f.depth-- + f.fs.Write(closeBracketBytes) + + case reflect.String: + f.fs.Write([]byte(v.String())) + + case reflect.Interface: + // The only time we should get here is for nil interfaces due to + // unpackValue calls. + if v.IsNil() { + f.fs.Write(nilAngleBytes) + } + + case reflect.Ptr: + // Do nothing. We should never get here since pointers have already + // been handled above. 
+ + case reflect.Map: + // nil maps should be indicated as different than empty maps + if v.IsNil() { + f.fs.Write(nilAngleBytes) + break + } + + f.fs.Write(openMapBytes) + f.depth++ + if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) { + f.fs.Write(maxShortBytes) + } else { + keys := v.MapKeys() + if f.cs.SortKeys { + sortValues(keys, f.cs) + } + for i, key := range keys { + if i > 0 { + f.fs.Write(spaceBytes) + } + f.ignoreNextType = true + f.format(f.unpackValue(key)) + f.fs.Write(colonBytes) + f.ignoreNextType = true + f.format(f.unpackValue(v.MapIndex(key))) + } + } + f.depth-- + f.fs.Write(closeMapBytes) + + case reflect.Struct: + numFields := v.NumField() + f.fs.Write(openBraceBytes) + f.depth++ + if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) { + f.fs.Write(maxShortBytes) + } else { + vt := v.Type() + for i := 0; i < numFields; i++ { + if i > 0 { + f.fs.Write(spaceBytes) + } + vtf := vt.Field(i) + if f.fs.Flag('+') || f.fs.Flag('#') { + f.fs.Write([]byte(vtf.Name)) + f.fs.Write(colonBytes) + } + f.format(f.unpackValue(v.Field(i))) + } + } + f.depth-- + f.fs.Write(closeBraceBytes) + + case reflect.Uintptr: + printHexPtr(f.fs, uintptr(v.Uint())) + + case reflect.UnsafePointer, reflect.Chan, reflect.Func: + printHexPtr(f.fs, v.Pointer()) + + // There were not any other types at the time this code was written, but + // fall back to letting the default fmt package handle it if any get added. + default: + format := f.buildDefaultFormat() + if v.CanInterface() { + fmt.Fprintf(f.fs, format, v.Interface()) + } else { + fmt.Fprintf(f.fs, format, v.String()) + } + } +} + +// Format satisfies the fmt.Formatter interface. See NewFormatter for usage +// details. +func (f *formatState) Format(fs fmt.State, verb rune) { + f.fs = fs + + // Use standard formatting for verbs that are not v. + if verb != 'v' { + format := f.constructOrigFormat(verb) + fmt.Fprintf(fs, format, f.value) + return + } + + if f.value == nil { + if fs.Flag('#') { + fs.Write(interfaceBytes) + } + fs.Write(nilAngleBytes) + return + } + + f.format(reflect.ValueOf(f.value)) +} + +// newFormatter is a helper function to consolidate the logic from the various +// public methods which take varying config states. +func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter { + fs := &formatState{value: v, cs: cs} + fs.pointers = make(map[uintptr]int) + return fs +} + +/* +NewFormatter returns a custom formatter that satisfies the fmt.Formatter +interface. As a result, it integrates cleanly with standard fmt package +printing functions. The formatter is useful for inline printing of smaller data +types similar to the standard %v format specifier. + +The custom formatter only responds to the %v (most compact), %+v (adds pointer +addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb +combinations. Any other verbs such as %x and %q will be sent to the the +standard fmt package for formatting. In addition, the custom formatter ignores +the width and precision arguments (however they will still work on the format +specifiers not handled by the custom formatter). + +Typically this function shouldn't be called directly. It is much easier to make +use of the custom formatter by calling one of the convenience functions such as +Printf, Println, or Fprintf. 
+*/ +func NewFormatter(v interface{}) fmt.Formatter { + return newFormatter(&Config, v) +} diff --git a/vendor/github.com/davecgh/go-spew/spew/spew.go b/vendor/github.com/davecgh/go-spew/spew/spew.go new file mode 100644 index 00000000..32c0e338 --- /dev/null +++ b/vendor/github.com/davecgh/go-spew/spew/spew.go @@ -0,0 +1,148 @@ +/* + * Copyright (c) 2013-2016 Dave Collins + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +package spew + +import ( + "fmt" + "io" +) + +// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the formatted string as a value that satisfies error. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b)) +func Errorf(format string, a ...interface{}) (err error) { + return fmt.Errorf(format, convertArgs(a)...) +} + +// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b)) +func Fprint(w io.Writer, a ...interface{}) (n int, err error) { + return fmt.Fprint(w, convertArgs(a)...) +} + +// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b)) +func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) { + return fmt.Fprintf(w, format, convertArgs(a)...) +} + +// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it +// passed with a default Formatter interface returned by NewFormatter. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b)) +func Fprintln(w io.Writer, a ...interface{}) (n int, err error) { + return fmt.Fprintln(w, convertArgs(a)...) +} + +// Print is a wrapper for fmt.Print that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. 
+// +// This function is shorthand for the following syntax: +// +// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b)) +func Print(a ...interface{}) (n int, err error) { + return fmt.Print(convertArgs(a)...) +} + +// Printf is a wrapper for fmt.Printf that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b)) +func Printf(format string, a ...interface{}) (n int, err error) { + return fmt.Printf(format, convertArgs(a)...) +} + +// Println is a wrapper for fmt.Println that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the number of bytes written and any write error encountered. See +// NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b)) +func Println(a ...interface{}) (n int, err error) { + return fmt.Println(convertArgs(a)...) +} + +// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the resulting string. See NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b)) +func Sprint(a ...interface{}) string { + return fmt.Sprint(convertArgs(a)...) +} + +// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were +// passed with a default Formatter interface returned by NewFormatter. It +// returns the resulting string. See NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b)) +func Sprintf(format string, a ...interface{}) string { + return fmt.Sprintf(format, convertArgs(a)...) +} + +// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it +// were passed with a default Formatter interface returned by NewFormatter. It +// returns the resulting string. See NewFormatter for formatting details. +// +// This function is shorthand for the following syntax: +// +// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b)) +func Sprintln(a ...interface{}) string { + return fmt.Sprintln(convertArgs(a)...) +} + +// convertArgs accepts a slice of arguments and returns a slice of the same +// length with each argument converted to a default spew Formatter interface. 
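// Editor's note: an illustrative usage sketch, not part of the vendored spew
// package. It shows how the wrapper functions and NewFormatter documented in
// this file respond to the %v, %+v and %#v verb combinations, including cycle
// detection. The node type and field names are hypothetical.
package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type node struct {
	Name string
	Next *node
}

func main() {
	n := &node{Name: "head"}
	n.Next = n // circular reference; the formatter detects and short-circuits it

	spew.Printf("%v\n", n)  // most compact form
	spew.Printf("%+v\n", n) // adds pointer addresses
	spew.Printf("%#v\n", n) // adds types

	// Equivalent: hand a single wrapped value to the standard fmt package.
	fmt.Printf("%v\n", spew.NewFormatter(n))
}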
+func convertArgs(args []interface{}) (formatters []interface{}) { + formatters = make([]interface{}, len(args)) + for index, arg := range args { + formatters[index] = NewFormatter(arg) + } + return formatters +} diff --git a/vendor/github.com/go-errors/errors/LICENSE.MIT b/vendor/github.com/go-errors/errors/LICENSE.MIT new file mode 100644 index 00000000..c9a5b2ee --- /dev/null +++ b/vendor/github.com/go-errors/errors/LICENSE.MIT @@ -0,0 +1,7 @@ +Copyright (c) 2015 Conrad Irwin + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/go-errors/errors/error.go b/vendor/github.com/go-errors/errors/error.go new file mode 100644 index 00000000..60062a43 --- /dev/null +++ b/vendor/github.com/go-errors/errors/error.go @@ -0,0 +1,217 @@ +// Package errors provides errors that have stack-traces. +// +// This is particularly useful when you want to understand the +// state of execution when an error was returned unexpectedly. +// +// It provides the type *Error which implements the standard +// golang error interface, so you can use this library interchangably +// with code that is expecting a normal error return. +// +// For example: +// +// package crashy +// +// import "github.com/go-errors/errors" +// +// var Crashed = errors.Errorf("oh dear") +// +// func Crash() error { +// return errors.New(Crashed) +// } +// +// This can be called as follows: +// +// package main +// +// import ( +// "crashy" +// "fmt" +// "github.com/go-errors/errors" +// ) +// +// func main() { +// err := crashy.Crash() +// if err != nil { +// if errors.Is(err, crashy.Crashed) { +// fmt.Println(err.(*errors.Error).ErrorStack()) +// } else { +// panic(err) +// } +// } +// } +// +// This package was original written to allow reporting to Bugsnag, +// but after I found similar packages by Facebook and Dropbox, it +// was moved to one canonical location so everyone can benefit. +package errors + +import ( + "bytes" + "fmt" + "reflect" + "runtime" +) + +// The maximum number of stackframes on any error. +var MaxStackDepth = 50 + +// Error is an error with an attached stacktrace. It can be used +// wherever the builtin error interface is expected. +type Error struct { + Err error + stack []uintptr + frames []StackFrame + prefix string +} + +// New makes an Error from the given value. If that value is already an +// error then it will be used directly, if not, it will be passed to +// fmt.Errorf("%v"). The stacktrace will point to the line of code that +// called New. 
+func New(e interface{}) *Error { + var err error + + switch e := e.(type) { + case error: + err = e + default: + err = fmt.Errorf("%v", e) + } + + stack := make([]uintptr, MaxStackDepth) + length := runtime.Callers(2, stack[:]) + return &Error{ + Err: err, + stack: stack[:length], + } +} + +// Wrap makes an Error from the given value. If that value is already an +// error then it will be used directly, if not, it will be passed to +// fmt.Errorf("%v"). The skip parameter indicates how far up the stack +// to start the stacktrace. 0 is from the current call, 1 from its caller, etc. +func Wrap(e interface{}, skip int) *Error { + var err error + + switch e := e.(type) { + case *Error: + return e + case error: + err = e + default: + err = fmt.Errorf("%v", e) + } + + stack := make([]uintptr, MaxStackDepth) + length := runtime.Callers(2+skip, stack[:]) + return &Error{ + Err: err, + stack: stack[:length], + } +} + +// WrapPrefix makes an Error from the given value. If that value is already an +// error then it will be used directly, if not, it will be passed to +// fmt.Errorf("%v"). The prefix parameter is used to add a prefix to the +// error message when calling Error(). The skip parameter indicates how far +// up the stack to start the stacktrace. 0 is from the current call, +// 1 from its caller, etc. +func WrapPrefix(e interface{}, prefix string, skip int) *Error { + + err := Wrap(e, 1+skip) + + if err.prefix != "" { + prefix = fmt.Sprintf("%s: %s", prefix, err.prefix) + } + + return &Error{ + Err: err.Err, + stack: err.stack, + prefix: prefix, + } + +} + +// Is detects whether the error is equal to a given error. Errors +// are considered equal by this function if they are the same object, +// or if they both contain the same error inside an errors.Error. +func Is(e error, original error) bool { + + if e == original { + return true + } + + if e, ok := e.(*Error); ok { + return Is(e.Err, original) + } + + if original, ok := original.(*Error); ok { + return Is(e, original.Err) + } + + return false +} + +// Errorf creates a new error with the given message. You can use it +// as a drop-in replacement for fmt.Errorf() to provide descriptive +// errors in return values. +func Errorf(format string, a ...interface{}) *Error { + return Wrap(fmt.Errorf(format, a...), 1) +} + +// Error returns the underlying error's message. +func (err *Error) Error() string { + + msg := err.Err.Error() + if err.prefix != "" { + msg = fmt.Sprintf("%s: %s", err.prefix, msg) + } + + return msg +} + +// Stack returns the callstack formatted the same way that go does +// in runtime/debug.Stack() +func (err *Error) Stack() []byte { + buf := bytes.Buffer{} + + for _, frame := range err.StackFrames() { + buf.WriteString(frame.String()) + } + + return buf.Bytes() +} + +// Callers satisfies the bugsnag ErrorWithCallerS() interface +// so that the stack can be read out. +func (err *Error) Callers() []uintptr { + return err.stack +} + +// ErrorStack returns a string that contains both the +// error message and the callstack. +func (err *Error) ErrorStack() string { + return err.TypeName() + " " + err.Error() + "\n" + string(err.Stack()) +} + +// StackFrames returns an array of frames containing information about the +// stack. +func (err *Error) StackFrames() []StackFrame { + if err.frames == nil { + err.frames = make([]StackFrame, len(err.stack)) + + for i, pc := range err.stack { + err.frames[i] = NewStackFrame(pc) + } + } + + return err.frames +} + +// TypeName returns the type this error. e.g. *errors.stringError. 
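// Editor's note: an illustrative sketch, not part of the vendored go-errors
// package. It exercises WrapPrefix and ErrorStack from this file; the
// loadConfig helper and the config path are hypothetical.
package main

import (
	"fmt"
	"os"

	"github.com/go-errors/errors"
)

// loadConfig wraps any failure with a message prefix and a captured stack trace.
func loadConfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		return errors.WrapPrefix(err, "loading config", 0)
	}
	return nil
}

func main() {
	if err := loadConfig("/nonexistent/config.ini"); err != nil {
		// ErrorStack prints the error type, the message and the call stack.
		fmt.Println(err.(*errors.Error).ErrorStack())
	}
}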
+func (err *Error) TypeName() string { + if _, ok := err.Err.(uncaughtPanic); ok { + return "panic" + } + return reflect.TypeOf(err.Err).String() +} diff --git a/vendor/github.com/go-errors/errors/parse_panic.go b/vendor/github.com/go-errors/errors/parse_panic.go new file mode 100644 index 00000000..cc37052d --- /dev/null +++ b/vendor/github.com/go-errors/errors/parse_panic.go @@ -0,0 +1,127 @@ +package errors + +import ( + "strconv" + "strings" +) + +type uncaughtPanic struct{ message string } + +func (p uncaughtPanic) Error() string { + return p.message +} + +// ParsePanic allows you to get an error object from the output of a go program +// that panicked. This is particularly useful with https://github.com/mitchellh/panicwrap. +func ParsePanic(text string) (*Error, error) { + lines := strings.Split(text, "\n") + + state := "start" + + var message string + var stack []StackFrame + + for i := 0; i < len(lines); i++ { + line := lines[i] + + if state == "start" { + if strings.HasPrefix(line, "panic: ") { + message = strings.TrimPrefix(line, "panic: ") + state = "seek" + } else { + return nil, Errorf("bugsnag.panicParser: Invalid line (no prefix): %s", line) + } + + } else if state == "seek" { + if strings.HasPrefix(line, "goroutine ") && strings.HasSuffix(line, "[running]:") { + state = "parsing" + } + + } else if state == "parsing" { + if line == "" { + state = "done" + break + } + createdBy := false + if strings.HasPrefix(line, "created by ") { + line = strings.TrimPrefix(line, "created by ") + createdBy = true + } + + i++ + + if i >= len(lines) { + return nil, Errorf("bugsnag.panicParser: Invalid line (unpaired): %s", line) + } + + frame, err := parsePanicFrame(line, lines[i], createdBy) + if err != nil { + return nil, err + } + + stack = append(stack, *frame) + if createdBy { + state = "done" + break + } + } + } + + if state == "done" || state == "parsing" { + return &Error{Err: uncaughtPanic{message}, frames: stack}, nil + } + return nil, Errorf("could not parse panic: %v", text) +} + +// The lines we're passing look like this: +// +// main.(*foo).destruct(0xc208067e98) +// /0/go/src/github.com/bugsnag/bugsnag-go/pan/main.go:22 +0x151 +func parsePanicFrame(name string, line string, createdBy bool) (*StackFrame, error) { + idx := strings.LastIndex(name, "(") + if idx == -1 && !createdBy { + return nil, Errorf("bugsnag.panicParser: Invalid line (no call): %s", name) + } + if idx != -1 { + name = name[:idx] + } + pkg := "" + + if lastslash := strings.LastIndex(name, "/"); lastslash >= 0 { + pkg += name[:lastslash] + "/" + name = name[lastslash+1:] + } + if period := strings.Index(name, "."); period >= 0 { + pkg += name[:period] + name = name[period+1:] + } + + name = strings.Replace(name, "·", ".", -1) + + if !strings.HasPrefix(line, "\t") { + return nil, Errorf("bugsnag.panicParser: Invalid line (no tab): %s", line) + } + + idx = strings.LastIndex(line, ":") + if idx == -1 { + return nil, Errorf("bugsnag.panicParser: Invalid line (no line number): %s", line) + } + file := line[1:idx] + + number := line[idx+1:] + if idx = strings.Index(number, " +"); idx > -1 { + number = number[:idx] + } + + lno, err := strconv.ParseInt(number, 10, 32) + if err != nil { + return nil, Errorf("bugsnag.panicParser: Invalid line (bad line number): %s", line) + } + + return &StackFrame{ + File: file, + LineNumber: int(lno), + Package: pkg, + Name: name, + }, nil +} diff --git a/vendor/github.com/go-errors/errors/stackframe.go b/vendor/github.com/go-errors/errors/stackframe.go new file mode 100644 index 
00000000..750ab9a5 --- /dev/null +++ b/vendor/github.com/go-errors/errors/stackframe.go @@ -0,0 +1,102 @@ +package errors + +import ( + "bytes" + "fmt" + "io/ioutil" + "runtime" + "strings" +) + +// A StackFrame contains all necessary information about to generate a line +// in a callstack. +type StackFrame struct { + // The path to the file containing this ProgramCounter + File string + // The LineNumber in that file + LineNumber int + // The Name of the function that contains this ProgramCounter + Name string + // The Package that contains this function + Package string + // The underlying ProgramCounter + ProgramCounter uintptr +} + +// NewStackFrame popoulates a stack frame object from the program counter. +func NewStackFrame(pc uintptr) (frame StackFrame) { + + frame = StackFrame{ProgramCounter: pc} + if frame.Func() == nil { + return + } + frame.Package, frame.Name = packageAndName(frame.Func()) + + // pc -1 because the program counters we use are usually return addresses, + // and we want to show the line that corresponds to the function call + frame.File, frame.LineNumber = frame.Func().FileLine(pc - 1) + return + +} + +// Func returns the function that contained this frame. +func (frame *StackFrame) Func() *runtime.Func { + if frame.ProgramCounter == 0 { + return nil + } + return runtime.FuncForPC(frame.ProgramCounter) +} + +// String returns the stackframe formatted in the same way as go does +// in runtime/debug.Stack() +func (frame *StackFrame) String() string { + str := fmt.Sprintf("%s:%d (0x%x)\n", frame.File, frame.LineNumber, frame.ProgramCounter) + + source, err := frame.SourceLine() + if err != nil { + return str + } + + return str + fmt.Sprintf("\t%s: %s\n", frame.Name, source) +} + +// SourceLine gets the line of code (from File and Line) of the original source if possible. +func (frame *StackFrame) SourceLine() (string, error) { + data, err := ioutil.ReadFile(frame.File) + + if err != nil { + return "", New(err) + } + + lines := bytes.Split(data, []byte{'\n'}) + if frame.LineNumber <= 0 || frame.LineNumber >= len(lines) { + return "???", nil + } + // -1 because line-numbers are 1 based, but our array is 0 based + return string(bytes.Trim(lines[frame.LineNumber-1], " \t")), nil +} + +func packageAndName(fn *runtime.Func) (string, string) { + name := fn.Name() + pkg := "" + + // The name includes the path name to the package, which is unnecessary + // since the file name is already included. Plus, it has center dots. + // That is, we see + // runtime/debug.*T·ptrmethod + // and want + // *T.ptrmethod + // Since the package path might contains dots (e.g. code.google.com/...), + // we first remove the path prefix if there is one. + if lastslash := strings.LastIndex(name, "/"); lastslash >= 0 { + pkg += name[:lastslash] + "/" + name = name[lastslash+1:] + } + if period := strings.Index(name, "."); period >= 0 { + pkg += name[:period] + name = name[period+1:] + } + + name = strings.Replace(name, "·", ".", -1) + return pkg, name +} diff --git a/vendor/github.com/go-ini/ini/LICENSE b/vendor/github.com/go-ini/ini/LICENSE new file mode 100644 index 00000000..d361bbcd --- /dev/null +++ b/vendor/github.com/go-ini/ini/LICENSE @@ -0,0 +1,191 @@ +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and +distribution as defined by Sections 1 through 9 of this document. 
+ +"Licensor" shall mean the copyright owner or entity authorized by the copyright +owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other entities +that control, are controlled by, or are under common control with that entity. +For the purposes of this definition, "control" means (i) the power, direct or +indirect, to cause the direction or management of such entity, whether by +contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the +outstanding shares, or (iii) beneficial ownership of such entity. + +"You" (or "Your") shall mean an individual or Legal Entity exercising +permissions granted by this License. + +"Source" form shall mean the preferred form for making modifications, including +but not limited to software source code, documentation source, and configuration +files. + +"Object" form shall mean any form resulting from mechanical transformation or +translation of a Source form, including but not limited to compiled object code, +generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, made +available under the License, as indicated by a copyright notice that is included +in or attached to the work (an example is provided in the Appendix below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that +is based on (or derived from) the Work and for which the editorial revisions, +annotations, elaborations, or other modifications represent, as a whole, an +original work of authorship. For the purposes of this License, Derivative Works +shall not include works that remain separable from, or merely link (or bind by +name) to the interfaces of, the Work and Derivative Works thereof. + +"Contribution" shall mean any work of authorship, including the original version +of the Work and any modifications or additions to that Work or Derivative Works +thereof, that is intentionally submitted to Licensor for inclusion in the Work +by the copyright owner or by an individual or Legal Entity authorized to submit +on behalf of the copyright owner. For the purposes of this definition, +"submitted" means any form of electronic, verbal, or written communication sent +to the Licensor or its representatives, including but not limited to +communication on electronic mailing lists, source code control systems, and +issue tracking systems that are managed by, or on behalf of, the Licensor for +the purpose of discussing and improving the Work, but excluding communication +that is conspicuously marked or otherwise designated in writing by the copyright +owner as "Not a Contribution." + +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf +of whom a Contribution has been received by Licensor and subsequently +incorporated within the Work. + +2. Grant of Copyright License. + +Subject to the terms and conditions of this License, each Contributor hereby +grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, +irrevocable copyright license to reproduce, prepare Derivative Works of, +publicly display, publicly perform, sublicense, and distribute the Work and such +Derivative Works in Source or Object form. + +3. Grant of Patent License. 
+ +Subject to the terms and conditions of this License, each Contributor hereby +grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, +irrevocable (except as stated in this section) patent license to make, have +made, use, offer to sell, sell, import, and otherwise transfer the Work, where +such license applies only to those patent claims licensable by such Contributor +that are necessarily infringed by their Contribution(s) alone or by combination +of their Contribution(s) with the Work to which such Contribution(s) was +submitted. If You institute patent litigation against any entity (including a +cross-claim or counterclaim in a lawsuit) alleging that the Work or a +Contribution incorporated within the Work constitutes direct or contributory +patent infringement, then any patent licenses granted to You under this License +for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. + +You may reproduce and distribute copies of the Work or Derivative Works thereof +in any medium, with or without modifications, and in Source or Object form, +provided that You meet the following conditions: + +You must give any other recipients of the Work or Derivative Works a copy of +this License; and +You must cause any modified files to carry prominent notices stating that You +changed the files; and +You must retain, in the Source form of any Derivative Works that You distribute, +all copyright, patent, trademark, and attribution notices from the Source form +of the Work, excluding those notices that do not pertain to any part of the +Derivative Works; and +If the Work includes a "NOTICE" text file as part of its distribution, then any +Derivative Works that You distribute must include a readable copy of the +attribution notices contained within such NOTICE file, excluding those notices +that do not pertain to any part of the Derivative Works, in at least one of the +following places: within a NOTICE text file distributed as part of the +Derivative Works; within the Source form or documentation, if provided along +with the Derivative Works; or, within a display generated by the Derivative +Works, if and wherever such third-party notices normally appear. The contents of +the NOTICE file are for informational purposes only and do not modify the +License. You may add Your own attribution notices within Derivative Works that +You distribute, alongside or as an addendum to the NOTICE text from the Work, +provided that such additional attribution notices cannot be construed as +modifying the License. +You may add Your own copyright statement to Your modifications and may provide +additional or different license terms and conditions for use, reproduction, or +distribution of Your modifications, or for any such Derivative Works as a whole, +provided Your use, reproduction, and distribution of the Work otherwise complies +with the conditions stated in this License. + +5. Submission of Contributions. + +Unless You explicitly state otherwise, any Contribution intentionally submitted +for inclusion in the Work by You to the Licensor shall be under the terms and +conditions of this License, without any additional terms or conditions. +Notwithstanding the above, nothing herein shall supersede or modify the terms of +any separate license agreement you may have executed with Licensor regarding +such Contributions. + +6. Trademarks. 
+ +This License does not grant permission to use the trade names, trademarks, +service marks, or product names of the Licensor, except as required for +reasonable and customary use in describing the origin of the Work and +reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. + +Unless required by applicable law or agreed to in writing, Licensor provides the +Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, +including, without limitation, any warranties or conditions of TITLE, +NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are +solely responsible for determining the appropriateness of using or +redistributing the Work and assume any risks associated with Your exercise of +permissions under this License. + +8. Limitation of Liability. + +In no event and under no legal theory, whether in tort (including negligence), +contract, or otherwise, unless required by applicable law (such as deliberate +and grossly negligent acts) or agreed to in writing, shall any Contributor be +liable to You for damages, including any direct, indirect, special, incidental, +or consequential damages of any character arising as a result of this License or +out of the use or inability to use the Work (including but not limited to +damages for loss of goodwill, work stoppage, computer failure or malfunction, or +any and all other commercial damages or losses), even if such Contributor has +been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. + +While redistributing the Work or Derivative Works thereof, You may choose to +offer, and charge a fee for, acceptance of support, warranty, indemnity, or +other liability obligations and/or rights consistent with this License. However, +in accepting such obligations, You may act only on Your own behalf and on Your +sole responsibility, not on behalf of any other Contributor, and only if You +agree to indemnify, defend, and hold each Contributor harmless for any liability +incurred by, or claims asserted against, such Contributor by reason of your +accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work + +To apply the Apache License to your work, attach the following boilerplate +notice, with the fields enclosed by brackets "[]" replaced with your own +identifying information. (Don't include the brackets!) The text should be +enclosed in the appropriate comment syntax for the file format. We also +recommend that a file or class name and description of purpose be included on +the same "printed page" as the copyright notice for easier identification within +third-party archives. + + Copyright 2014 Unknwon + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/go-ini/ini/error.go b/vendor/github.com/go-ini/ini/error.go new file mode 100644 index 00000000..80afe743 --- /dev/null +++ b/vendor/github.com/go-ini/ini/error.go @@ -0,0 +1,32 @@ +// Copyright 2016 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +package ini + +import ( + "fmt" +) + +type ErrDelimiterNotFound struct { + Line string +} + +func IsErrDelimiterNotFound(err error) bool { + _, ok := err.(ErrDelimiterNotFound) + return ok +} + +func (err ErrDelimiterNotFound) Error() string { + return fmt.Sprintf("key-value delimiter not found: %s", err.Line) +} diff --git a/vendor/github.com/go-ini/ini/file.go b/vendor/github.com/go-ini/ini/file.go new file mode 100644 index 00000000..d7982c32 --- /dev/null +++ b/vendor/github.com/go-ini/ini/file.go @@ -0,0 +1,407 @@ +// Copyright 2017 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +package ini + +import ( + "bytes" + "errors" + "fmt" + "io" + "io/ioutil" + "os" + "strings" + "sync" +) + +// File represents a combination of a or more INI file(s) in memory. +type File struct { + options LoadOptions + dataSources []dataSource + + // Should make things safe, but sometimes doesn't matter. + BlockMode bool + lock sync.RWMutex + + // To keep data in order. + sectionList []string + // Actual data is stored here. + sections map[string]*Section + + NameMapper + ValueMapper +} + +// newFile initializes File object with given data sources. +func newFile(dataSources []dataSource, opts LoadOptions) *File { + return &File{ + BlockMode: true, + dataSources: dataSources, + sections: make(map[string]*Section), + sectionList: make([]string, 0, 10), + options: opts, + } +} + +// Empty returns an empty file object. +func Empty() *File { + // Ignore error here, we sure our data is good. + f, _ := Load([]byte("")) + return f +} + +// NewSection creates a new section. +func (f *File) NewSection(name string) (*Section, error) { + if len(name) == 0 { + return nil, errors.New("error creating new section: empty section name") + } else if f.options.Insensitive && name != DEFAULT_SECTION { + name = strings.ToLower(name) + } + + if f.BlockMode { + f.lock.Lock() + defer f.lock.Unlock() + } + + if inSlice(name, f.sectionList) { + return f.sections[name], nil + } + + f.sectionList = append(f.sectionList, name) + f.sections[name] = newSection(f, name) + return f.sections[name], nil +} + +// NewRawSection creates a new section with an unparseable body. 
+func (f *File) NewRawSection(name, body string) (*Section, error) { + section, err := f.NewSection(name) + if err != nil { + return nil, err + } + + section.isRawSection = true + section.rawBody = body + return section, nil +} + +// NewSections creates a list of sections. +func (f *File) NewSections(names ...string) (err error) { + for _, name := range names { + if _, err = f.NewSection(name); err != nil { + return err + } + } + return nil +} + +// GetSection returns section by given name. +func (f *File) GetSection(name string) (*Section, error) { + if len(name) == 0 { + name = DEFAULT_SECTION + } + if f.options.Insensitive { + name = strings.ToLower(name) + } + + if f.BlockMode { + f.lock.RLock() + defer f.lock.RUnlock() + } + + sec := f.sections[name] + if sec == nil { + return nil, fmt.Errorf("section '%s' does not exist", name) + } + return sec, nil +} + +// Section assumes named section exists and returns a zero-value when not. +func (f *File) Section(name string) *Section { + sec, err := f.GetSection(name) + if err != nil { + // Note: It's OK here because the only possible error is empty section name, + // but if it's empty, this piece of code won't be executed. + sec, _ = f.NewSection(name) + return sec + } + return sec +} + +// Section returns list of Section. +func (f *File) Sections() []*Section { + if f.BlockMode { + f.lock.RLock() + defer f.lock.RUnlock() + } + + sections := make([]*Section, len(f.sectionList)) + for i, name := range f.sectionList { + sections[i] = f.sections[name] + } + return sections +} + +// ChildSections returns a list of child sections of given section name. +func (f *File) ChildSections(name string) []*Section { + return f.Section(name).ChildSections() +} + +// SectionStrings returns list of section names. +func (f *File) SectionStrings() []string { + list := make([]string, len(f.sectionList)) + copy(list, f.sectionList) + return list +} + +// DeleteSection deletes a section. +func (f *File) DeleteSection(name string) { + if f.BlockMode { + f.lock.Lock() + defer f.lock.Unlock() + } + + if len(name) == 0 { + name = DEFAULT_SECTION + } + + for i, s := range f.sectionList { + if s == name { + f.sectionList = append(f.sectionList[:i], f.sectionList[i+1:]...) + delete(f.sections, name) + return + } + } +} + +func (f *File) reload(s dataSource) error { + r, err := s.ReadCloser() + if err != nil { + return err + } + defer r.Close() + + return f.parse(r) +} + +// Reload reloads and parses all data sources. +func (f *File) Reload() (err error) { + for _, s := range f.dataSources { + if err = f.reload(s); err != nil { + // In loose mode, we create an empty default section for nonexistent files. + if os.IsNotExist(err) && f.options.Loose { + f.parse(bytes.NewBuffer(nil)) + continue + } + return err + } + } + return nil +} + +// Append appends one or more data sources and reloads automatically. +func (f *File) Append(source interface{}, others ...interface{}) error { + ds, err := parseDataSource(source) + if err != nil { + return err + } + f.dataSources = append(f.dataSources, ds) + for _, s := range others { + ds, err = parseDataSource(s) + if err != nil { + return err + } + f.dataSources = append(f.dataSources, ds) + } + return f.Reload() +} + +func (f *File) writeToBuffer(indent string) (*bytes.Buffer, error) { + equalSign := "=" + if PrettyFormat || PrettyEqual { + equalSign = " = " + } + + // Use buffer to make sure target is safe until finish encoding. 
+ buf := bytes.NewBuffer(nil) + for i, sname := range f.sectionList { + sec := f.Section(sname) + if len(sec.Comment) > 0 { + if sec.Comment[0] != '#' && sec.Comment[0] != ';' { + sec.Comment = "; " + sec.Comment + } else { + sec.Comment = sec.Comment[:1] + " " + strings.TrimSpace(sec.Comment[1:]) + } + if _, err := buf.WriteString(sec.Comment + LineBreak); err != nil { + return nil, err + } + } + + if i > 0 || DefaultHeader { + if _, err := buf.WriteString("[" + sname + "]" + LineBreak); err != nil { + return nil, err + } + } else { + // Write nothing if default section is empty + if len(sec.keyList) == 0 { + continue + } + } + + if sec.isRawSection { + if _, err := buf.WriteString(sec.rawBody); err != nil { + return nil, err + } + + if PrettySection { + // Put a line between sections + if _, err := buf.WriteString(LineBreak); err != nil { + return nil, err + } + } + continue + } + + // Count and generate alignment length and buffer spaces using the + // longest key. Keys may be modifed if they contain certain characters so + // we need to take that into account in our calculation. + alignLength := 0 + if PrettyFormat { + for _, kname := range sec.keyList { + keyLength := len(kname) + // First case will surround key by ` and second by """ + if strings.ContainsAny(kname, "\"=:") { + keyLength += 2 + } else if strings.Contains(kname, "`") { + keyLength += 6 + } + + if keyLength > alignLength { + alignLength = keyLength + } + } + } + alignSpaces := bytes.Repeat([]byte(" "), alignLength) + + KEY_LIST: + for _, kname := range sec.keyList { + key := sec.Key(kname) + if len(key.Comment) > 0 { + if len(indent) > 0 && sname != DEFAULT_SECTION { + buf.WriteString(indent) + } + if key.Comment[0] != '#' && key.Comment[0] != ';' { + key.Comment = "; " + key.Comment + } else { + key.Comment = key.Comment[:1] + " " + strings.TrimSpace(key.Comment[1:]) + } + + // Support multiline comments + key.Comment = strings.Replace(key.Comment, "\n", "\n; ", -1) + + if _, err := buf.WriteString(key.Comment + LineBreak); err != nil { + return nil, err + } + } + + if len(indent) > 0 && sname != DEFAULT_SECTION { + buf.WriteString(indent) + } + + switch { + case key.isAutoIncrement: + kname = "-" + case strings.ContainsAny(kname, "\"=:"): + kname = "`" + kname + "`" + case strings.Contains(kname, "`"): + kname = `"""` + kname + `"""` + } + + for _, val := range key.ValueWithShadows() { + if _, err := buf.WriteString(kname); err != nil { + return nil, err + } + + if key.isBooleanType { + if kname != sec.keyList[len(sec.keyList)-1] { + buf.WriteString(LineBreak) + } + continue KEY_LIST + } + + // Write out alignment spaces before "=" sign + if PrettyFormat { + buf.Write(alignSpaces[:alignLength-len(kname)]) + } + + // In case key value contains "\n", "`", "\"", "#" or ";" + if strings.ContainsAny(val, "\n`") { + val = `"""` + val + `"""` + } else if !f.options.IgnoreInlineComment && strings.ContainsAny(val, "#;") { + val = "`" + val + "`" + } + if _, err := buf.WriteString(equalSign + val + LineBreak); err != nil { + return nil, err + } + } + + for _, val := range key.nestedValues { + if _, err := buf.WriteString(indent + " " + val + LineBreak); err != nil { + return nil, err + } + } + } + + if PrettySection { + // Put a line between sections + if _, err := buf.WriteString(LineBreak); err != nil { + return nil, err + } + } + } + + return buf, nil +} + +// WriteToIndent writes content into io.Writer with given indention. 
+// If PrettyFormat has been set to be true, +// it will align "=" sign with spaces under each section. +func (f *File) WriteToIndent(w io.Writer, indent string) (int64, error) { + buf, err := f.writeToBuffer(indent) + if err != nil { + return 0, err + } + return buf.WriteTo(w) +} + +// WriteTo writes file content into io.Writer. +func (f *File) WriteTo(w io.Writer) (int64, error) { + return f.WriteToIndent(w, "") +} + +// SaveToIndent writes content to file system with given value indention. +func (f *File) SaveToIndent(filename, indent string) error { + // Note: Because we are truncating with os.Create, + // so it's safer to save to a temporary file location and rename afte done. + buf, err := f.writeToBuffer(indent) + if err != nil { + return err + } + + return ioutil.WriteFile(filename, buf.Bytes(), 0666) +} + +// SaveTo writes content to file system. +func (f *File) SaveTo(filename string) error { + return f.SaveToIndent(filename, "") +} diff --git a/vendor/github.com/go-ini/ini/ini.go b/vendor/github.com/go-ini/ini/ini.go new file mode 100644 index 00000000..15ebc8f7 --- /dev/null +++ b/vendor/github.com/go-ini/ini/ini.go @@ -0,0 +1,207 @@ +// Copyright 2014 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +// Package ini provides INI file read and write functionality in Go. +package ini + +import ( + "bytes" + "fmt" + "io" + "io/ioutil" + "os" + "regexp" + "runtime" +) + +const ( + // Name for default section. You can use this constant or the string literal. + // In most of cases, an empty string is all you need to access the section. + DEFAULT_SECTION = "DEFAULT" + + // Maximum allowed depth when recursively substituing variable names. + _DEPTH_VALUES = 99 + _VERSION = "1.37.0" +) + +// Version returns current package version literal. +func Version() string { + return _VERSION +} + +var ( + // Delimiter to determine or compose a new line. + // This variable will be changed to "\r\n" automatically on Windows + // at package init time. + LineBreak = "\n" + + // Variable regexp pattern: %(variable)s + varPattern = regexp.MustCompile(`%\(([^\)]+)\)s`) + + // Indicate whether to align "=" sign with spaces to produce pretty output + // or reduce all possible spaces for compact format. + PrettyFormat = true + + // Place spaces around "=" sign even when PrettyFormat is false + PrettyEqual = false + + // Explicitly write DEFAULT section header + DefaultHeader = false + + // Indicate whether to put a line between sections + PrettySection = true +) + +func init() { + if runtime.GOOS == "windows" { + LineBreak = "\r\n" + } +} + +func inSlice(str string, s []string) bool { + for _, v := range s { + if str == v { + return true + } + } + return false +} + +// dataSource is an interface that returns object which can be read and closed. +type dataSource interface { + ReadCloser() (io.ReadCloser, error) +} + +// sourceFile represents an object that contains content on the local file system. 
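// Editor's note: an illustrative sketch, not part of the vendored go-ini
// package. It uses Empty, NewSection and WriteTo as defined above; the
// Section.NewKey call is assumed from the package's section.go, which is not
// part of this excerpt, and the section/key names are hypothetical.
package main

import (
	"os"

	"github.com/go-ini/ini"
)

func main() {
	cfg := ini.Empty()

	sec, err := cfg.NewSection("server")
	if err != nil {
		panic(err)
	}
	// NewKey lives in section.go, outside this excerpt.
	if _, err := sec.NewKey("listen_addr", "0.0.0.0:8080"); err != nil {
		panic(err)
	}

	// With PrettyFormat (the default) the "=" signs are aligned with spaces.
	if _, err := cfg.WriteTo(os.Stdout); err != nil {
		panic(err)
	}
}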
+type sourceFile struct { + name string +} + +func (s sourceFile) ReadCloser() (_ io.ReadCloser, err error) { + return os.Open(s.name) +} + +// sourceData represents an object that contains content in memory. +type sourceData struct { + data []byte +} + +func (s *sourceData) ReadCloser() (io.ReadCloser, error) { + return ioutil.NopCloser(bytes.NewReader(s.data)), nil +} + +// sourceReadCloser represents an input stream with Close method. +type sourceReadCloser struct { + reader io.ReadCloser +} + +func (s *sourceReadCloser) ReadCloser() (io.ReadCloser, error) { + return s.reader, nil +} + +func parseDataSource(source interface{}) (dataSource, error) { + switch s := source.(type) { + case string: + return sourceFile{s}, nil + case []byte: + return &sourceData{s}, nil + case io.ReadCloser: + return &sourceReadCloser{s}, nil + default: + return nil, fmt.Errorf("error parsing data source: unknown type '%s'", s) + } +} + +type LoadOptions struct { + // Loose indicates whether the parser should ignore nonexistent files or return error. + Loose bool + // Insensitive indicates whether the parser forces all section and key names to lowercase. + Insensitive bool + // IgnoreContinuation indicates whether to ignore continuation lines while parsing. + IgnoreContinuation bool + // IgnoreInlineComment indicates whether to ignore comments at the end of value and treat it as part of value. + IgnoreInlineComment bool + // AllowBooleanKeys indicates whether to allow boolean type keys or treat as value is missing. + // This type of keys are mostly used in my.cnf. + AllowBooleanKeys bool + // AllowShadows indicates whether to keep track of keys with same name under same section. + AllowShadows bool + // AllowNestedValues indicates whether to allow AWS-like nested values. + // Docs: http://docs.aws.amazon.com/cli/latest/topic/config-vars.html#nested-values + AllowNestedValues bool + // AllowPythonMultilineValues indicates whether to allow Python-like multi-line values. + // Docs: https://docs.python.org/3/library/configparser.html#supported-ini-file-structure + // Relevant quote: Values can also span multiple lines, as long as they are indented deeper + // than the first line of the value. + AllowPythonMultilineValues bool + // SpaceBeforeInlineComment indicates whether to allow comment symbols (\# and \;) inside value. + // Docs: https://docs.python.org/2/library/configparser.html + // Quote: Comments may appear on their own in an otherwise empty line, or may be entered in lines holding values or section names. + // In the latter case, they need to be preceded by a whitespace character to be recognized as a comment. + SpaceBeforeInlineComment bool + // UnescapeValueDoubleQuotes indicates whether to unescape double quotes inside value to regular format + // when value is surrounded by double quotes, e.g. key="a \"value\"" => key=a "value" + UnescapeValueDoubleQuotes bool + // UnescapeValueCommentSymbols indicates to unescape comment symbols (\# and \;) inside value to regular format + // when value is NOT surrounded by any quotes. + // Note: UNSTABLE, behavior might change to only unescape inside double quotes but may noy necessary at all. + UnescapeValueCommentSymbols bool + // Some INI formats allow group blocks that store a block of raw content that doesn't otherwise + // conform to key/value pairs. Specify the names of those blocks here. 
+ UnparseableSections []string +} + +func LoadSources(opts LoadOptions, source interface{}, others ...interface{}) (_ *File, err error) { + sources := make([]dataSource, len(others)+1) + sources[0], err = parseDataSource(source) + if err != nil { + return nil, err + } + for i := range others { + sources[i+1], err = parseDataSource(others[i]) + if err != nil { + return nil, err + } + } + f := newFile(sources, opts) + if err = f.Reload(); err != nil { + return nil, err + } + return f, nil +} + +// Load loads and parses from INI data sources. +// Arguments can be mixed of file name with string type, or raw data in []byte. +// It will return error if list contains nonexistent files. +func Load(source interface{}, others ...interface{}) (*File, error) { + return LoadSources(LoadOptions{}, source, others...) +} + +// LooseLoad has exactly same functionality as Load function +// except it ignores nonexistent files instead of returning error. +func LooseLoad(source interface{}, others ...interface{}) (*File, error) { + return LoadSources(LoadOptions{Loose: true}, source, others...) +} + +// InsensitiveLoad has exactly same functionality as Load function +// except it forces all section and key names to be lowercased. +func InsensitiveLoad(source interface{}, others ...interface{}) (*File, error) { + return LoadSources(LoadOptions{Insensitive: true}, source, others...) +} + +// InsensitiveLoad has exactly same functionality as Load function +// except it allows have shadow keys. +func ShadowLoad(source interface{}, others ...interface{}) (*File, error) { + return LoadSources(LoadOptions{AllowShadows: true}, source, others...) +} diff --git a/vendor/github.com/go-ini/ini/key.go b/vendor/github.com/go-ini/ini/key.go new file mode 100644 index 00000000..7c8566a1 --- /dev/null +++ b/vendor/github.com/go-ini/ini/key.go @@ -0,0 +1,751 @@ +// Copyright 2014 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +package ini + +import ( + "bytes" + "errors" + "fmt" + "strconv" + "strings" + "time" +) + +// Key represents a key under a section. +type Key struct { + s *Section + Comment string + name string + value string + isAutoIncrement bool + isBooleanType bool + + isShadow bool + shadows []*Key + + nestedValues []string +} + +// newKey simply return a key object with given values. +func newKey(s *Section, name, val string) *Key { + return &Key{ + s: s, + name: name, + value: val, + } +} + +func (k *Key) addShadow(val string) error { + if k.isShadow { + return errors.New("cannot add shadow to another shadow key") + } else if k.isAutoIncrement || k.isBooleanType { + return errors.New("cannot add shadow to auto-increment or boolean key") + } + + shadow := newKey(k.s, k.name, val) + shadow.isShadow = true + k.shadows = append(k.shadows, shadow) + return nil +} + +// AddShadow adds a new shadow key to itself. 
+func (k *Key) AddShadow(val string) error { + if !k.s.f.options.AllowShadows { + return errors.New("shadow key is not allowed") + } + return k.addShadow(val) +} + +func (k *Key) addNestedValue(val string) error { + if k.isAutoIncrement || k.isBooleanType { + return errors.New("cannot add nested value to auto-increment or boolean key") + } + + k.nestedValues = append(k.nestedValues, val) + return nil +} + +func (k *Key) AddNestedValue(val string) error { + if !k.s.f.options.AllowNestedValues { + return errors.New("nested value is not allowed") + } + return k.addNestedValue(val) +} + +// ValueMapper represents a mapping function for values, e.g. os.ExpandEnv +type ValueMapper func(string) string + +// Name returns name of key. +func (k *Key) Name() string { + return k.name +} + +// Value returns raw value of key for performance purpose. +func (k *Key) Value() string { + return k.value +} + +// ValueWithShadows returns raw values of key and its shadows if any. +func (k *Key) ValueWithShadows() []string { + if len(k.shadows) == 0 { + return []string{k.value} + } + vals := make([]string, len(k.shadows)+1) + vals[0] = k.value + for i := range k.shadows { + vals[i+1] = k.shadows[i].value + } + return vals +} + +// NestedValues returns nested values stored in the key. +// It is possible returned value is nil if no nested values stored in the key. +func (k *Key) NestedValues() []string { + return k.nestedValues +} + +// transformValue takes a raw value and transforms to its final string. +func (k *Key) transformValue(val string) string { + if k.s.f.ValueMapper != nil { + val = k.s.f.ValueMapper(val) + } + + // Fail-fast if no indicate char found for recursive value + if !strings.Contains(val, "%") { + return val + } + for i := 0; i < _DEPTH_VALUES; i++ { + vr := varPattern.FindString(val) + if len(vr) == 0 { + break + } + + // Take off leading '%(' and trailing ')s'. + noption := strings.TrimLeft(vr, "%(") + noption = strings.TrimRight(noption, ")s") + + // Search in the same section. + nk, err := k.s.GetKey(noption) + if err != nil || k == nk { + // Search again in default section. + nk, _ = k.s.f.Section("").GetKey(noption) + } + + // Substitute by new value and take off leading '%(' and trailing ')s'. + val = strings.Replace(val, vr, nk.value, -1) + } + return val +} + +// String returns string representation of value. +func (k *Key) String() string { + return k.transformValue(k.value) +} + +// Validate accepts a validate function which can +// return modifed result as key value. +func (k *Key) Validate(fn func(string) string) string { + return fn(k.String()) +} + +// parseBool returns the boolean value represented by the string. +// +// It accepts 1, t, T, TRUE, true, True, YES, yes, Yes, y, ON, on, On, +// 0, f, F, FALSE, false, False, NO, no, No, n, OFF, off, Off. +// Any other value returns an error. +func parseBool(str string) (value bool, err error) { + switch str { + case "1", "t", "T", "true", "TRUE", "True", "YES", "yes", "Yes", "y", "ON", "on", "On": + return true, nil + case "0", "f", "F", "false", "FALSE", "False", "NO", "no", "No", "n", "OFF", "off", "Off": + return false, nil + } + return false, fmt.Errorf("parsing \"%s\": invalid syntax", str) +} + +// Bool returns bool type value. +func (k *Key) Bool() (bool, error) { + return parseBool(k.String()) +} + +// Float64 returns float64 type value. +func (k *Key) Float64() (float64, error) { + return strconv.ParseFloat(k.String(), 64) +} + +// Int returns int type value. 
+func (k *Key) Int() (int, error) { + return strconv.Atoi(k.String()) +} + +// Int64 returns int64 type value. +func (k *Key) Int64() (int64, error) { + return strconv.ParseInt(k.String(), 10, 64) +} + +// Uint returns uint type valued. +func (k *Key) Uint() (uint, error) { + u, e := strconv.ParseUint(k.String(), 10, 64) + return uint(u), e +} + +// Uint64 returns uint64 type value. +func (k *Key) Uint64() (uint64, error) { + return strconv.ParseUint(k.String(), 10, 64) +} + +// Duration returns time.Duration type value. +func (k *Key) Duration() (time.Duration, error) { + return time.ParseDuration(k.String()) +} + +// TimeFormat parses with given format and returns time.Time type value. +func (k *Key) TimeFormat(format string) (time.Time, error) { + return time.Parse(format, k.String()) +} + +// Time parses with RFC3339 format and returns time.Time type value. +func (k *Key) Time() (time.Time, error) { + return k.TimeFormat(time.RFC3339) +} + +// MustString returns default value if key value is empty. +func (k *Key) MustString(defaultVal string) string { + val := k.String() + if len(val) == 0 { + k.value = defaultVal + return defaultVal + } + return val +} + +// MustBool always returns value without error, +// it returns false if error occurs. +func (k *Key) MustBool(defaultVal ...bool) bool { + val, err := k.Bool() + if len(defaultVal) > 0 && err != nil { + k.value = strconv.FormatBool(defaultVal[0]) + return defaultVal[0] + } + return val +} + +// MustFloat64 always returns value without error, +// it returns 0.0 if error occurs. +func (k *Key) MustFloat64(defaultVal ...float64) float64 { + val, err := k.Float64() + if len(defaultVal) > 0 && err != nil { + k.value = strconv.FormatFloat(defaultVal[0], 'f', -1, 64) + return defaultVal[0] + } + return val +} + +// MustInt always returns value without error, +// it returns 0 if error occurs. +func (k *Key) MustInt(defaultVal ...int) int { + val, err := k.Int() + if len(defaultVal) > 0 && err != nil { + k.value = strconv.FormatInt(int64(defaultVal[0]), 10) + return defaultVal[0] + } + return val +} + +// MustInt64 always returns value without error, +// it returns 0 if error occurs. +func (k *Key) MustInt64(defaultVal ...int64) int64 { + val, err := k.Int64() + if len(defaultVal) > 0 && err != nil { + k.value = strconv.FormatInt(defaultVal[0], 10) + return defaultVal[0] + } + return val +} + +// MustUint always returns value without error, +// it returns 0 if error occurs. +func (k *Key) MustUint(defaultVal ...uint) uint { + val, err := k.Uint() + if len(defaultVal) > 0 && err != nil { + k.value = strconv.FormatUint(uint64(defaultVal[0]), 10) + return defaultVal[0] + } + return val +} + +// MustUint64 always returns value without error, +// it returns 0 if error occurs. +func (k *Key) MustUint64(defaultVal ...uint64) uint64 { + val, err := k.Uint64() + if len(defaultVal) > 0 && err != nil { + k.value = strconv.FormatUint(defaultVal[0], 10) + return defaultVal[0] + } + return val +} + +// MustDuration always returns value without error, +// it returns zero value if error occurs. +func (k *Key) MustDuration(defaultVal ...time.Duration) time.Duration { + val, err := k.Duration() + if len(defaultVal) > 0 && err != nil { + k.value = defaultVal[0].String() + return defaultVal[0] + } + return val +} + +// MustTimeFormat always parses with given format and returns value without error, +// it returns zero value if error occurs. 
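The scalar accessors above (Bool, Float64, Int, Duration, ...) and their Must* counterparts differ only in error handling: Must* swallows the parse error, falls back to the supplied default, and rewrites the stored value to that default. A short sketch under invented key names:

package main

import (
	"fmt"

	"github.com/go-ini/ini"
)

func main() {
	cfg, _ := ini.Load([]byte("timeout = 30s\nretries = oops\nverbose = on"))
	sec := cfg.Section("")

	d, err := sec.Key("timeout").Duration() // parsed via time.ParseDuration
	fmt.Println(d, err)                     // 30s <nil>

	// MustInt swallows the parse error and falls back to the default,
	// also rewriting the stored value to that default.
	fmt.Println(sec.Key("retries").MustInt(3)) // 3, because "oops" is not an int

	b, _ := sec.Key("verbose").Bool() // "on" is accepted by parseBool
	fmt.Println(b)                    // true
}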
+func (k *Key) MustTimeFormat(format string, defaultVal ...time.Time) time.Time { + val, err := k.TimeFormat(format) + if len(defaultVal) > 0 && err != nil { + k.value = defaultVal[0].Format(format) + return defaultVal[0] + } + return val +} + +// MustTime always parses with RFC3339 format and returns value without error, +// it returns zero value if error occurs. +func (k *Key) MustTime(defaultVal ...time.Time) time.Time { + return k.MustTimeFormat(time.RFC3339, defaultVal...) +} + +// In always returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) In(defaultVal string, candidates []string) string { + val := k.String() + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InFloat64 always returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InFloat64(defaultVal float64, candidates []float64) float64 { + val := k.MustFloat64() + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InInt always returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InInt(defaultVal int, candidates []int) int { + val := k.MustInt() + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InInt64 always returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InInt64(defaultVal int64, candidates []int64) int64 { + val := k.MustInt64() + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InUint always returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InUint(defaultVal uint, candidates []uint) uint { + val := k.MustUint() + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InUint64 always returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InUint64(defaultVal uint64, candidates []uint64) uint64 { + val := k.MustUint64() + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InTimeFormat always parses with given format and returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InTimeFormat(format string, defaultVal time.Time, candidates []time.Time) time.Time { + val := k.MustTimeFormat(format) + for _, cand := range candidates { + if val == cand { + return val + } + } + return defaultVal +} + +// InTime always parses with RFC3339 format and returns value without error, +// it returns default value if error occurs or doesn't fit into candidates. +func (k *Key) InTime(defaultVal time.Time, candidates []time.Time) time.Time { + return k.InTimeFormat(time.RFC3339, defaultVal, candidates) +} + +// RangeFloat64 checks if value is in given range inclusively, +// and returns default value if it's not. +func (k *Key) RangeFloat64(defaultVal, min, max float64) float64 { + val := k.MustFloat64() + if val < min || val > max { + return defaultVal + } + return val +} + +// RangeInt checks if value is in given range inclusively, +// and returns default value if it's not. 
+func (k *Key) RangeInt(defaultVal, min, max int) int { + val := k.MustInt() + if val < min || val > max { + return defaultVal + } + return val +} + +// RangeInt64 checks if value is in given range inclusively, +// and returns default value if it's not. +func (k *Key) RangeInt64(defaultVal, min, max int64) int64 { + val := k.MustInt64() + if val < min || val > max { + return defaultVal + } + return val +} + +// RangeTimeFormat checks if value with given format is in given range inclusively, +// and returns default value if it's not. +func (k *Key) RangeTimeFormat(format string, defaultVal, min, max time.Time) time.Time { + val := k.MustTimeFormat(format) + if val.Unix() < min.Unix() || val.Unix() > max.Unix() { + return defaultVal + } + return val +} + +// RangeTime checks if value with RFC3339 format is in given range inclusively, +// and returns default value if it's not. +func (k *Key) RangeTime(defaultVal, min, max time.Time) time.Time { + return k.RangeTimeFormat(time.RFC3339, defaultVal, min, max) +} + +// Strings returns list of string divided by given delimiter. +func (k *Key) Strings(delim string) []string { + str := k.String() + if len(str) == 0 { + return []string{} + } + + runes := []rune(str) + vals := make([]string, 0, 2) + var buf bytes.Buffer + escape := false + idx := 0 + for { + if escape { + escape = false + if runes[idx] != '\\' && !strings.HasPrefix(string(runes[idx:]), delim) { + buf.WriteRune('\\') + } + buf.WriteRune(runes[idx]) + } else { + if runes[idx] == '\\' { + escape = true + } else if strings.HasPrefix(string(runes[idx:]), delim) { + idx += len(delim) - 1 + vals = append(vals, strings.TrimSpace(buf.String())) + buf.Reset() + } else { + buf.WriteRune(runes[idx]) + } + } + idx += 1 + if idx == len(runes) { + break + } + } + + if buf.Len() > 0 { + vals = append(vals, strings.TrimSpace(buf.String())) + } + + return vals +} + +// StringsWithShadows returns list of string divided by given delimiter. +// Shadows will also be appended if any. +func (k *Key) StringsWithShadows(delim string) []string { + vals := k.ValueWithShadows() + results := make([]string, 0, len(vals)*2) + for i := range vals { + if len(vals) == 0 { + continue + } + + results = append(results, strings.Split(vals[i], delim)...) + } + + for i := range results { + results[i] = k.transformValue(strings.TrimSpace(results[i])) + } + return results +} + +// Float64s returns list of float64 divided by given delimiter. Any invalid input will be treated as zero value. +func (k *Key) Float64s(delim string) []float64 { + vals, _ := k.parseFloat64s(k.Strings(delim), true, false) + return vals +} + +// Ints returns list of int divided by given delimiter. Any invalid input will be treated as zero value. +func (k *Key) Ints(delim string) []int { + vals, _ := k.parseInts(k.Strings(delim), true, false) + return vals +} + +// Int64s returns list of int64 divided by given delimiter. Any invalid input will be treated as zero value. +func (k *Key) Int64s(delim string) []int64 { + vals, _ := k.parseInt64s(k.Strings(delim), true, false) + return vals +} + +// Uints returns list of uint divided by given delimiter. Any invalid input will be treated as zero value. +func (k *Key) Uints(delim string) []uint { + vals, _ := k.parseUints(k.Strings(delim), true, false) + return vals +} + +// Uint64s returns list of uint64 divided by given delimiter. Any invalid input will be treated as zero value. 
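The In* and Range* helpers above validate a value against a candidate list or an inclusive range, falling back to the supplied default on any mismatch. An illustrative sketch with made-up values:

package main

import (
	"fmt"

	"github.com/go-ini/ini"
)

func main() {
	cfg, _ := ini.Load([]byte("mode = staging\nworkers = 99"))
	sec := cfg.Section("")

	// In returns the default when the value is not one of the candidates.
	fmt.Println(sec.Key("mode").In("dev", []string{"dev", "prod"})) // dev

	// RangeInt returns the default when the value falls outside [min, max].
	fmt.Println(sec.Key("workers").RangeInt(4, 1, 16)) // 4, because 99 > 16
}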
+func (k *Key) Uint64s(delim string) []uint64 { + vals, _ := k.parseUint64s(k.Strings(delim), true, false) + return vals +} + +// TimesFormat parses with given format and returns list of time.Time divided by given delimiter. +// Any invalid input will be treated as zero value (0001-01-01 00:00:00 +0000 UTC). +func (k *Key) TimesFormat(format, delim string) []time.Time { + vals, _ := k.parseTimesFormat(format, k.Strings(delim), true, false) + return vals +} + +// Times parses with RFC3339 format and returns list of time.Time divided by given delimiter. +// Any invalid input will be treated as zero value (0001-01-01 00:00:00 +0000 UTC). +func (k *Key) Times(delim string) []time.Time { + return k.TimesFormat(time.RFC3339, delim) +} + +// ValidFloat64s returns list of float64 divided by given delimiter. If some value is not float, then +// it will not be included to result list. +func (k *Key) ValidFloat64s(delim string) []float64 { + vals, _ := k.parseFloat64s(k.Strings(delim), false, false) + return vals +} + +// ValidInts returns list of int divided by given delimiter. If some value is not integer, then it will +// not be included to result list. +func (k *Key) ValidInts(delim string) []int { + vals, _ := k.parseInts(k.Strings(delim), false, false) + return vals +} + +// ValidInt64s returns list of int64 divided by given delimiter. If some value is not 64-bit integer, +// then it will not be included to result list. +func (k *Key) ValidInt64s(delim string) []int64 { + vals, _ := k.parseInt64s(k.Strings(delim), false, false) + return vals +} + +// ValidUints returns list of uint divided by given delimiter. If some value is not unsigned integer, +// then it will not be included to result list. +func (k *Key) ValidUints(delim string) []uint { + vals, _ := k.parseUints(k.Strings(delim), false, false) + return vals +} + +// ValidUint64s returns list of uint64 divided by given delimiter. If some value is not 64-bit unsigned +// integer, then it will not be included to result list. +func (k *Key) ValidUint64s(delim string) []uint64 { + vals, _ := k.parseUint64s(k.Strings(delim), false, false) + return vals +} + +// ValidTimesFormat parses with given format and returns list of time.Time divided by given delimiter. +func (k *Key) ValidTimesFormat(format, delim string) []time.Time { + vals, _ := k.parseTimesFormat(format, k.Strings(delim), false, false) + return vals +} + +// ValidTimes parses with RFC3339 format and returns list of time.Time divided by given delimiter. +func (k *Key) ValidTimes(delim string) []time.Time { + return k.ValidTimesFormat(time.RFC3339, delim) +} + +// StrictFloat64s returns list of float64 divided by given delimiter or error on first invalid input. +func (k *Key) StrictFloat64s(delim string) ([]float64, error) { + return k.parseFloat64s(k.Strings(delim), false, true) +} + +// StrictInts returns list of int divided by given delimiter or error on first invalid input. +func (k *Key) StrictInts(delim string) ([]int, error) { + return k.parseInts(k.Strings(delim), false, true) +} + +// StrictInt64s returns list of int64 divided by given delimiter or error on first invalid input. +func (k *Key) StrictInt64s(delim string) ([]int64, error) { + return k.parseInt64s(k.Strings(delim), false, true) +} + +// StrictUints returns list of uint divided by given delimiter or error on first invalid input. 
+func (k *Key) StrictUints(delim string) ([]uint, error) { + return k.parseUints(k.Strings(delim), false, true) +} + +// StrictUint64s returns list of uint64 divided by given delimiter or error on first invalid input. +func (k *Key) StrictUint64s(delim string) ([]uint64, error) { + return k.parseUint64s(k.Strings(delim), false, true) +} + +// StrictTimesFormat parses with given format and returns list of time.Time divided by given delimiter +// or error on first invalid input. +func (k *Key) StrictTimesFormat(format, delim string) ([]time.Time, error) { + return k.parseTimesFormat(format, k.Strings(delim), false, true) +} + +// StrictTimes parses with RFC3339 format and returns list of time.Time divided by given delimiter +// or error on first invalid input. +func (k *Key) StrictTimes(delim string) ([]time.Time, error) { + return k.StrictTimesFormat(time.RFC3339, delim) +} + +// parseFloat64s transforms strings to float64s. +func (k *Key) parseFloat64s(strs []string, addInvalid, returnOnInvalid bool) ([]float64, error) { + vals := make([]float64, 0, len(strs)) + for _, str := range strs { + val, err := strconv.ParseFloat(str, 64) + if err != nil && returnOnInvalid { + return nil, err + } + if err == nil || addInvalid { + vals = append(vals, val) + } + } + return vals, nil +} + +// parseInts transforms strings to ints. +func (k *Key) parseInts(strs []string, addInvalid, returnOnInvalid bool) ([]int, error) { + vals := make([]int, 0, len(strs)) + for _, str := range strs { + val, err := strconv.Atoi(str) + if err != nil && returnOnInvalid { + return nil, err + } + if err == nil || addInvalid { + vals = append(vals, val) + } + } + return vals, nil +} + +// parseInt64s transforms strings to int64s. +func (k *Key) parseInt64s(strs []string, addInvalid, returnOnInvalid bool) ([]int64, error) { + vals := make([]int64, 0, len(strs)) + for _, str := range strs { + val, err := strconv.ParseInt(str, 10, 64) + if err != nil && returnOnInvalid { + return nil, err + } + if err == nil || addInvalid { + vals = append(vals, val) + } + } + return vals, nil +} + +// parseUints transforms strings to uints. +func (k *Key) parseUints(strs []string, addInvalid, returnOnInvalid bool) ([]uint, error) { + vals := make([]uint, 0, len(strs)) + for _, str := range strs { + val, err := strconv.ParseUint(str, 10, 0) + if err != nil && returnOnInvalid { + return nil, err + } + if err == nil || addInvalid { + vals = append(vals, uint(val)) + } + } + return vals, nil +} + +// parseUint64s transforms strings to uint64s. +func (k *Key) parseUint64s(strs []string, addInvalid, returnOnInvalid bool) ([]uint64, error) { + vals := make([]uint64, 0, len(strs)) + for _, str := range strs { + val, err := strconv.ParseUint(str, 10, 64) + if err != nil && returnOnInvalid { + return nil, err + } + if err == nil || addInvalid { + vals = append(vals, val) + } + } + return vals, nil +} + +// parseTimesFormat transforms strings to times in given format. +func (k *Key) parseTimesFormat(format string, strs []string, addInvalid, returnOnInvalid bool) ([]time.Time, error) { + vals := make([]time.Time, 0, len(strs)) + for _, str := range strs { + val, err := time.Parse(format, str) + if err != nil && returnOnInvalid { + return nil, err + } + if err == nil || addInvalid { + vals = append(vals, val) + } + } + return vals, nil +} + +// SetValue changes key value. 
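The list accessors come in three flavors built on the parse* helpers above: the plain forms keep invalid entries as zero values, the Valid* forms drop them, and the Strict* forms return an error on the first bad entry. A small sketch with an invented key:

package main

import (
	"fmt"

	"github.com/go-ini/ini"
)

func main() {
	cfg, _ := ini.Load([]byte("ports = 80, 443, not-a-number"))
	key := cfg.Section("").Key("ports")

	fmt.Println(key.Ints(","))      // [80 443 0]  invalid entries become zero values
	fmt.Println(key.ValidInts(",")) // [80 443]    invalid entries are dropped
	if _, err := key.StrictInts(","); err != nil {
		fmt.Println("strict parse failed:", err) // error on the first bad entry
	}
}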
+func (k *Key) SetValue(v string) { + if k.s.f.BlockMode { + k.s.f.lock.Lock() + defer k.s.f.lock.Unlock() + } + + k.value = v + k.s.keysHash[k.name] = v +} diff --git a/vendor/github.com/go-ini/ini/parser.go b/vendor/github.com/go-ini/ini/parser.go new file mode 100644 index 00000000..d5aa2db6 --- /dev/null +++ b/vendor/github.com/go-ini/ini/parser.go @@ -0,0 +1,490 @@ +// Copyright 2015 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +package ini + +import ( + "bufio" + "bytes" + "fmt" + "io" + "regexp" + "strconv" + "strings" + "unicode" +) + +var pythonMultiline = regexp.MustCompile("^(\\s+)([^\n]+)") + +type tokenType int + +const ( + _TOKEN_INVALID tokenType = iota + _TOKEN_COMMENT + _TOKEN_SECTION + _TOKEN_KEY +) + +type parser struct { + buf *bufio.Reader + isEOF bool + count int + comment *bytes.Buffer +} + +func newParser(r io.Reader) *parser { + return &parser{ + buf: bufio.NewReader(r), + count: 1, + comment: &bytes.Buffer{}, + } +} + +// BOM handles header of UTF-8, UTF-16 LE and UTF-16 BE's BOM format. +// http://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding +func (p *parser) BOM() error { + mask, err := p.buf.Peek(2) + if err != nil && err != io.EOF { + return err + } else if len(mask) < 2 { + return nil + } + + switch { + case mask[0] == 254 && mask[1] == 255: + fallthrough + case mask[0] == 255 && mask[1] == 254: + p.buf.Read(mask) + case mask[0] == 239 && mask[1] == 187: + mask, err := p.buf.Peek(3) + if err != nil && err != io.EOF { + return err + } else if len(mask) < 3 { + return nil + } + if mask[2] == 191 { + p.buf.Read(mask) + } + } + return nil +} + +func (p *parser) readUntil(delim byte) ([]byte, error) { + data, err := p.buf.ReadBytes(delim) + if err != nil { + if err == io.EOF { + p.isEOF = true + } else { + return nil, err + } + } + return data, nil +} + +func cleanComment(in []byte) ([]byte, bool) { + i := bytes.IndexAny(in, "#;") + if i == -1 { + return nil, false + } + return in[i:], true +} + +func readKeyName(in []byte) (string, int, error) { + line := string(in) + + // Check if key name surrounded by quotes. 
+ var keyQuote string + if line[0] == '"' { + if len(line) > 6 && string(line[0:3]) == `"""` { + keyQuote = `"""` + } else { + keyQuote = `"` + } + } else if line[0] == '`' { + keyQuote = "`" + } + + // Get out key name + endIdx := -1 + if len(keyQuote) > 0 { + startIdx := len(keyQuote) + // FIXME: fail case -> """"""name"""=value + pos := strings.Index(line[startIdx:], keyQuote) + if pos == -1 { + return "", -1, fmt.Errorf("missing closing key quote: %s", line) + } + pos += startIdx + + // Find key-value delimiter + i := strings.IndexAny(line[pos+startIdx:], "=:") + if i < 0 { + return "", -1, ErrDelimiterNotFound{line} + } + endIdx = pos + i + return strings.TrimSpace(line[startIdx:pos]), endIdx + startIdx + 1, nil + } + + endIdx = strings.IndexAny(line, "=:") + if endIdx < 0 { + return "", -1, ErrDelimiterNotFound{line} + } + return strings.TrimSpace(line[0:endIdx]), endIdx + 1, nil +} + +func (p *parser) readMultilines(line, val, valQuote string) (string, error) { + for { + data, err := p.readUntil('\n') + if err != nil { + return "", err + } + next := string(data) + + pos := strings.LastIndex(next, valQuote) + if pos > -1 { + val += next[:pos] + + comment, has := cleanComment([]byte(next[pos:])) + if has { + p.comment.Write(bytes.TrimSpace(comment)) + } + break + } + val += next + if p.isEOF { + return "", fmt.Errorf("missing closing key quote from '%s' to '%s'", line, next) + } + } + return val, nil +} + +func (p *parser) readContinuationLines(val string) (string, error) { + for { + data, err := p.readUntil('\n') + if err != nil { + return "", err + } + next := strings.TrimSpace(string(data)) + + if len(next) == 0 { + break + } + val += next + if val[len(val)-1] != '\\' { + break + } + val = val[:len(val)-1] + } + return val, nil +} + +// hasSurroundedQuote check if and only if the first and last characters +// are quotes \" or \'. +// It returns false if any other parts also contain same kind of quotes. 
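The readKeyName, readMultilines and readContinuationLines helpers above give the parser support for quoted keys, triple-quoted multi-line values, backslash continuation lines and, when enabled through LoadOptions, boolean keys. A hedged sketch of INI input exercising those paths; the section and key names are made up:

package main

import (
	"fmt"
	"log"

	"github.com/go-ini/ini"
)

func main() {
	src := []byte(`
[paths]
search = /usr/local/bin,\
         /usr/bin
banner = """line one
line two"""
skip-host-cache
`)
	cfg, err := ini.LoadSources(ini.LoadOptions{AllowBooleanKeys: true}, src)
	if err != nil {
		log.Fatal(err)
	}
	sec := cfg.Section("paths")
	fmt.Println(sec.Key("search").String())        // continuation lines joined into one value
	fmt.Println(sec.Key("banner").String())        // triple-quoted multi-line value preserved
	fmt.Println(sec.Key("skip-host-cache").Bool()) // boolean key reads as true
}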
+func hasSurroundedQuote(in string, quote byte) bool { + return len(in) >= 2 && in[0] == quote && in[len(in)-1] == quote && + strings.IndexByte(in[1:], quote) == len(in)-2 +} + +func (p *parser) readValue(in []byte, + parserBufferSize int, + ignoreContinuation, ignoreInlineComment, unescapeValueDoubleQuotes, unescapeValueCommentSymbols, allowPythonMultilines, spaceBeforeInlineComment bool) (string, error) { + + line := strings.TrimLeftFunc(string(in), unicode.IsSpace) + if len(line) == 0 { + return "", nil + } + + var valQuote string + if len(line) > 3 && string(line[0:3]) == `"""` { + valQuote = `"""` + } else if line[0] == '`' { + valQuote = "`" + } else if unescapeValueDoubleQuotes && line[0] == '"' { + valQuote = `"` + } + + if len(valQuote) > 0 { + startIdx := len(valQuote) + pos := strings.LastIndex(line[startIdx:], valQuote) + // Check for multi-line value + if pos == -1 { + return p.readMultilines(line, line[startIdx:], valQuote) + } + + if unescapeValueDoubleQuotes && valQuote == `"` { + return strings.Replace(line[startIdx:pos+startIdx], `\"`, `"`, -1), nil + } + return line[startIdx : pos+startIdx], nil + } + + lastChar := line[len(line)-1] + // Won't be able to reach here if value only contains whitespace + line = strings.TrimSpace(line) + trimmedLastChar := line[len(line)-1] + + // Check continuation lines when desired + if !ignoreContinuation && trimmedLastChar == '\\' { + return p.readContinuationLines(line[:len(line)-1]) + } + + // Check if ignore inline comment + if !ignoreInlineComment { + var i int + if spaceBeforeInlineComment { + i = strings.Index(line, " #") + if i == -1 { + i = strings.Index(line, " ;") + } + + } else { + i = strings.IndexAny(line, "#;") + } + + if i > -1 { + p.comment.WriteString(line[i:]) + line = strings.TrimSpace(line[:i]) + } + + } + + // Trim single and double quotes + if hasSurroundedQuote(line, '\'') || + hasSurroundedQuote(line, '"') { + line = line[1 : len(line)-1] + } else if len(valQuote) == 0 && unescapeValueCommentSymbols { + if strings.Contains(line, `\;`) { + line = strings.Replace(line, `\;`, ";", -1) + } + if strings.Contains(line, `\#`) { + line = strings.Replace(line, `\#`, "#", -1) + } + } else if allowPythonMultilines && lastChar == '\n' { + parserBufferPeekResult, _ := p.buf.Peek(parserBufferSize) + peekBuffer := bytes.NewBuffer(parserBufferPeekResult) + + identSize := -1 + val := line + + for { + peekData, peekErr := peekBuffer.ReadBytes('\n') + if peekErr != nil { + if peekErr == io.EOF { + return val, nil + } + return "", peekErr + } + + peekMatches := pythonMultiline.FindStringSubmatch(string(peekData)) + if len(peekMatches) != 3 { + return val, nil + } + + currentIdentSize := len(peekMatches[1]) + // NOTE: Return if not a python-ini multi-line value. + if currentIdentSize < 0 { + return val, nil + } + identSize = currentIdentSize + + // NOTE: Just advance the parser reader (buffer) in-sync with the peek buffer. + _, err := p.readUntil('\n') + if err != nil { + return "", err + } + + val += fmt.Sprintf("\n%s", peekMatches[2]) + } + + // NOTE: If it was a Python multi-line value, + // return the appended value. + if identSize > 0 { + return val, nil + } + } + + return line, nil +} + +// parse parses data through an io.Reader. +func (f *File) parse(reader io.Reader) (err error) { + p := newParser(reader) + if err = p.BOM(); err != nil { + return fmt.Errorf("BOM: %v", err) + } + + // Ignore error because default section name is never empty string. 
+ name := DEFAULT_SECTION + if f.options.Insensitive { + name = strings.ToLower(DEFAULT_SECTION) + } + section, _ := f.NewSection(name) + + // This "last" is not strictly equivalent to "previous one" if current key is not the first nested key + var isLastValueEmpty bool + var lastRegularKey *Key + + var line []byte + var inUnparseableSection bool + + // NOTE: Iterate and increase `currentPeekSize` until + // the size of the parser buffer is found. + // TODO: When Golang 1.10 is the lowest version supported, + // replace with `parserBufferSize := p.buf.Size()`. + parserBufferSize := 0 + // NOTE: Peek 1kb at a time. + currentPeekSize := 1024 + + if f.options.AllowPythonMultilineValues { + for { + peekBytes, _ := p.buf.Peek(currentPeekSize) + peekBytesLength := len(peekBytes) + + if parserBufferSize >= peekBytesLength { + break + } + + currentPeekSize *= 2 + parserBufferSize = peekBytesLength + } + } + + for !p.isEOF { + line, err = p.readUntil('\n') + if err != nil { + return err + } + + if f.options.AllowNestedValues && + isLastValueEmpty && len(line) > 0 { + if line[0] == ' ' || line[0] == '\t' { + lastRegularKey.addNestedValue(string(bytes.TrimSpace(line))) + continue + } + } + + line = bytes.TrimLeftFunc(line, unicode.IsSpace) + if len(line) == 0 { + continue + } + + // Comments + if line[0] == '#' || line[0] == ';' { + // Note: we do not care ending line break, + // it is needed for adding second line, + // so just clean it once at the end when set to value. + p.comment.Write(line) + continue + } + + // Section + if line[0] == '[' { + // Read to the next ']' (TODO: support quoted strings) + // TODO(unknwon): use LastIndexByte when stop supporting Go1.4 + closeIdx := bytes.LastIndex(line, []byte("]")) + if closeIdx == -1 { + return fmt.Errorf("unclosed section: %s", line) + } + + name := string(line[1:closeIdx]) + section, err = f.NewSection(name) + if err != nil { + return err + } + + comment, has := cleanComment(line[closeIdx+1:]) + if has { + p.comment.Write(comment) + } + + section.Comment = strings.TrimSpace(p.comment.String()) + + // Reset aotu-counter and comments + p.comment.Reset() + p.count = 1 + + inUnparseableSection = false + for i := range f.options.UnparseableSections { + if f.options.UnparseableSections[i] == name || + (f.options.Insensitive && strings.ToLower(f.options.UnparseableSections[i]) == strings.ToLower(name)) { + inUnparseableSection = true + continue + } + } + continue + } + + if inUnparseableSection { + section.isRawSection = true + section.rawBody += string(line) + continue + } + + kname, offset, err := readKeyName(line) + if err != nil { + // Treat as boolean key when desired, and whole line is key name. + if IsErrDelimiterNotFound(err) && f.options.AllowBooleanKeys { + kname, err := p.readValue(line, + parserBufferSize, + f.options.IgnoreContinuation, + f.options.IgnoreInlineComment, + f.options.UnescapeValueDoubleQuotes, + f.options.UnescapeValueCommentSymbols, + f.options.AllowPythonMultilineValues, + f.options.SpaceBeforeInlineComment) + if err != nil { + return err + } + key, err := section.NewBooleanKey(kname) + if err != nil { + return err + } + key.Comment = strings.TrimSpace(p.comment.String()) + p.comment.Reset() + continue + } + return err + } + + // Auto increment. 
+ isAutoIncr := false + if kname == "-" { + isAutoIncr = true + kname = "#" + strconv.Itoa(p.count) + p.count++ + } + + value, err := p.readValue(line[offset:], + parserBufferSize, + f.options.IgnoreContinuation, + f.options.IgnoreInlineComment, + f.options.UnescapeValueDoubleQuotes, + f.options.UnescapeValueCommentSymbols, + f.options.AllowPythonMultilineValues, + f.options.SpaceBeforeInlineComment) + if err != nil { + return err + } + isLastValueEmpty = len(value) == 0 + + key, err := section.NewKey(kname, value) + if err != nil { + return err + } + key.isAutoIncrement = isAutoIncr + key.Comment = strings.TrimSpace(p.comment.String()) + p.comment.Reset() + lastRegularKey = key + } + return nil +} diff --git a/vendor/github.com/go-ini/ini/section.go b/vendor/github.com/go-ini/ini/section.go new file mode 100644 index 00000000..d8a40261 --- /dev/null +++ b/vendor/github.com/go-ini/ini/section.go @@ -0,0 +1,257 @@ +// Copyright 2014 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +package ini + +import ( + "errors" + "fmt" + "strings" +) + +// Section represents a config section. +type Section struct { + f *File + Comment string + name string + keys map[string]*Key + keyList []string + keysHash map[string]string + + isRawSection bool + rawBody string +} + +func newSection(f *File, name string) *Section { + return &Section{ + f: f, + name: name, + keys: make(map[string]*Key), + keyList: make([]string, 0, 10), + keysHash: make(map[string]string), + } +} + +// Name returns name of Section. +func (s *Section) Name() string { + return s.name +} + +// Body returns rawBody of Section if the section was marked as unparseable. +// It still follows the other rules of the INI format surrounding leading/trailing whitespace. +func (s *Section) Body() string { + return strings.TrimSpace(s.rawBody) +} + +// SetBody updates body content only if section is raw. +func (s *Section) SetBody(body string) { + if !s.isRawSection { + return + } + s.rawBody = body +} + +// NewKey creates a new key to given section. +func (s *Section) NewKey(name, val string) (*Key, error) { + if len(name) == 0 { + return nil, errors.New("error creating new key: empty key name") + } else if s.f.options.Insensitive { + name = strings.ToLower(name) + } + + if s.f.BlockMode { + s.f.lock.Lock() + defer s.f.lock.Unlock() + } + + if inSlice(name, s.keyList) { + if s.f.options.AllowShadows { + if err := s.keys[name].addShadow(val); err != nil { + return nil, err + } + } else { + s.keys[name].value = val + } + return s.keys[name], nil + } + + s.keyList = append(s.keyList, name) + s.keys[name] = newKey(s, name, val) + s.keysHash[name] = val + return s.keys[name], nil +} + +// NewBooleanKey creates a new boolean type key to given section. +func (s *Section) NewBooleanKey(name string) (*Key, error) { + key, err := s.NewKey(name, "true") + if err != nil { + return nil, err + } + + key.isBooleanType = true + return key, nil +} + +// GetKey returns key in section by given name. 
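A brief sketch of typical usage of the Section methods in this file; the section and key names are illustrative only:

package main

import (
	"fmt"

	"github.com/go-ini/ini"
)

func main() {
	cfg, _ := ini.Load([]byte("[database]\nhost = localhost\nport = 5432"))
	sec := cfg.Section("database")

	fmt.Println(sec.Name())                 // database
	fmt.Println(sec.Key("port").MustInt(0)) // 5432
	fmt.Println(sec.KeyStrings())           // [host port]

	// NewKey creates the key, or overwrites its value when it already exists.
	if _, err := sec.NewKey("sslmode", "disable"); err != nil {
		fmt.Println("could not add key:", err)
	}
	fmt.Println(sec.KeysHash()) // map[host:localhost port:5432 sslmode:disable]
}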
+func (s *Section) GetKey(name string) (*Key, error) { + // FIXME: change to section level lock? + if s.f.BlockMode { + s.f.lock.RLock() + } + if s.f.options.Insensitive { + name = strings.ToLower(name) + } + key := s.keys[name] + if s.f.BlockMode { + s.f.lock.RUnlock() + } + + if key == nil { + // Check if it is a child-section. + sname := s.name + for { + if i := strings.LastIndex(sname, "."); i > -1 { + sname = sname[:i] + sec, err := s.f.GetSection(sname) + if err != nil { + continue + } + return sec.GetKey(name) + } else { + break + } + } + return nil, fmt.Errorf("error when getting key of section '%s': key '%s' not exists", s.name, name) + } + return key, nil +} + +// HasKey returns true if section contains a key with given name. +func (s *Section) HasKey(name string) bool { + key, _ := s.GetKey(name) + return key != nil +} + +// Haskey is a backwards-compatible name for HasKey. +// TODO: delete me in v2 +func (s *Section) Haskey(name string) bool { + return s.HasKey(name) +} + +// HasValue returns true if section contains given raw value. +func (s *Section) HasValue(value string) bool { + if s.f.BlockMode { + s.f.lock.RLock() + defer s.f.lock.RUnlock() + } + + for _, k := range s.keys { + if value == k.value { + return true + } + } + return false +} + +// Key assumes named Key exists in section and returns a zero-value when not. +func (s *Section) Key(name string) *Key { + key, err := s.GetKey(name) + if err != nil { + // It's OK here because the only possible error is empty key name, + // but if it's empty, this piece of code won't be executed. + key, _ = s.NewKey(name, "") + return key + } + return key +} + +// Keys returns list of keys of section. +func (s *Section) Keys() []*Key { + keys := make([]*Key, len(s.keyList)) + for i := range s.keyList { + keys[i] = s.Key(s.keyList[i]) + } + return keys +} + +// ParentKeys returns list of keys of parent section. +func (s *Section) ParentKeys() []*Key { + var parentKeys []*Key + sname := s.name + for { + if i := strings.LastIndex(sname, "."); i > -1 { + sname = sname[:i] + sec, err := s.f.GetSection(sname) + if err != nil { + continue + } + parentKeys = append(parentKeys, sec.Keys()...) + } else { + break + } + + } + return parentKeys +} + +// KeyStrings returns list of key names of section. +func (s *Section) KeyStrings() []string { + list := make([]string, len(s.keyList)) + copy(list, s.keyList) + return list +} + +// KeysHash returns keys hash consisting of names and values. +func (s *Section) KeysHash() map[string]string { + if s.f.BlockMode { + s.f.lock.RLock() + defer s.f.lock.RUnlock() + } + + hash := map[string]string{} + for key, value := range s.keysHash { + hash[key] = value + } + return hash +} + +// DeleteKey deletes a key from section. +func (s *Section) DeleteKey(name string) { + if s.f.BlockMode { + s.f.lock.Lock() + defer s.f.lock.Unlock() + } + + for i, k := range s.keyList { + if k == name { + s.keyList = append(s.keyList[:i], s.keyList[i+1:]...) + delete(s.keys, name) + return + } + } +} + +// ChildSections returns a list of child sections of current section. +// For example, "[parent.child1]" and "[parent.child12]" are child sections +// of section "[parent]". +func (s *Section) ChildSections() []*Section { + prefix := s.name + "." 
+ children := make([]*Section, 0, 3) + for _, name := range s.f.sectionList { + if strings.HasPrefix(name, prefix) { + children = append(children, s.f.sections[name]) + } + } + return children +} diff --git a/vendor/github.com/go-ini/ini/struct.go b/vendor/github.com/go-ini/ini/struct.go new file mode 100644 index 00000000..9719dc69 --- /dev/null +++ b/vendor/github.com/go-ini/ini/struct.go @@ -0,0 +1,512 @@ +// Copyright 2014 Unknwon +// +// Licensed under the Apache License, Version 2.0 (the "License"): you may +// not use this file except in compliance with the License. You may obtain +// a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations +// under the License. + +package ini + +import ( + "bytes" + "errors" + "fmt" + "reflect" + "strings" + "time" + "unicode" +) + +// NameMapper represents a ini tag name mapper. +type NameMapper func(string) string + +// Built-in name getters. +var ( + // AllCapsUnderscore converts to format ALL_CAPS_UNDERSCORE. + AllCapsUnderscore NameMapper = func(raw string) string { + newstr := make([]rune, 0, len(raw)) + for i, chr := range raw { + if isUpper := 'A' <= chr && chr <= 'Z'; isUpper { + if i > 0 { + newstr = append(newstr, '_') + } + } + newstr = append(newstr, unicode.ToUpper(chr)) + } + return string(newstr) + } + // TitleUnderscore converts to format title_underscore. + TitleUnderscore NameMapper = func(raw string) string { + newstr := make([]rune, 0, len(raw)) + for i, chr := range raw { + if isUpper := 'A' <= chr && chr <= 'Z'; isUpper { + if i > 0 { + newstr = append(newstr, '_') + } + chr -= ('A' - 'a') + } + newstr = append(newstr, chr) + } + return string(newstr) + } +) + +func (s *Section) parseFieldName(raw, actual string) string { + if len(actual) > 0 { + return actual + } + if s.f.NameMapper != nil { + return s.f.NameMapper(raw) + } + return raw +} + +func parseDelim(actual string) string { + if len(actual) > 0 { + return actual + } + return "," +} + +var reflectTime = reflect.TypeOf(time.Now()).Kind() + +// setSliceWithProperType sets proper values to slice based on its type. 
+func setSliceWithProperType(key *Key, field reflect.Value, delim string, allowShadow, isStrict bool) error { + var strs []string + if allowShadow { + strs = key.StringsWithShadows(delim) + } else { + strs = key.Strings(delim) + } + + numVals := len(strs) + if numVals == 0 { + return nil + } + + var vals interface{} + var err error + + sliceOf := field.Type().Elem().Kind() + switch sliceOf { + case reflect.String: + vals = strs + case reflect.Int: + vals, err = key.parseInts(strs, true, false) + case reflect.Int64: + vals, err = key.parseInt64s(strs, true, false) + case reflect.Uint: + vals, err = key.parseUints(strs, true, false) + case reflect.Uint64: + vals, err = key.parseUint64s(strs, true, false) + case reflect.Float64: + vals, err = key.parseFloat64s(strs, true, false) + case reflectTime: + vals, err = key.parseTimesFormat(time.RFC3339, strs, true, false) + default: + return fmt.Errorf("unsupported type '[]%s'", sliceOf) + } + if err != nil && isStrict { + return err + } + + slice := reflect.MakeSlice(field.Type(), numVals, numVals) + for i := 0; i < numVals; i++ { + switch sliceOf { + case reflect.String: + slice.Index(i).Set(reflect.ValueOf(vals.([]string)[i])) + case reflect.Int: + slice.Index(i).Set(reflect.ValueOf(vals.([]int)[i])) + case reflect.Int64: + slice.Index(i).Set(reflect.ValueOf(vals.([]int64)[i])) + case reflect.Uint: + slice.Index(i).Set(reflect.ValueOf(vals.([]uint)[i])) + case reflect.Uint64: + slice.Index(i).Set(reflect.ValueOf(vals.([]uint64)[i])) + case reflect.Float64: + slice.Index(i).Set(reflect.ValueOf(vals.([]float64)[i])) + case reflectTime: + slice.Index(i).Set(reflect.ValueOf(vals.([]time.Time)[i])) + } + } + field.Set(slice) + return nil +} + +func wrapStrictError(err error, isStrict bool) error { + if isStrict { + return err + } + return nil +} + +// setWithProperType sets proper value to field based on its type, +// but it does not return error for failing parsing, +// because we want to use default value that is already assigned to strcut. 
+func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim string, allowShadow, isStrict bool) error { + switch t.Kind() { + case reflect.String: + if len(key.String()) == 0 { + return nil + } + field.SetString(key.String()) + case reflect.Bool: + boolVal, err := key.Bool() + if err != nil { + return wrapStrictError(err, isStrict) + } + field.SetBool(boolVal) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + durationVal, err := key.Duration() + // Skip zero value + if err == nil && int64(durationVal) > 0 { + field.Set(reflect.ValueOf(durationVal)) + return nil + } + + intVal, err := key.Int64() + if err != nil { + return wrapStrictError(err, isStrict) + } + field.SetInt(intVal) + // byte is an alias for uint8, so supporting uint8 breaks support for byte + case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uint64: + durationVal, err := key.Duration() + // Skip zero value + if err == nil && int(durationVal) > 0 { + field.Set(reflect.ValueOf(durationVal)) + return nil + } + + uintVal, err := key.Uint64() + if err != nil { + return wrapStrictError(err, isStrict) + } + field.SetUint(uintVal) + + case reflect.Float32, reflect.Float64: + floatVal, err := key.Float64() + if err != nil { + return wrapStrictError(err, isStrict) + } + field.SetFloat(floatVal) + case reflectTime: + timeVal, err := key.Time() + if err != nil { + return wrapStrictError(err, isStrict) + } + field.Set(reflect.ValueOf(timeVal)) + case reflect.Slice: + return setSliceWithProperType(key, field, delim, allowShadow, isStrict) + default: + return fmt.Errorf("unsupported type '%s'", t) + } + return nil +} + +func parseTagOptions(tag string) (rawName string, omitEmpty bool, allowShadow bool) { + opts := strings.SplitN(tag, ",", 3) + rawName = opts[0] + if len(opts) > 1 { + omitEmpty = opts[1] == "omitempty" + } + if len(opts) > 2 { + allowShadow = opts[2] == "allowshadow" + } + return rawName, omitEmpty, allowShadow +} + +func (s *Section) mapTo(val reflect.Value, isStrict bool) error { + if val.Kind() == reflect.Ptr { + val = val.Elem() + } + typ := val.Type() + + for i := 0; i < typ.NumField(); i++ { + field := val.Field(i) + tpField := typ.Field(i) + + tag := tpField.Tag.Get("ini") + if tag == "-" { + continue + } + + rawName, _, allowShadow := parseTagOptions(tag) + fieldName := s.parseFieldName(tpField.Name, rawName) + if len(fieldName) == 0 || !field.CanSet() { + continue + } + + isAnonymous := tpField.Type.Kind() == reflect.Ptr && tpField.Anonymous + isStruct := tpField.Type.Kind() == reflect.Struct + if isAnonymous { + field.Set(reflect.New(tpField.Type.Elem())) + } + + if isAnonymous || isStruct { + if sec, err := s.f.GetSection(fieldName); err == nil { + if err = sec.mapTo(field, isStrict); err != nil { + return fmt.Errorf("error mapping field(%s): %v", fieldName, err) + } + continue + } + } + + if key, err := s.GetKey(fieldName); err == nil { + delim := parseDelim(tpField.Tag.Get("delim")) + if err = setWithProperType(tpField.Type, key, field, delim, allowShadow, isStrict); err != nil { + return fmt.Errorf("error mapping field(%s): %v", fieldName, err) + } + } + } + return nil +} + +// MapTo maps section to given struct. 
+func (s *Section) MapTo(v interface{}) error { + typ := reflect.TypeOf(v) + val := reflect.ValueOf(v) + if typ.Kind() == reflect.Ptr { + typ = typ.Elem() + val = val.Elem() + } else { + return errors.New("cannot map to non-pointer struct") + } + + return s.mapTo(val, false) +} + +// MapTo maps section to given struct in strict mode, +// which returns all possible error including value parsing error. +func (s *Section) StrictMapTo(v interface{}) error { + typ := reflect.TypeOf(v) + val := reflect.ValueOf(v) + if typ.Kind() == reflect.Ptr { + typ = typ.Elem() + val = val.Elem() + } else { + return errors.New("cannot map to non-pointer struct") + } + + return s.mapTo(val, true) +} + +// MapTo maps file to given struct. +func (f *File) MapTo(v interface{}) error { + return f.Section("").MapTo(v) +} + +// MapTo maps file to given struct in strict mode, +// which returns all possible error including value parsing error. +func (f *File) StrictMapTo(v interface{}) error { + return f.Section("").StrictMapTo(v) +} + +// MapTo maps data sources to given struct with name mapper. +func MapToWithMapper(v interface{}, mapper NameMapper, source interface{}, others ...interface{}) error { + cfg, err := Load(source, others...) + if err != nil { + return err + } + cfg.NameMapper = mapper + return cfg.MapTo(v) +} + +// StrictMapToWithMapper maps data sources to given struct with name mapper in strict mode, +// which returns all possible error including value parsing error. +func StrictMapToWithMapper(v interface{}, mapper NameMapper, source interface{}, others ...interface{}) error { + cfg, err := Load(source, others...) + if err != nil { + return err + } + cfg.NameMapper = mapper + return cfg.StrictMapTo(v) +} + +// MapTo maps data sources to given struct. +func MapTo(v, source interface{}, others ...interface{}) error { + return MapToWithMapper(v, nil, source, others...) +} + +// StrictMapTo maps data sources to given struct in strict mode, +// which returns all possible error including value parsing error. +func StrictMapTo(v, source interface{}, others ...interface{}) error { + return StrictMapToWithMapper(v, nil, source, others...) +} + +// reflectSliceWithProperType does the opposite thing as setSliceWithProperType. +func reflectSliceWithProperType(key *Key, field reflect.Value, delim string) error { + slice := field.Slice(0, field.Len()) + if field.Len() == 0 { + return nil + } + + var buf bytes.Buffer + sliceOf := field.Type().Elem().Kind() + for i := 0; i < field.Len(); i++ { + switch sliceOf { + case reflect.String: + buf.WriteString(slice.Index(i).String()) + case reflect.Int, reflect.Int64: + buf.WriteString(fmt.Sprint(slice.Index(i).Int())) + case reflect.Uint, reflect.Uint64: + buf.WriteString(fmt.Sprint(slice.Index(i).Uint())) + case reflect.Float64: + buf.WriteString(fmt.Sprint(slice.Index(i).Float())) + case reflectTime: + buf.WriteString(slice.Index(i).Interface().(time.Time).Format(time.RFC3339)) + default: + return fmt.Errorf("unsupported type '[]%s'", sliceOf) + } + buf.WriteString(delim) + } + key.SetValue(buf.String()[:buf.Len()-1]) + return nil +} + +// reflectWithProperType does the opposite thing as setWithProperType. 
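MapTo and StrictMapTo above drive setWithProperType through struct tags: `ini` names the key, `delim` sets the slice separator, and `-` skips a field. The Config type and values below are invented for illustration:

package main

import (
	"fmt"
	"time"

	"github.com/go-ini/ini"
)

type Config struct {
	Name     string        `ini:"name"`
	Timeout  time.Duration `ini:"timeout"`
	Replicas []int         `ini:"replicas" delim:","`
	Ignored  string        `ini:"-"`
}

func main() {
	src := []byte("name = demo\ntimeout = 15s\nreplicas = 1,2,3")

	var c Config
	if err := ini.MapTo(&c, src); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", c) // {Name:demo Timeout:15s Replicas:[1 2 3] Ignored:}

	// StrictMapTo surfaces value-parsing errors instead of silently keeping
	// the struct's zero values.
	err := ini.StrictMapTo(&c, []byte("timeout = not-a-duration"))
	fmt.Println(err != nil) // true
}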
+func reflectWithProperType(t reflect.Type, key *Key, field reflect.Value, delim string) error { + switch t.Kind() { + case reflect.String: + key.SetValue(field.String()) + case reflect.Bool: + key.SetValue(fmt.Sprint(field.Bool())) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + key.SetValue(fmt.Sprint(field.Int())) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + key.SetValue(fmt.Sprint(field.Uint())) + case reflect.Float32, reflect.Float64: + key.SetValue(fmt.Sprint(field.Float())) + case reflectTime: + key.SetValue(fmt.Sprint(field.Interface().(time.Time).Format(time.RFC3339))) + case reflect.Slice: + return reflectSliceWithProperType(key, field, delim) + default: + return fmt.Errorf("unsupported type '%s'", t) + } + return nil +} + +// CR: copied from encoding/json/encode.go with modifications of time.Time support. +// TODO: add more test coverage. +func isEmptyValue(v reflect.Value) bool { + switch v.Kind() { + case reflect.Array, reflect.Map, reflect.Slice, reflect.String: + return v.Len() == 0 + case reflect.Bool: + return !v.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Interface, reflect.Ptr: + return v.IsNil() + case reflectTime: + t, ok := v.Interface().(time.Time) + return ok && t.IsZero() + } + return false +} + +func (s *Section) reflectFrom(val reflect.Value) error { + if val.Kind() == reflect.Ptr { + val = val.Elem() + } + typ := val.Type() + + for i := 0; i < typ.NumField(); i++ { + field := val.Field(i) + tpField := typ.Field(i) + + tag := tpField.Tag.Get("ini") + if tag == "-" { + continue + } + + opts := strings.SplitN(tag, ",", 2) + if len(opts) == 2 && opts[1] == "omitempty" && isEmptyValue(field) { + continue + } + + fieldName := s.parseFieldName(tpField.Name, opts[0]) + if len(fieldName) == 0 || !field.CanSet() { + continue + } + + if (tpField.Type.Kind() == reflect.Ptr && tpField.Anonymous) || + (tpField.Type.Kind() == reflect.Struct && tpField.Type.Name() != "Time") { + // Note: The only error here is section doesn't exist. + sec, err := s.f.GetSection(fieldName) + if err != nil { + // Note: fieldName can never be empty here, ignore error. + sec, _ = s.f.NewSection(fieldName) + } + + // Add comment from comment tag + if len(sec.Comment) == 0 { + sec.Comment = tpField.Tag.Get("comment") + } + + if err = sec.reflectFrom(field); err != nil { + return fmt.Errorf("error reflecting field (%s): %v", fieldName, err) + } + continue + } + + // Note: Same reason as secion. + key, err := s.GetKey(fieldName) + if err != nil { + key, _ = s.NewKey(fieldName, "") + } + + // Add comment from comment tag + if len(key.Comment) == 0 { + key.Comment = tpField.Tag.Get("comment") + } + + if err = reflectWithProperType(tpField.Type, key, field, parseDelim(tpField.Tag.Get("delim"))); err != nil { + return fmt.Errorf("error reflecting field (%s): %v", fieldName, err) + } + + } + return nil +} + +// ReflectFrom reflects secion from given struct. 
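The reflect* helpers above serialize a struct back into keys. The sketch below shows that reverse direction, assuming the package's Empty constructor and File.WriteTo writer, which live in portions of the package outside this hunk; the Server type is invented:

package main

import (
	"os"

	"github.com/go-ini/ini"
)

type Server struct {
	Host  string `ini:"host"`
	Ports []int  `ini:"ports" delim:","`
	Debug bool   `ini:"debug"`
}

func main() {
	cfg := ini.Empty()
	srv := Server{Host: "127.0.0.1", Ports: []int{80, 443}, Debug: true}

	if err := ini.ReflectFrom(cfg, &srv); err != nil {
		panic(err)
	}
	cfg.WriteTo(os.Stdout)
	// host  = 127.0.0.1
	// ports = 80,443
	// debug = true
}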
+func (s *Section) ReflectFrom(v interface{}) error { + typ := reflect.TypeOf(v) + val := reflect.ValueOf(v) + if typ.Kind() == reflect.Ptr { + typ = typ.Elem() + val = val.Elem() + } else { + return errors.New("cannot reflect from non-pointer struct") + } + + return s.reflectFrom(val) +} + +// ReflectFrom reflects file from given struct. +func (f *File) ReflectFrom(v interface{}) error { + return f.Section("").ReflectFrom(v) +} + +// ReflectFrom reflects data sources from given struct with name mapper. +func ReflectFromWithMapper(cfg *File, v interface{}, mapper NameMapper) error { + cfg.NameMapper = mapper + return cfg.ReflectFrom(v) +} + +// ReflectFrom reflects data sources from given struct. +func ReflectFrom(cfg *File, v interface{}) error { + return ReflectFromWithMapper(cfg, v, nil) +} diff --git a/vendor/github.com/golang/glog/LICENSE b/vendor/github.com/golang/glog/LICENSE new file mode 100644 index 00000000..37ec93a1 --- /dev/null +++ b/vendor/github.com/golang/glog/LICENSE @@ -0,0 +1,191 @@ +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and +distribution as defined by Sections 1 through 9 of this document. + +"Licensor" shall mean the copyright owner or entity authorized by the copyright +owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other entities +that control, are controlled by, or are under common control with that entity. +For the purposes of this definition, "control" means (i) the power, direct or +indirect, to cause the direction or management of such entity, whether by +contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the +outstanding shares, or (iii) beneficial ownership of such entity. + +"You" (or "Your") shall mean an individual or Legal Entity exercising +permissions granted by this License. + +"Source" form shall mean the preferred form for making modifications, including +but not limited to software source code, documentation source, and configuration +files. + +"Object" form shall mean any form resulting from mechanical transformation or +translation of a Source form, including but not limited to compiled object code, +generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, made +available under the License, as indicated by a copyright notice that is included +in or attached to the work (an example is provided in the Appendix below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that +is based on (or derived from) the Work and for which the editorial revisions, +annotations, elaborations, or other modifications represent, as a whole, an +original work of authorship. For the purposes of this License, Derivative Works +shall not include works that remain separable from, or merely link (or bind by +name) to the interfaces of, the Work and Derivative Works thereof. + +"Contribution" shall mean any work of authorship, including the original version +of the Work and any modifications or additions to that Work or Derivative Works +thereof, that is intentionally submitted to Licensor for inclusion in the Work +by the copyright owner or by an individual or Legal Entity authorized to submit +on behalf of the copyright owner. 
For the purposes of this definition, +"submitted" means any form of electronic, verbal, or written communication sent +to the Licensor or its representatives, including but not limited to +communication on electronic mailing lists, source code control systems, and +issue tracking systems that are managed by, or on behalf of, the Licensor for +the purpose of discussing and improving the Work, but excluding communication +that is conspicuously marked or otherwise designated in writing by the copyright +owner as "Not a Contribution." + +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf +of whom a Contribution has been received by Licensor and subsequently +incorporated within the Work. + +2. Grant of Copyright License. + +Subject to the terms and conditions of this License, each Contributor hereby +grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, +irrevocable copyright license to reproduce, prepare Derivative Works of, +publicly display, publicly perform, sublicense, and distribute the Work and such +Derivative Works in Source or Object form. + +3. Grant of Patent License. + +Subject to the terms and conditions of this License, each Contributor hereby +grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, +irrevocable (except as stated in this section) patent license to make, have +made, use, offer to sell, sell, import, and otherwise transfer the Work, where +such license applies only to those patent claims licensable by such Contributor +that are necessarily infringed by their Contribution(s) alone or by combination +of their Contribution(s) with the Work to which such Contribution(s) was +submitted. If You institute patent litigation against any entity (including a +cross-claim or counterclaim in a lawsuit) alleging that the Work or a +Contribution incorporated within the Work constitutes direct or contributory +patent infringement, then any patent licenses granted to You under this License +for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. + +You may reproduce and distribute copies of the Work or Derivative Works thereof +in any medium, with or without modifications, and in Source or Object form, +provided that You meet the following conditions: + +You must give any other recipients of the Work or Derivative Works a copy of +this License; and +You must cause any modified files to carry prominent notices stating that You +changed the files; and +You must retain, in the Source form of any Derivative Works that You distribute, +all copyright, patent, trademark, and attribution notices from the Source form +of the Work, excluding those notices that do not pertain to any part of the +Derivative Works; and +If the Work includes a "NOTICE" text file as part of its distribution, then any +Derivative Works that You distribute must include a readable copy of the +attribution notices contained within such NOTICE file, excluding those notices +that do not pertain to any part of the Derivative Works, in at least one of the +following places: within a NOTICE text file distributed as part of the +Derivative Works; within the Source form or documentation, if provided along +with the Derivative Works; or, within a display generated by the Derivative +Works, if and wherever such third-party notices normally appear. The contents of +the NOTICE file are for informational purposes only and do not modify the +License. 
You may add Your own attribution notices within Derivative Works that +You distribute, alongside or as an addendum to the NOTICE text from the Work, +provided that such additional attribution notices cannot be construed as +modifying the License. +You may add Your own copyright statement to Your modifications and may provide +additional or different license terms and conditions for use, reproduction, or +distribution of Your modifications, or for any such Derivative Works as a whole, +provided Your use, reproduction, and distribution of the Work otherwise complies +with the conditions stated in this License. + +5. Submission of Contributions. + +Unless You explicitly state otherwise, any Contribution intentionally submitted +for inclusion in the Work by You to the Licensor shall be under the terms and +conditions of this License, without any additional terms or conditions. +Notwithstanding the above, nothing herein shall supersede or modify the terms of +any separate license agreement you may have executed with Licensor regarding +such Contributions. + +6. Trademarks. + +This License does not grant permission to use the trade names, trademarks, +service marks, or product names of the Licensor, except as required for +reasonable and customary use in describing the origin of the Work and +reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. + +Unless required by applicable law or agreed to in writing, Licensor provides the +Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, +including, without limitation, any warranties or conditions of TITLE, +NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are +solely responsible for determining the appropriateness of using or +redistributing the Work and assume any risks associated with Your exercise of +permissions under this License. + +8. Limitation of Liability. + +In no event and under no legal theory, whether in tort (including negligence), +contract, or otherwise, unless required by applicable law (such as deliberate +and grossly negligent acts) or agreed to in writing, shall any Contributor be +liable to You for damages, including any direct, indirect, special, incidental, +or consequential damages of any character arising as a result of this License or +out of the use or inability to use the Work (including but not limited to +damages for loss of goodwill, work stoppage, computer failure or malfunction, or +any and all other commercial damages or losses), even if such Contributor has +been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. + +While redistributing the Work or Derivative Works thereof, You may choose to +offer, and charge a fee for, acceptance of support, warranty, indemnity, or +other liability obligations and/or rights consistent with this License. However, +in accepting such obligations, You may act only on Your own behalf and on Your +sole responsibility, not on behalf of any other Contributor, and only if You +agree to indemnify, defend, and hold each Contributor harmless for any liability +incurred by, or claims asserted against, such Contributor by reason of your +accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work + +To apply the Apache License to your work, attach the following boilerplate +notice, with the fields enclosed by brackets "[]" replaced with your own +identifying information. (Don't include the brackets!) The text should be +enclosed in the appropriate comment syntax for the file format. We also +recommend that a file or class name and description of purpose be included on +the same "printed page" as the copyright notice for easier identification within +third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/golang/glog/glog.go b/vendor/github.com/golang/glog/glog.go new file mode 100644 index 00000000..54bd7afd --- /dev/null +++ b/vendor/github.com/golang/glog/glog.go @@ -0,0 +1,1180 @@ +// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/ +// +// Copyright 2013 Google Inc. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. +// It provides functions Info, Warning, Error, Fatal, plus formatting variants such as +// Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. +// +// Basic examples: +// +// glog.Info("Prepare to repel boarders") +// +// glog.Fatalf("Initialization failed: %s", err) +// +// See the documentation for the V function for an explanation of these examples: +// +// if glog.V(2) { +// glog.Info("Starting transaction...") +// } +// +// glog.V(2).Infoln("Processed", nItems, "elements") +// +// Log output is buffered and written periodically using Flush. Programs +// should call Flush before exiting to guarantee all log output is written. +// +// By default, all log statements write to files in a temporary directory. +// This package provides several flags that modify this behavior. +// As a result, flag.Parse must be called before any logging is done. +// +// -logtostderr=false +// Logs are written to standard error instead of to files. +// -alsologtostderr=false +// Logs are written to standard error as well as to files. +// -stderrthreshold=ERROR +// Log events at or above this severity are logged to standard +// error as well as to files. +// -log_dir="" +// Log files will be written to this directory instead of the +// default temporary directory. +// +// Other flags provide aids to debugging. 
+// +// -log_backtrace_at="" +// When set to a file and line number holding a logging statement, +// such as +// -log_backtrace_at=gopherflakes.go:234 +// a stack trace will be written to the Info log whenever execution +// hits that statement. (Unlike with -vmodule, the ".go" must be +// present.) +// -v=0 +// Enable V-leveled logging at the specified level. +// -vmodule="" +// The syntax of the argument is a comma-separated list of pattern=N, +// where pattern is a literal file name (minus the ".go" suffix) or +// "glob" pattern and N is a V level. For instance, +// -vmodule=gopher*=3 +// sets the V level to 3 in all Go files whose names begin "gopher". +// +package glog + +import ( + "bufio" + "bytes" + "errors" + "flag" + "fmt" + "io" + stdLog "log" + "os" + "path/filepath" + "runtime" + "strconv" + "strings" + "sync" + "sync/atomic" + "time" +) + +// severity identifies the sort of log: info, warning etc. It also implements +// the flag.Value interface. The -stderrthreshold flag is of type severity and +// should be modified only through the flag.Value interface. The values match +// the corresponding constants in C++. +type severity int32 // sync/atomic int32 + +// These constants identify the log levels in order of increasing severity. +// A message written to a high-severity log file is also written to each +// lower-severity log file. +const ( + infoLog severity = iota + warningLog + errorLog + fatalLog + numSeverity = 4 +) + +const severityChar = "IWEF" + +var severityName = []string{ + infoLog: "INFO", + warningLog: "WARNING", + errorLog: "ERROR", + fatalLog: "FATAL", +} + +// get returns the value of the severity. +func (s *severity) get() severity { + return severity(atomic.LoadInt32((*int32)(s))) +} + +// set sets the value of the severity. +func (s *severity) set(val severity) { + atomic.StoreInt32((*int32)(s), int32(val)) +} + +// String is part of the flag.Value interface. +func (s *severity) String() string { + return strconv.FormatInt(int64(*s), 10) +} + +// Get is part of the flag.Value interface. +func (s *severity) Get() interface{} { + return *s +} + +// Set is part of the flag.Value interface. +func (s *severity) Set(value string) error { + var threshold severity + // Is it a known name? + if v, ok := severityByName(value); ok { + threshold = v + } else { + v, err := strconv.Atoi(value) + if err != nil { + return err + } + threshold = severity(v) + } + logging.stderrThreshold.set(threshold) + return nil +} + +func severityByName(s string) (severity, bool) { + s = strings.ToUpper(s) + for i, name := range severityName { + if name == s { + return severity(i), true + } + } + return 0, false +} + +// OutputStats tracks the number of output lines and bytes written. +type OutputStats struct { + lines int64 + bytes int64 +} + +// Lines returns the number of lines written. +func (s *OutputStats) Lines() int64 { + return atomic.LoadInt64(&s.lines) +} + +// Bytes returns the number of bytes written. +func (s *OutputStats) Bytes() int64 { + return atomic.LoadInt64(&s.bytes) +} + +// Stats tracks the number of lines of output and number of bytes +// per severity level. Values must be read with atomic.LoadInt64. +var Stats struct { + Info, Warning, Error OutputStats +} + +var severityStats = [numSeverity]*OutputStats{ + infoLog: &Stats.Info, + warningLog: &Stats.Warning, + errorLog: &Stats.Error, +} + +// Level is exported because it appears in the arguments to V and is +// the type of the v flag, which can be set programmatically. 
+// It's a distinct type because we want to discriminate it from logType. +// Variables of type level are only changed under logging.mu. +// The -v flag is read only with atomic ops, so the state of the logging +// module is consistent. + +// Level is treated as a sync/atomic int32. + +// Level specifies a level of verbosity for V logs. *Level implements +// flag.Value; the -v flag is of type Level and should be modified +// only through the flag.Value interface. +type Level int32 + +// get returns the value of the Level. +func (l *Level) get() Level { + return Level(atomic.LoadInt32((*int32)(l))) +} + +// set sets the value of the Level. +func (l *Level) set(val Level) { + atomic.StoreInt32((*int32)(l), int32(val)) +} + +// String is part of the flag.Value interface. +func (l *Level) String() string { + return strconv.FormatInt(int64(*l), 10) +} + +// Get is part of the flag.Value interface. +func (l *Level) Get() interface{} { + return *l +} + +// Set is part of the flag.Value interface. +func (l *Level) Set(value string) error { + v, err := strconv.Atoi(value) + if err != nil { + return err + } + logging.mu.Lock() + defer logging.mu.Unlock() + logging.setVState(Level(v), logging.vmodule.filter, false) + return nil +} + +// moduleSpec represents the setting of the -vmodule flag. +type moduleSpec struct { + filter []modulePat +} + +// modulePat contains a filter for the -vmodule flag. +// It holds a verbosity level and a file pattern to match. +type modulePat struct { + pattern string + literal bool // The pattern is a literal string + level Level +} + +// match reports whether the file matches the pattern. It uses a string +// comparison if the pattern contains no metacharacters. +func (m *modulePat) match(file string) bool { + if m.literal { + return file == m.pattern + } + match, _ := filepath.Match(m.pattern, file) + return match +} + +func (m *moduleSpec) String() string { + // Lock because the type is not atomic. TODO: clean this up. + logging.mu.Lock() + defer logging.mu.Unlock() + var b bytes.Buffer + for i, f := range m.filter { + if i > 0 { + b.WriteRune(',') + } + fmt.Fprintf(&b, "%s=%d", f.pattern, f.level) + } + return b.String() +} + +// Get is part of the (Go 1.2) flag.Getter interface. It always returns nil for this flag type since the +// struct is not exported. +func (m *moduleSpec) Get() interface{} { + return nil +} + +var errVmoduleSyntax = errors.New("syntax error: expect comma-separated list of filename=N") + +// Syntax: -vmodule=recordio=2,file=1,gfs*=3 +func (m *moduleSpec) Set(value string) error { + var filter []modulePat + for _, pat := range strings.Split(value, ",") { + if len(pat) == 0 { + // Empty strings such as from a trailing comma can be ignored. + continue + } + patLev := strings.Split(pat, "=") + if len(patLev) != 2 || len(patLev[0]) == 0 || len(patLev[1]) == 0 { + return errVmoduleSyntax + } + pattern := patLev[0] + v, err := strconv.Atoi(patLev[1]) + if err != nil { + return errors.New("syntax error: expect comma-separated list of filename=N") + } + if v < 0 { + return errors.New("negative value for vmodule level") + } + if v == 0 { + continue // Ignore. It's harmless but no point in paying the overhead. + } + // TODO: check syntax of filter? 
+ filter = append(filter, modulePat{pattern, isLiteral(pattern), Level(v)}) + } + logging.mu.Lock() + defer logging.mu.Unlock() + logging.setVState(logging.verbosity, filter, true) + return nil +} + +// isLiteral reports whether the pattern is a literal string, that is, has no metacharacters +// that require filepath.Match to be called to match the pattern. +func isLiteral(pattern string) bool { + return !strings.ContainsAny(pattern, `\*?[]`) +} + +// traceLocation represents the setting of the -log_backtrace_at flag. +type traceLocation struct { + file string + line int +} + +// isSet reports whether the trace location has been specified. +// logging.mu is held. +func (t *traceLocation) isSet() bool { + return t.line > 0 +} + +// match reports whether the specified file and line matches the trace location. +// The argument file name is the full path, not the basename specified in the flag. +// logging.mu is held. +func (t *traceLocation) match(file string, line int) bool { + if t.line != line { + return false + } + if i := strings.LastIndex(file, "/"); i >= 0 { + file = file[i+1:] + } + return t.file == file +} + +func (t *traceLocation) String() string { + // Lock because the type is not atomic. TODO: clean this up. + logging.mu.Lock() + defer logging.mu.Unlock() + return fmt.Sprintf("%s:%d", t.file, t.line) +} + +// Get is part of the (Go 1.2) flag.Getter interface. It always returns nil for this flag type since the +// struct is not exported +func (t *traceLocation) Get() interface{} { + return nil +} + +var errTraceSyntax = errors.New("syntax error: expect file.go:234") + +// Syntax: -log_backtrace_at=gopherflakes.go:234 +// Note that unlike vmodule the file extension is included here. +func (t *traceLocation) Set(value string) error { + if value == "" { + // Unset. + t.line = 0 + t.file = "" + } + fields := strings.Split(value, ":") + if len(fields) != 2 { + return errTraceSyntax + } + file, line := fields[0], fields[1] + if !strings.Contains(file, ".") { + return errTraceSyntax + } + v, err := strconv.Atoi(line) + if err != nil { + return errTraceSyntax + } + if v <= 0 { + return errors.New("negative or zero value for level") + } + logging.mu.Lock() + defer logging.mu.Unlock() + t.line = v + t.file = file + return nil +} + +// flushSyncWriter is the interface satisfied by logging destinations. +type flushSyncWriter interface { + Flush() error + Sync() error + io.Writer +} + +func init() { + flag.BoolVar(&logging.toStderr, "logtostderr", false, "log to standard error instead of files") + flag.BoolVar(&logging.alsoToStderr, "alsologtostderr", false, "log to standard error as well as files") + flag.Var(&logging.verbosity, "v", "log level for V logs") + flag.Var(&logging.stderrThreshold, "stderrthreshold", "logs at or above this threshold go to stderr") + flag.Var(&logging.vmodule, "vmodule", "comma-separated list of pattern=N settings for file-filtered logging") + flag.Var(&logging.traceLocation, "log_backtrace_at", "when logging hits line file:N, emit a stack trace") + + // Default stderrThreshold is ERROR. + logging.stderrThreshold = errorLog + + logging.setVState(0, nil, false) + go logging.flushDaemon() +} + +// Flush flushes all pending log I/O. +func Flush() { + logging.lockAndFlushAll() +} + +// loggingT collects all the global state of the logging setup. +type loggingT struct { + // Boolean flags. Not handled atomically because the flag.Value interface + // does not let us avoid the =true, and that shorthand is necessary for + // compatibility. 
TODO: does this matter enough to fix? Seems unlikely. + toStderr bool // The -logtostderr flag. + alsoToStderr bool // The -alsologtostderr flag. + + // Level flag. Handled atomically. + stderrThreshold severity // The -stderrthreshold flag. + + // freeList is a list of byte buffers, maintained under freeListMu. + freeList *buffer + // freeListMu maintains the free list. It is separate from the main mutex + // so buffers can be grabbed and printed to without holding the main lock, + // for better parallelization. + freeListMu sync.Mutex + + // mu protects the remaining elements of this structure and is + // used to synchronize logging. + mu sync.Mutex + // file holds writer for each of the log types. + file [numSeverity]flushSyncWriter + // pcs is used in V to avoid an allocation when computing the caller's PC. + pcs [1]uintptr + // vmap is a cache of the V Level for each V() call site, identified by PC. + // It is wiped whenever the vmodule flag changes state. + vmap map[uintptr]Level + // filterLength stores the length of the vmodule filter chain. If greater + // than zero, it means vmodule is enabled. It may be read safely + // using sync.LoadInt32, but is only modified under mu. + filterLength int32 + // traceLocation is the state of the -log_backtrace_at flag. + traceLocation traceLocation + // These flags are modified only under lock, although verbosity may be fetched + // safely using atomic.LoadInt32. + vmodule moduleSpec // The state of the -vmodule flag. + verbosity Level // V logging level, the value of the -v flag/ +} + +// buffer holds a byte Buffer for reuse. The zero value is ready for use. +type buffer struct { + bytes.Buffer + tmp [64]byte // temporary byte array for creating headers. + next *buffer +} + +var logging loggingT + +// setVState sets a consistent state for V logging. +// l.mu is held. +func (l *loggingT) setVState(verbosity Level, filter []modulePat, setFilter bool) { + // Turn verbosity off so V will not fire while we are in transition. + logging.verbosity.set(0) + // Ditto for filter length. + atomic.StoreInt32(&logging.filterLength, 0) + + // Set the new filters and wipe the pc->Level map if the filter has changed. + if setFilter { + logging.vmodule.filter = filter + logging.vmap = make(map[uintptr]Level) + } + + // Things are consistent now, so enable filtering and verbosity. + // They are enabled in order opposite to that in V. + atomic.StoreInt32(&logging.filterLength, int32(len(filter))) + logging.verbosity.set(verbosity) +} + +// getBuffer returns a new, ready-to-use buffer. +func (l *loggingT) getBuffer() *buffer { + l.freeListMu.Lock() + b := l.freeList + if b != nil { + l.freeList = b.next + } + l.freeListMu.Unlock() + if b == nil { + b = new(buffer) + } else { + b.next = nil + b.Reset() + } + return b +} + +// putBuffer returns a buffer to the free list. +func (l *loggingT) putBuffer(b *buffer) { + if b.Len() >= 256 { + // Let big buffers die a natural death. + return + } + l.freeListMu.Lock() + b.next = l.freeList + l.freeList = b + l.freeListMu.Unlock() +} + +var timeNow = time.Now // Stubbed out for testing. + +/* +header formats a log header as defined by the C++ implementation. +It returns a buffer containing the formatted header and the user's file and line number. +The depth specifies how many stack frames above lives the source line to be identified in the log message. + +Log lines have this form: + Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg... 
+where the fields are defined as follows: + L A single character, representing the log level (eg 'I' for INFO) + mm The month (zero padded; ie May is '05') + dd The day (zero padded) + hh:mm:ss.uuuuuu Time in hours, minutes and fractional seconds + threadid The space-padded thread ID as returned by GetTID() + file The file name + line The line number + msg The user-supplied message +*/ +func (l *loggingT) header(s severity, depth int) (*buffer, string, int) { + _, file, line, ok := runtime.Caller(3 + depth) + if !ok { + file = "???" + line = 1 + } else { + slash := strings.LastIndex(file, "/") + if slash >= 0 { + file = file[slash+1:] + } + } + return l.formatHeader(s, file, line), file, line +} + +// formatHeader formats a log header using the provided file name and line number. +func (l *loggingT) formatHeader(s severity, file string, line int) *buffer { + now := timeNow() + if line < 0 { + line = 0 // not a real line number, but acceptable to someDigits + } + if s > fatalLog { + s = infoLog // for safety. + } + buf := l.getBuffer() + + // Avoid Fprintf, for speed. The format is so simple that we can do it quickly by hand. + // It's worth about 3X. Fprintf is hard. + _, month, day := now.Date() + hour, minute, second := now.Clock() + // Lmmdd hh:mm:ss.uuuuuu threadid file:line] + buf.tmp[0] = severityChar[s] + buf.twoDigits(1, int(month)) + buf.twoDigits(3, day) + buf.tmp[5] = ' ' + buf.twoDigits(6, hour) + buf.tmp[8] = ':' + buf.twoDigits(9, minute) + buf.tmp[11] = ':' + buf.twoDigits(12, second) + buf.tmp[14] = '.' + buf.nDigits(6, 15, now.Nanosecond()/1000, '0') + buf.tmp[21] = ' ' + buf.nDigits(7, 22, pid, ' ') // TODO: should be TID + buf.tmp[29] = ' ' + buf.Write(buf.tmp[:30]) + buf.WriteString(file) + buf.tmp[0] = ':' + n := buf.someDigits(1, line) + buf.tmp[n+1] = ']' + buf.tmp[n+2] = ' ' + buf.Write(buf.tmp[:n+3]) + return buf +} + +// Some custom tiny helper functions to print the log header efficiently. + +const digits = "0123456789" + +// twoDigits formats a zero-prefixed two-digit integer at buf.tmp[i]. +func (buf *buffer) twoDigits(i, d int) { + buf.tmp[i+1] = digits[d%10] + d /= 10 + buf.tmp[i] = digits[d%10] +} + +// nDigits formats an n-digit integer at buf.tmp[i], +// padding with pad on the left. +// It assumes d >= 0. +func (buf *buffer) nDigits(n, i, d int, pad byte) { + j := n - 1 + for ; j >= 0 && d > 0; j-- { + buf.tmp[i+j] = digits[d%10] + d /= 10 + } + for ; j >= 0; j-- { + buf.tmp[i+j] = pad + } +} + +// someDigits formats a zero-prefixed variable-width integer at buf.tmp[i]. +func (buf *buffer) someDigits(i, d int) int { + // Print into the top, then copy down. We know there's space for at least + // a 10-digit number. + j := len(buf.tmp) + for { + j-- + buf.tmp[j] = digits[d%10] + d /= 10 + if d == 0 { + break + } + } + return copy(buf.tmp[i:], buf.tmp[j:]) +} + +func (l *loggingT) println(s severity, args ...interface{}) { + buf, file, line := l.header(s, 0) + fmt.Fprintln(buf, args...) + l.output(s, buf, file, line, false) +} + +func (l *loggingT) print(s severity, args ...interface{}) { + l.printDepth(s, 1, args...) +} + +func (l *loggingT) printDepth(s severity, depth int, args ...interface{}) { + buf, file, line := l.header(s, depth) + fmt.Fprint(buf, args...) + if buf.Bytes()[buf.Len()-1] != '\n' { + buf.WriteByte('\n') + } + l.output(s, buf, file, line, false) +} + +func (l *loggingT) printf(s severity, format string, args ...interface{}) { + buf, file, line := l.header(s, 0) + fmt.Fprintf(buf, format, args...) 
+ if buf.Bytes()[buf.Len()-1] != '\n' { + buf.WriteByte('\n') + } + l.output(s, buf, file, line, false) +} + +// printWithFileLine behaves like print but uses the provided file and line number. If +// alsoLogToStderr is true, the log message always appears on standard error; it +// will also appear in the log file unless --logtostderr is set. +func (l *loggingT) printWithFileLine(s severity, file string, line int, alsoToStderr bool, args ...interface{}) { + buf := l.formatHeader(s, file, line) + fmt.Fprint(buf, args...) + if buf.Bytes()[buf.Len()-1] != '\n' { + buf.WriteByte('\n') + } + l.output(s, buf, file, line, alsoToStderr) +} + +// output writes the data to the log files and releases the buffer. +func (l *loggingT) output(s severity, buf *buffer, file string, line int, alsoToStderr bool) { + l.mu.Lock() + if l.traceLocation.isSet() { + if l.traceLocation.match(file, line) { + buf.Write(stacks(false)) + } + } + data := buf.Bytes() + if !flag.Parsed() { + os.Stderr.Write([]byte("ERROR: logging before flag.Parse: ")) + os.Stderr.Write(data) + } else if l.toStderr { + os.Stderr.Write(data) + } else { + if alsoToStderr || l.alsoToStderr || s >= l.stderrThreshold.get() { + os.Stderr.Write(data) + } + if l.file[s] == nil { + if err := l.createFiles(s); err != nil { + os.Stderr.Write(data) // Make sure the message appears somewhere. + l.exit(err) + } + } + switch s { + case fatalLog: + l.file[fatalLog].Write(data) + fallthrough + case errorLog: + l.file[errorLog].Write(data) + fallthrough + case warningLog: + l.file[warningLog].Write(data) + fallthrough + case infoLog: + l.file[infoLog].Write(data) + } + } + if s == fatalLog { + // If we got here via Exit rather than Fatal, print no stacks. + if atomic.LoadUint32(&fatalNoStacks) > 0 { + l.mu.Unlock() + timeoutFlush(10 * time.Second) + os.Exit(1) + } + // Dump all goroutine stacks before exiting. + // First, make sure we see the trace for the current goroutine on standard error. + // If -logtostderr has been specified, the loop below will do that anyway + // as the first stack in the full dump. + if !l.toStderr { + os.Stderr.Write(stacks(false)) + } + // Write the stack trace for all goroutines to the files. + trace := stacks(true) + logExitFunc = func(error) {} // If we get a write error, we'll still exit below. + for log := fatalLog; log >= infoLog; log-- { + if f := l.file[log]; f != nil { // Can be nil if -logtostderr is set. + f.Write(trace) + } + } + l.mu.Unlock() + timeoutFlush(10 * time.Second) + os.Exit(255) // C++ uses -1, which is silly because it's anded with 255 anyway. + } + l.putBuffer(buf) + l.mu.Unlock() + if stats := severityStats[s]; stats != nil { + atomic.AddInt64(&stats.lines, 1) + atomic.AddInt64(&stats.bytes, int64(len(data))) + } +} + +// timeoutFlush calls Flush and returns when it completes or after timeout +// elapses, whichever happens first. This is needed because the hooks invoked +// by Flush may deadlock when glog.Fatal is called from a hook that holds +// a lock. +func timeoutFlush(timeout time.Duration) { + done := make(chan bool, 1) + go func() { + Flush() // calls logging.lockAndFlushAll() + done <- true + }() + select { + case <-done: + case <-time.After(timeout): + fmt.Fprintln(os.Stderr, "glog: Flush took longer than", timeout) + } +} + +// stacks is a wrapper for runtime.Stack that attempts to recover the data for all goroutines. +func stacks(all bool) []byte { + // We don't know how big the traces are, so grow a few times if they don't fit. Start large, though. 
+ n := 10000 + if all { + n = 100000 + } + var trace []byte + for i := 0; i < 5; i++ { + trace = make([]byte, n) + nbytes := runtime.Stack(trace, all) + if nbytes < len(trace) { + return trace[:nbytes] + } + n *= 2 + } + return trace +} + +// logExitFunc provides a simple mechanism to override the default behavior +// of exiting on error. Used in testing and to guarantee we reach a required exit +// for fatal logs. Instead, exit could be a function rather than a method but that +// would make its use clumsier. +var logExitFunc func(error) + +// exit is called if there is trouble creating or writing log files. +// It flushes the logs and exits the program; there's no point in hanging around. +// l.mu is held. +func (l *loggingT) exit(err error) { + fmt.Fprintf(os.Stderr, "log: exiting because of error: %s\n", err) + // If logExitFunc is set, we do that instead of exiting. + if logExitFunc != nil { + logExitFunc(err) + return + } + l.flushAll() + os.Exit(2) +} + +// syncBuffer joins a bufio.Writer to its underlying file, providing access to the +// file's Sync method and providing a wrapper for the Write method that provides log +// file rotation. There are conflicting methods, so the file cannot be embedded. +// l.mu is held for all its methods. +type syncBuffer struct { + logger *loggingT + *bufio.Writer + file *os.File + sev severity + nbytes uint64 // The number of bytes written to this file +} + +func (sb *syncBuffer) Sync() error { + return sb.file.Sync() +} + +func (sb *syncBuffer) Write(p []byte) (n int, err error) { + if sb.nbytes+uint64(len(p)) >= MaxSize { + if err := sb.rotateFile(time.Now()); err != nil { + sb.logger.exit(err) + } + } + n, err = sb.Writer.Write(p) + sb.nbytes += uint64(n) + if err != nil { + sb.logger.exit(err) + } + return +} + +// rotateFile closes the syncBuffer's file and starts a new one. +func (sb *syncBuffer) rotateFile(now time.Time) error { + if sb.file != nil { + sb.Flush() + sb.file.Close() + } + var err error + sb.file, _, err = create(severityName[sb.sev], now) + sb.nbytes = 0 + if err != nil { + return err + } + + sb.Writer = bufio.NewWriterSize(sb.file, bufferSize) + + // Write header. + var buf bytes.Buffer + fmt.Fprintf(&buf, "Log file created at: %s\n", now.Format("2006/01/02 15:04:05")) + fmt.Fprintf(&buf, "Running on machine: %s\n", host) + fmt.Fprintf(&buf, "Binary: Built with %s %s for %s/%s\n", runtime.Compiler, runtime.Version(), runtime.GOOS, runtime.GOARCH) + fmt.Fprintf(&buf, "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg\n") + n, err := sb.file.Write(buf.Bytes()) + sb.nbytes += uint64(n) + return err +} + +// bufferSize sizes the buffer associated with each log file. It's large +// so that log records can accumulate without the logging thread blocking +// on disk I/O. The flushDaemon will block instead. +const bufferSize = 256 * 1024 + +// createFiles creates all the log files for severity from sev down to infoLog. +// l.mu is held. +func (l *loggingT) createFiles(sev severity) error { + now := time.Now() + // Files are created in decreasing severity order, so as soon as we find one + // has already been created, we can stop. + for s := sev; s >= infoLog && l.file[s] == nil; s-- { + sb := &syncBuffer{ + logger: l, + sev: s, + } + if err := sb.rotateFile(now); err != nil { + return err + } + l.file[s] = sb + } + return nil +} + +const flushInterval = 30 * time.Second + +// flushDaemon periodically flushes the log file buffers. 
+func (l *loggingT) flushDaemon() { + for _ = range time.NewTicker(flushInterval).C { + l.lockAndFlushAll() + } +} + +// lockAndFlushAll is like flushAll but locks l.mu first. +func (l *loggingT) lockAndFlushAll() { + l.mu.Lock() + l.flushAll() + l.mu.Unlock() +} + +// flushAll flushes all the logs and attempts to "sync" their data to disk. +// l.mu is held. +func (l *loggingT) flushAll() { + // Flush from fatal down, in case there's trouble flushing. + for s := fatalLog; s >= infoLog; s-- { + file := l.file[s] + if file != nil { + file.Flush() // ignore error + file.Sync() // ignore error + } + } +} + +// CopyStandardLogTo arranges for messages written to the Go "log" package's +// default logs to also appear in the Google logs for the named and lower +// severities. Subsequent changes to the standard log's default output location +// or format may break this behavior. +// +// Valid names are "INFO", "WARNING", "ERROR", and "FATAL". If the name is not +// recognized, CopyStandardLogTo panics. +func CopyStandardLogTo(name string) { + sev, ok := severityByName(name) + if !ok { + panic(fmt.Sprintf("log.CopyStandardLogTo(%q): unrecognized severity name", name)) + } + // Set a log format that captures the user's file and line: + // d.go:23: message + stdLog.SetFlags(stdLog.Lshortfile) + stdLog.SetOutput(logBridge(sev)) +} + +// logBridge provides the Write method that enables CopyStandardLogTo to connect +// Go's standard logs to the logs provided by this package. +type logBridge severity + +// Write parses the standard logging line and passes its components to the +// logger for severity(lb). +func (lb logBridge) Write(b []byte) (n int, err error) { + var ( + file = "???" + line = 1 + text string + ) + // Split "d.go:23: message" into "d.go", "23", and "message". + if parts := bytes.SplitN(b, []byte{':'}, 3); len(parts) != 3 || len(parts[0]) < 1 || len(parts[2]) < 1 { + text = fmt.Sprintf("bad log format: %s", b) + } else { + file = string(parts[0]) + text = string(parts[2][1:]) // skip leading space + line, err = strconv.Atoi(string(parts[1])) + if err != nil { + text = fmt.Sprintf("bad line number: %s", b) + line = 1 + } + } + // printWithFileLine with alsoToStderr=true, so standard log messages + // always appear on standard error. + logging.printWithFileLine(severity(lb), file, line, true, text) + return len(b), nil +} + +// setV computes and remembers the V level for a given PC +// when vmodule is enabled. +// File pattern matching takes the basename of the file, stripped +// of its .go suffix, and uses filepath.Match, which is a little more +// general than the *? matching used in C++. +// l.mu is held. +func (l *loggingT) setV(pc uintptr) Level { + fn := runtime.FuncForPC(pc) + file, _ := fn.FileLine(pc) + // The file is something like /a/b/c/d.go. We want just the d. + if strings.HasSuffix(file, ".go") { + file = file[:len(file)-3] + } + if slash := strings.LastIndex(file, "/"); slash >= 0 { + file = file[slash+1:] + } + for _, filter := range l.vmodule.filter { + if filter.match(file) { + l.vmap[pc] = filter.level + return filter.level + } + } + l.vmap[pc] = 0 + return 0 +} + +// Verbose is a boolean type that implements Infof (like Printf) etc. +// See the documentation of V for more information. +type Verbose bool + +// V reports whether verbosity at the call site is at least the requested level. +// The returned value is a boolean of type Verbose, which implements Info, Infoln +// and Infof. These methods will write to the Info log if called. 
+// Thus, one may write either +// if glog.V(2) { glog.Info("log this") } +// or +// glog.V(2).Info("log this") +// The second form is shorter but the first is cheaper if logging is off because it does +// not evaluate its arguments. +// +// Whether an individual call to V generates a log record depends on the setting of +// the -v and --vmodule flags; both are off by default. If the level in the call to +// V is at least the value of -v, or of -vmodule for the source file containing the +// call, the V call will log. +func V(level Level) Verbose { + // This function tries hard to be cheap unless there's work to do. + // The fast path is two atomic loads and compares. + + // Here is a cheap but safe test to see if V logging is enabled globally. + if logging.verbosity.get() >= level { + return Verbose(true) + } + + // It's off globally but it vmodule may still be set. + // Here is another cheap but safe test to see if vmodule is enabled. + if atomic.LoadInt32(&logging.filterLength) > 0 { + // Now we need a proper lock to use the logging structure. The pcs field + // is shared so we must lock before accessing it. This is fairly expensive, + // but if V logging is enabled we're slow anyway. + logging.mu.Lock() + defer logging.mu.Unlock() + if runtime.Callers(2, logging.pcs[:]) == 0 { + return Verbose(false) + } + v, ok := logging.vmap[logging.pcs[0]] + if !ok { + v = logging.setV(logging.pcs[0]) + } + return Verbose(v >= level) + } + return Verbose(false) +} + +// Info is equivalent to the global Info function, guarded by the value of v. +// See the documentation of V for usage. +func (v Verbose) Info(args ...interface{}) { + if v { + logging.print(infoLog, args...) + } +} + +// Infoln is equivalent to the global Infoln function, guarded by the value of v. +// See the documentation of V for usage. +func (v Verbose) Infoln(args ...interface{}) { + if v { + logging.println(infoLog, args...) + } +} + +// Infof is equivalent to the global Infof function, guarded by the value of v. +// See the documentation of V for usage. +func (v Verbose) Infof(format string, args ...interface{}) { + if v { + logging.printf(infoLog, format, args...) + } +} + +// Info logs to the INFO log. +// Arguments are handled in the manner of fmt.Print; a newline is appended if missing. +func Info(args ...interface{}) { + logging.print(infoLog, args...) +} + +// InfoDepth acts as Info but uses depth to determine which call frame to log. +// InfoDepth(0, "msg") is the same as Info("msg"). +func InfoDepth(depth int, args ...interface{}) { + logging.printDepth(infoLog, depth, args...) +} + +// Infoln logs to the INFO log. +// Arguments are handled in the manner of fmt.Println; a newline is appended if missing. +func Infoln(args ...interface{}) { + logging.println(infoLog, args...) +} + +// Infof logs to the INFO log. +// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing. +func Infof(format string, args ...interface{}) { + logging.printf(infoLog, format, args...) +} + +// Warning logs to the WARNING and INFO logs. +// Arguments are handled in the manner of fmt.Print; a newline is appended if missing. +func Warning(args ...interface{}) { + logging.print(warningLog, args...) +} + +// WarningDepth acts as Warning but uses depth to determine which call frame to log. +// WarningDepth(0, "msg") is the same as Warning("msg"). +func WarningDepth(depth int, args ...interface{}) { + logging.printDepth(warningLog, depth, args...) +} + +// Warningln logs to the WARNING and INFO logs. 
+// Arguments are handled in the manner of fmt.Println; a newline is appended if missing. +func Warningln(args ...interface{}) { + logging.println(warningLog, args...) +} + +// Warningf logs to the WARNING and INFO logs. +// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing. +func Warningf(format string, args ...interface{}) { + logging.printf(warningLog, format, args...) +} + +// Error logs to the ERROR, WARNING, and INFO logs. +// Arguments are handled in the manner of fmt.Print; a newline is appended if missing. +func Error(args ...interface{}) { + logging.print(errorLog, args...) +} + +// ErrorDepth acts as Error but uses depth to determine which call frame to log. +// ErrorDepth(0, "msg") is the same as Error("msg"). +func ErrorDepth(depth int, args ...interface{}) { + logging.printDepth(errorLog, depth, args...) +} + +// Errorln logs to the ERROR, WARNING, and INFO logs. +// Arguments are handled in the manner of fmt.Println; a newline is appended if missing. +func Errorln(args ...interface{}) { + logging.println(errorLog, args...) +} + +// Errorf logs to the ERROR, WARNING, and INFO logs. +// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing. +func Errorf(format string, args ...interface{}) { + logging.printf(errorLog, format, args...) +} + +// Fatal logs to the FATAL, ERROR, WARNING, and INFO logs, +// including a stack trace of all running goroutines, then calls os.Exit(255). +// Arguments are handled in the manner of fmt.Print; a newline is appended if missing. +func Fatal(args ...interface{}) { + logging.print(fatalLog, args...) +} + +// FatalDepth acts as Fatal but uses depth to determine which call frame to log. +// FatalDepth(0, "msg") is the same as Fatal("msg"). +func FatalDepth(depth int, args ...interface{}) { + logging.printDepth(fatalLog, depth, args...) +} + +// Fatalln logs to the FATAL, ERROR, WARNING, and INFO logs, +// including a stack trace of all running goroutines, then calls os.Exit(255). +// Arguments are handled in the manner of fmt.Println; a newline is appended if missing. +func Fatalln(args ...interface{}) { + logging.println(fatalLog, args...) +} + +// Fatalf logs to the FATAL, ERROR, WARNING, and INFO logs, +// including a stack trace of all running goroutines, then calls os.Exit(255). +// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing. +func Fatalf(format string, args ...interface{}) { + logging.printf(fatalLog, format, args...) +} + +// fatalNoStacks is non-zero if we are to exit without dumping goroutine stacks. +// It allows Exit and relatives to use the Fatal logs. +var fatalNoStacks uint32 + +// Exit logs to the FATAL, ERROR, WARNING, and INFO logs, then calls os.Exit(1). +// Arguments are handled in the manner of fmt.Print; a newline is appended if missing. +func Exit(args ...interface{}) { + atomic.StoreUint32(&fatalNoStacks, 1) + logging.print(fatalLog, args...) +} + +// ExitDepth acts as Exit but uses depth to determine which call frame to log. +// ExitDepth(0, "msg") is the same as Exit("msg"). +func ExitDepth(depth int, args ...interface{}) { + atomic.StoreUint32(&fatalNoStacks, 1) + logging.printDepth(fatalLog, depth, args...) +} + +// Exitln logs to the FATAL, ERROR, WARNING, and INFO logs, then calls os.Exit(1). +func Exitln(args ...interface{}) { + atomic.StoreUint32(&fatalNoStacks, 1) + logging.println(fatalLog, args...) +} + +// Exitf logs to the FATAL, ERROR, WARNING, and INFO logs, then calls os.Exit(1). 
+// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing. +func Exitf(format string, args ...interface{}) { + atomic.StoreUint32(&fatalNoStacks, 1) + logging.printf(fatalLog, format, args...) +} diff --git a/vendor/github.com/golang/glog/glog_file.go b/vendor/github.com/golang/glog/glog_file.go new file mode 100644 index 00000000..65075d28 --- /dev/null +++ b/vendor/github.com/golang/glog/glog_file.go @@ -0,0 +1,124 @@ +// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/ +// +// Copyright 2013 Google Inc. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// File I/O for logs. + +package glog + +import ( + "errors" + "flag" + "fmt" + "os" + "os/user" + "path/filepath" + "strings" + "sync" + "time" +) + +// MaxSize is the maximum size of a log file in bytes. +var MaxSize uint64 = 1024 * 1024 * 1800 + +// logDirs lists the candidate directories for new log files. +var logDirs []string + +// If non-empty, overrides the choice of directory in which to write logs. +// See createLogDirs for the full list of possible destinations. +var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory") + +func createLogDirs() { + if *logDir != "" { + logDirs = append(logDirs, *logDir) + } + logDirs = append(logDirs, os.TempDir()) +} + +var ( + pid = os.Getpid() + program = filepath.Base(os.Args[0]) + host = "unknownhost" + userName = "unknownuser" +) + +func init() { + h, err := os.Hostname() + if err == nil { + host = shortHostname(h) + } + + current, err := user.Current() + if err == nil { + userName = current.Username + } + + // Sanitize userName since it may contain filepath separators on Windows. + userName = strings.Replace(userName, `\`, "_", -1) +} + +// shortHostname returns its argument, truncating at the first period. +// For instance, given "www.google.com" it returns "www". +func shortHostname(hostname string) string { + if i := strings.Index(hostname, "."); i >= 0 { + return hostname[:i] + } + return hostname +} + +// logName returns a new log file name containing tag, with start time t, and +// the name for the symlink for tag. +func logName(tag string, t time.Time) (name, link string) { + name = fmt.Sprintf("%s.%s.%s.log.%s.%04d%02d%02d-%02d%02d%02d.%d", + program, + host, + userName, + tag, + t.Year(), + t.Month(), + t.Day(), + t.Hour(), + t.Minute(), + t.Second(), + pid) + return name, program + "." + tag +} + +var onceLogDirs sync.Once + +// create creates a new log file and returns the file and its filename, which +// contains tag ("INFO", "FATAL", etc.) and t. If the file is created +// successfully, create also attempts to update the symlink for that tag, ignoring +// errors. 
+func create(tag string, t time.Time) (f *os.File, filename string, err error) { + onceLogDirs.Do(createLogDirs) + if len(logDirs) == 0 { + return nil, "", errors.New("log: no log dirs") + } + name, link := logName(tag, t) + var lastErr error + for _, dir := range logDirs { + fname := filepath.Join(dir, name) + f, err := os.Create(fname) + if err == nil { + symlink := filepath.Join(dir, link) + os.Remove(symlink) // ignore err + os.Symlink(name, symlink) // ignore err + return f, fname, nil + } + lastErr = err + } + return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr) +} diff --git a/vendor/github.com/golang/protobuf/AUTHORS b/vendor/github.com/golang/protobuf/AUTHORS new file mode 100644 index 00000000..15167cd7 --- /dev/null +++ b/vendor/github.com/golang/protobuf/AUTHORS @@ -0,0 +1,3 @@ +# This source code refers to The Go Authors for copyright purposes. +# The master list of authors is in the main Go distribution, +# visible at http://tip.golang.org/AUTHORS. diff --git a/vendor/github.com/golang/protobuf/CONTRIBUTORS b/vendor/github.com/golang/protobuf/CONTRIBUTORS new file mode 100644 index 00000000..1c4577e9 --- /dev/null +++ b/vendor/github.com/golang/protobuf/CONTRIBUTORS @@ -0,0 +1,3 @@ +# This source code was written by the Go contributors. +# The master list of contributors is in the main Go distribution, +# visible at http://tip.golang.org/CONTRIBUTORS. diff --git a/vendor/github.com/golang/protobuf/LICENSE b/vendor/github.com/golang/protobuf/LICENSE new file mode 100644 index 00000000..1b1b1921 --- /dev/null +++ b/vendor/github.com/golang/protobuf/LICENSE @@ -0,0 +1,31 @@ +Go support for Protocol Buffers - Google's data interchange format + +Copyright 2010 The Go Authors. All rights reserved. +https://github.com/golang/protobuf + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ diff --git a/vendor/github.com/golang/protobuf/proto/clone.go b/vendor/github.com/golang/protobuf/proto/clone.go new file mode 100644 index 00000000..3cd3249f --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/clone.go @@ -0,0 +1,253 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2011 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// Protocol buffer deep copy and merge. +// TODO: RawMessage. + +package proto + +import ( + "fmt" + "log" + "reflect" + "strings" +) + +// Clone returns a deep copy of a protocol buffer. +func Clone(src Message) Message { + in := reflect.ValueOf(src) + if in.IsNil() { + return src + } + out := reflect.New(in.Type().Elem()) + dst := out.Interface().(Message) + Merge(dst, src) + return dst +} + +// Merger is the interface representing objects that can merge messages of the same type. +type Merger interface { + // Merge merges src into this message. + // Required and optional fields that are set in src will be set to that value in dst. + // Elements of repeated fields will be appended. + // + // Merge may panic if called with a different argument type than the receiver. + Merge(src Message) +} + +// generatedMerger is the custom merge method that generated protos will have. +// We must add this method since a generate Merge method will conflict with +// many existing protos that have a Merge data field already defined. +type generatedMerger interface { + XXX_Merge(src Message) +} + +// Merge merges src into dst. +// Required and optional fields that are set in src will be set to that value in dst. +// Elements of repeated fields will be appended. +// Merge panics if src and dst are not the same type, or if dst is nil. 
+func Merge(dst, src Message) { + if m, ok := dst.(Merger); ok { + m.Merge(src) + return + } + + in := reflect.ValueOf(src) + out := reflect.ValueOf(dst) + if out.IsNil() { + panic("proto: nil destination") + } + if in.Type() != out.Type() { + panic(fmt.Sprintf("proto.Merge(%T, %T) type mismatch", dst, src)) + } + if in.IsNil() { + return // Merge from nil src is a noop + } + if m, ok := dst.(generatedMerger); ok { + m.XXX_Merge(src) + return + } + mergeStruct(out.Elem(), in.Elem()) +} + +func mergeStruct(out, in reflect.Value) { + sprop := GetProperties(in.Type()) + for i := 0; i < in.NumField(); i++ { + f := in.Type().Field(i) + if strings.HasPrefix(f.Name, "XXX_") { + continue + } + mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i]) + } + + if emIn, err := extendable(in.Addr().Interface()); err == nil { + emOut, _ := extendable(out.Addr().Interface()) + mIn, muIn := emIn.extensionsRead() + if mIn != nil { + mOut := emOut.extensionsWrite() + muIn.Lock() + mergeExtension(mOut, mIn) + muIn.Unlock() + } + } + + uf := in.FieldByName("XXX_unrecognized") + if !uf.IsValid() { + return + } + uin := uf.Bytes() + if len(uin) > 0 { + out.FieldByName("XXX_unrecognized").SetBytes(append([]byte(nil), uin...)) + } +} + +// mergeAny performs a merge between two values of the same type. +// viaPtr indicates whether the values were indirected through a pointer (implying proto2). +// prop is set if this is a struct field (it may be nil). +func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) { + if in.Type() == protoMessageType { + if !in.IsNil() { + if out.IsNil() { + out.Set(reflect.ValueOf(Clone(in.Interface().(Message)))) + } else { + Merge(out.Interface().(Message), in.Interface().(Message)) + } + } + return + } + switch in.Kind() { + case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64, + reflect.String, reflect.Uint32, reflect.Uint64: + if !viaPtr && isProto3Zero(in) { + return + } + out.Set(in) + case reflect.Interface: + // Probably a oneof field; copy non-nil values. + if in.IsNil() { + return + } + // Allocate destination if it is not set, or set to a different type. + // Otherwise we will merge as normal. + if out.IsNil() || out.Elem().Type() != in.Elem().Type() { + out.Set(reflect.New(in.Elem().Elem().Type())) // interface -> *T -> T -> new(T) + } + mergeAny(out.Elem(), in.Elem(), false, nil) + case reflect.Map: + if in.Len() == 0 { + return + } + if out.IsNil() { + out.Set(reflect.MakeMap(in.Type())) + } + // For maps with value types of *T or []byte we need to deep copy each value. + elemKind := in.Type().Elem().Kind() + for _, key := range in.MapKeys() { + var val reflect.Value + switch elemKind { + case reflect.Ptr: + val = reflect.New(in.Type().Elem().Elem()) + mergeAny(val, in.MapIndex(key), false, nil) + case reflect.Slice: + val = in.MapIndex(key) + val = reflect.ValueOf(append([]byte{}, val.Bytes()...)) + default: + val = in.MapIndex(key) + } + out.SetMapIndex(key, val) + } + case reflect.Ptr: + if in.IsNil() { + return + } + if out.IsNil() { + out.Set(reflect.New(in.Elem().Type())) + } + mergeAny(out.Elem(), in.Elem(), true, nil) + case reflect.Slice: + if in.IsNil() { + return + } + if in.Type().Elem().Kind() == reflect.Uint8 { + // []byte is a scalar bytes field, not a repeated field. + + // Edge case: if this is in a proto3 message, a zero length + // bytes field is considered the zero value, and should not + // be merged. + if prop != nil && prop.proto3 && in.Len() == 0 { + return + } + + // Make a deep copy. 
+ // Append to []byte{} instead of []byte(nil) so that we never end up + // with a nil result. + out.SetBytes(append([]byte{}, in.Bytes()...)) + return + } + n := in.Len() + if out.IsNil() { + out.Set(reflect.MakeSlice(in.Type(), 0, n)) + } + switch in.Type().Elem().Kind() { + case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64, + reflect.String, reflect.Uint32, reflect.Uint64: + out.Set(reflect.AppendSlice(out, in)) + default: + for i := 0; i < n; i++ { + x := reflect.Indirect(reflect.New(in.Type().Elem())) + mergeAny(x, in.Index(i), false, nil) + out.Set(reflect.Append(out, x)) + } + } + case reflect.Struct: + mergeStruct(out, in) + default: + // unknown type, so not a protocol buffer + log.Printf("proto: don't know how to copy %v", in) + } +} + +func mergeExtension(out, in map[int32]Extension) { + for extNum, eIn := range in { + eOut := Extension{desc: eIn.desc} + if eIn.value != nil { + v := reflect.New(reflect.TypeOf(eIn.value)).Elem() + mergeAny(v, reflect.ValueOf(eIn.value), false, nil) + eOut.value = v.Interface() + } + if eIn.enc != nil { + eOut.enc = make([]byte, len(eIn.enc)) + copy(eOut.enc, eIn.enc) + } + + out[extNum] = eOut + } +} diff --git a/vendor/github.com/golang/protobuf/proto/decode.go b/vendor/github.com/golang/protobuf/proto/decode.go new file mode 100644 index 00000000..d9aa3c42 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/decode.go @@ -0,0 +1,428 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +/* + * Routines for decoding protocol buffer data to construct in-memory representations. + */ + +import ( + "errors" + "fmt" + "io" +) + +// errOverflow is returned when an integer is too large to be represented. +var errOverflow = errors.New("proto: integer overflow") + +// ErrInternalBadWireType is returned by generated code when an incorrect +// wire type is encountered. 
It does not get returned to user code. +var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof") + +// DecodeVarint reads a varint-encoded integer from the slice. +// It returns the integer and the number of bytes consumed, or +// zero if there is not enough. +// This is the format for the +// int32, int64, uint32, uint64, bool, and enum +// protocol buffer types. +func DecodeVarint(buf []byte) (x uint64, n int) { + for shift := uint(0); shift < 64; shift += 7 { + if n >= len(buf) { + return 0, 0 + } + b := uint64(buf[n]) + n++ + x |= (b & 0x7F) << shift + if (b & 0x80) == 0 { + return x, n + } + } + + // The number is too large to represent in a 64-bit value. + return 0, 0 +} + +func (p *Buffer) decodeVarintSlow() (x uint64, err error) { + i := p.index + l := len(p.buf) + + for shift := uint(0); shift < 64; shift += 7 { + if i >= l { + err = io.ErrUnexpectedEOF + return + } + b := p.buf[i] + i++ + x |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + p.index = i + return + } + } + + // The number is too large to represent in a 64-bit value. + err = errOverflow + return +} + +// DecodeVarint reads a varint-encoded integer from the Buffer. +// This is the format for the +// int32, int64, uint32, uint64, bool, and enum +// protocol buffer types. +func (p *Buffer) DecodeVarint() (x uint64, err error) { + i := p.index + buf := p.buf + + if i >= len(buf) { + return 0, io.ErrUnexpectedEOF + } else if buf[i] < 0x80 { + p.index++ + return uint64(buf[i]), nil + } else if len(buf)-i < 10 { + return p.decodeVarintSlow() + } + + var b uint64 + // we already checked the first byte + x = uint64(buf[i]) - 0x80 + i++ + + b = uint64(buf[i]) + i++ + x += b << 7 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 7 + + b = uint64(buf[i]) + i++ + x += b << 14 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 14 + + b = uint64(buf[i]) + i++ + x += b << 21 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 21 + + b = uint64(buf[i]) + i++ + x += b << 28 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 28 + + b = uint64(buf[i]) + i++ + x += b << 35 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 35 + + b = uint64(buf[i]) + i++ + x += b << 42 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 42 + + b = uint64(buf[i]) + i++ + x += b << 49 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 49 + + b = uint64(buf[i]) + i++ + x += b << 56 + if b&0x80 == 0 { + goto done + } + x -= 0x80 << 56 + + b = uint64(buf[i]) + i++ + x += b << 63 + if b&0x80 == 0 { + goto done + } + // x -= 0x80 << 63 // Always zero. + + return 0, errOverflow + +done: + p.index = i + return x, nil +} + +// DecodeFixed64 reads a 64-bit integer from the Buffer. +// This is the format for the +// fixed64, sfixed64, and double protocol buffer types. +func (p *Buffer) DecodeFixed64() (x uint64, err error) { + // x, err already 0 + i := p.index + 8 + if i < 0 || i > len(p.buf) { + err = io.ErrUnexpectedEOF + return + } + p.index = i + + x = uint64(p.buf[i-8]) + x |= uint64(p.buf[i-7]) << 8 + x |= uint64(p.buf[i-6]) << 16 + x |= uint64(p.buf[i-5]) << 24 + x |= uint64(p.buf[i-4]) << 32 + x |= uint64(p.buf[i-3]) << 40 + x |= uint64(p.buf[i-2]) << 48 + x |= uint64(p.buf[i-1]) << 56 + return +} + +// DecodeFixed32 reads a 32-bit integer from the Buffer. +// This is the format for the +// fixed32, sfixed32, and float protocol buffer types. 
+func (p *Buffer) DecodeFixed32() (x uint64, err error) { + // x, err already 0 + i := p.index + 4 + if i < 0 || i > len(p.buf) { + err = io.ErrUnexpectedEOF + return + } + p.index = i + + x = uint64(p.buf[i-4]) + x |= uint64(p.buf[i-3]) << 8 + x |= uint64(p.buf[i-2]) << 16 + x |= uint64(p.buf[i-1]) << 24 + return +} + +// DecodeZigzag64 reads a zigzag-encoded 64-bit integer +// from the Buffer. +// This is the format used for the sint64 protocol buffer type. +func (p *Buffer) DecodeZigzag64() (x uint64, err error) { + x, err = p.DecodeVarint() + if err != nil { + return + } + x = (x >> 1) ^ uint64((int64(x&1)<<63)>>63) + return +} + +// DecodeZigzag32 reads a zigzag-encoded 32-bit integer +// from the Buffer. +// This is the format used for the sint32 protocol buffer type. +func (p *Buffer) DecodeZigzag32() (x uint64, err error) { + x, err = p.DecodeVarint() + if err != nil { + return + } + x = uint64((uint32(x) >> 1) ^ uint32((int32(x&1)<<31)>>31)) + return +} + +// DecodeRawBytes reads a count-delimited byte buffer from the Buffer. +// This is the format used for the bytes protocol buffer +// type and for embedded messages. +func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) { + n, err := p.DecodeVarint() + if err != nil { + return nil, err + } + + nb := int(n) + if nb < 0 { + return nil, fmt.Errorf("proto: bad byte length %d", nb) + } + end := p.index + nb + if end < p.index || end > len(p.buf) { + return nil, io.ErrUnexpectedEOF + } + + if !alloc { + // todo: check if can get more uses of alloc=false + buf = p.buf[p.index:end] + p.index += nb + return + } + + buf = make([]byte, nb) + copy(buf, p.buf[p.index:]) + p.index += nb + return +} + +// DecodeStringBytes reads an encoded string from the Buffer. +// This is the format used for the proto2 string type. +func (p *Buffer) DecodeStringBytes() (s string, err error) { + buf, err := p.DecodeRawBytes(false) + if err != nil { + return + } + return string(buf), nil +} + +// Unmarshaler is the interface representing objects that can +// unmarshal themselves. The argument points to data that may be +// overwritten, so implementations should not keep references to the +// buffer. +// Unmarshal implementations should not clear the receiver. +// Any unmarshaled data should be merged into the receiver. +// Callers of Unmarshal that do not want to retain existing data +// should Reset the receiver before calling Unmarshal. +type Unmarshaler interface { + Unmarshal([]byte) error +} + +// newUnmarshaler is the interface representing objects that can +// unmarshal themselves. The semantics are identical to Unmarshaler. +// +// This exists to support protoc-gen-go generated messages. +// The proto package will stop type-asserting to this interface in the future. +// +// DO NOT DEPEND ON THIS. +type newUnmarshaler interface { + XXX_Unmarshal([]byte) error +} + +// Unmarshal parses the protocol buffer representation in buf and places the +// decoded result in pb. If the struct underlying pb does not match +// the data in buf, the results can be unpredictable. +// +// Unmarshal resets pb before starting to unmarshal, so any +// existing data in pb is always removed. Use UnmarshalMerge +// to preserve and append to existing data. 
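+
+// illustrativeRawMessage is a minimal, illustrative Unmarshaler (not a
+// generated type): it satisfies Message and simply keeps the raw bytes it is
+// handed. Unmarshal below Resets the receiver before delegating to it, while
+// UnmarshalMerge skips the Reset, so the append here gives merge semantics.
+type illustrativeRawMessage struct{ data []byte }
+
+func (m *illustrativeRawMessage) Reset()         { m.data = nil }
+func (m *illustrativeRawMessage) String() string { return fmt.Sprintf("raw(% x)", m.data) }
+func (m *illustrativeRawMessage) ProtoMessage()  {}
+
+// Unmarshal merges into the receiver, as the Unmarshaler contract above asks.
+func (m *illustrativeRawMessage) Unmarshal(b []byte) error {
+	m.data = append(m.data, b...)
+	return nil
+}
+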
+func Unmarshal(buf []byte, pb Message) error { + pb.Reset() + if u, ok := pb.(newUnmarshaler); ok { + return u.XXX_Unmarshal(buf) + } + if u, ok := pb.(Unmarshaler); ok { + return u.Unmarshal(buf) + } + return NewBuffer(buf).Unmarshal(pb) +} + +// UnmarshalMerge parses the protocol buffer representation in buf and +// writes the decoded result to pb. If the struct underlying pb does not match +// the data in buf, the results can be unpredictable. +// +// UnmarshalMerge merges into existing data in pb. +// Most code should use Unmarshal instead. +func UnmarshalMerge(buf []byte, pb Message) error { + if u, ok := pb.(newUnmarshaler); ok { + return u.XXX_Unmarshal(buf) + } + if u, ok := pb.(Unmarshaler); ok { + // NOTE: The history of proto have unfortunately been inconsistent + // whether Unmarshaler should or should not implicitly clear itself. + // Some implementations do, most do not. + // Thus, calling this here may or may not do what people want. + // + // See https://github.com/golang/protobuf/issues/424 + return u.Unmarshal(buf) + } + return NewBuffer(buf).Unmarshal(pb) +} + +// DecodeMessage reads a count-delimited message from the Buffer. +func (p *Buffer) DecodeMessage(pb Message) error { + enc, err := p.DecodeRawBytes(false) + if err != nil { + return err + } + return NewBuffer(enc).Unmarshal(pb) +} + +// DecodeGroup reads a tag-delimited group from the Buffer. +// StartGroup tag is already consumed. This function consumes +// EndGroup tag. +func (p *Buffer) DecodeGroup(pb Message) error { + b := p.buf[p.index:] + x, y := findEndGroup(b) + if x < 0 { + return io.ErrUnexpectedEOF + } + err := Unmarshal(b[:x], pb) + p.index += y + return err +} + +// Unmarshal parses the protocol buffer representation in the +// Buffer and places the decoded result in pb. If the struct +// underlying pb does not match the data in the buffer, the results can be +// unpredictable. +// +// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal. +func (p *Buffer) Unmarshal(pb Message) error { + // If the object can unmarshal itself, let it. + if u, ok := pb.(newUnmarshaler); ok { + err := u.XXX_Unmarshal(p.buf[p.index:]) + p.index = len(p.buf) + return err + } + if u, ok := pb.(Unmarshaler); ok { + // NOTE: The history of proto have unfortunately been inconsistent + // whether Unmarshaler should or should not implicitly clear itself. + // Some implementations do, most do not. + // Thus, calling this here may or may not do what people want. + // + // See https://github.com/golang/protobuf/issues/424 + err := u.Unmarshal(p.buf[p.index:]) + p.index = len(p.buf) + return err + } + + // Slow workaround for messages that aren't Unmarshalers. + // This includes some hand-coded .pb.go files and + // bootstrap protos. + // TODO: fix all of those and then add Unmarshal to + // the Message interface. Then: + // The cast above and code below can be deleted. + // The old unmarshaler can be deleted. + // Clients can call Unmarshal directly (can already do that, actually). + var info InternalMessageInfo + err := info.Unmarshal(pb, p.buf[p.index:]) + p.index = len(p.buf) + return err +} diff --git a/vendor/github.com/golang/protobuf/proto/discard.go b/vendor/github.com/golang/protobuf/proto/discard.go new file mode 100644 index 00000000..dea2617c --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/discard.go @@ -0,0 +1,350 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2017 The Go Authors. All rights reserved. 
+// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +import ( + "fmt" + "reflect" + "strings" + "sync" + "sync/atomic" +) + +type generatedDiscarder interface { + XXX_DiscardUnknown() +} + +// DiscardUnknown recursively discards all unknown fields from this message +// and all embedded messages. +// +// When unmarshaling a message with unrecognized fields, the tags and values +// of such fields are preserved in the Message. This allows a later call to +// marshal to be able to produce a message that continues to have those +// unrecognized fields. To avoid this, DiscardUnknown is used to +// explicitly clear the unknown fields after unmarshaling. +// +// For proto2 messages, the unknown fields of message extensions are only +// discarded from messages that have been accessed via GetExtension. +func DiscardUnknown(m Message) { + if m, ok := m.(generatedDiscarder); ok { + m.XXX_DiscardUnknown() + return + } + // TODO: Dynamically populate a InternalMessageInfo for legacy messages, + // but the master branch has no implementation for InternalMessageInfo, + // so it would be more work to replicate that approach. + discardLegacy(m) +} + +// DiscardUnknown recursively discards all unknown fields. 
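+
+// illustrativeUnknowns is an illustrative legacy message (not a generated
+// type) showing the effect of DiscardUnknown: the unknown-field bytes that
+// unmarshaling preserved in XXX_unrecognized are dropped, so a later Marshal
+// no longer re-emits them.
+type illustrativeUnknowns struct {
+	XXX_unrecognized []byte
+}
+
+func (m *illustrativeUnknowns) Reset()         { *m = illustrativeUnknowns{} }
+func (m *illustrativeUnknowns) String() string { return "illustrativeUnknowns" }
+func (m *illustrativeUnknowns) ProtoMessage()  {}
+
+func exampleDiscardUnknown() {
+	m := &illustrativeUnknowns{XXX_unrecognized: []byte{0x08, 0x01}} // unknown field 1, varint value 1
+	DiscardUnknown(m) // falls back to discardLegacy and clears m.XXX_unrecognized
+}
+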
+func (a *InternalMessageInfo) DiscardUnknown(m Message) { + di := atomicLoadDiscardInfo(&a.discard) + if di == nil { + di = getDiscardInfo(reflect.TypeOf(m).Elem()) + atomicStoreDiscardInfo(&a.discard, di) + } + di.discard(toPointer(&m)) +} + +type discardInfo struct { + typ reflect.Type + + initialized int32 // 0: only typ is valid, 1: everything is valid + lock sync.Mutex + + fields []discardFieldInfo + unrecognized field +} + +type discardFieldInfo struct { + field field // Offset of field, guaranteed to be valid + discard func(src pointer) +} + +var ( + discardInfoMap = map[reflect.Type]*discardInfo{} + discardInfoLock sync.Mutex +) + +func getDiscardInfo(t reflect.Type) *discardInfo { + discardInfoLock.Lock() + defer discardInfoLock.Unlock() + di := discardInfoMap[t] + if di == nil { + di = &discardInfo{typ: t} + discardInfoMap[t] = di + } + return di +} + +func (di *discardInfo) discard(src pointer) { + if src.isNil() { + return // Nothing to do. + } + + if atomic.LoadInt32(&di.initialized) == 0 { + di.computeDiscardInfo() + } + + for _, fi := range di.fields { + sfp := src.offset(fi.field) + fi.discard(sfp) + } + + // For proto2 messages, only discard unknown fields in message extensions + // that have been accessed via GetExtension. + if em, err := extendable(src.asPointerTo(di.typ).Interface()); err == nil { + // Ignore lock since DiscardUnknown is not concurrency safe. + emm, _ := em.extensionsRead() + for _, mx := range emm { + if m, ok := mx.value.(Message); ok { + DiscardUnknown(m) + } + } + } + + if di.unrecognized.IsValid() { + *src.offset(di.unrecognized).toBytes() = nil + } +} + +func (di *discardInfo) computeDiscardInfo() { + di.lock.Lock() + defer di.lock.Unlock() + if di.initialized != 0 { + return + } + t := di.typ + n := t.NumField() + + for i := 0; i < n; i++ { + f := t.Field(i) + if strings.HasPrefix(f.Name, "XXX_") { + continue + } + + dfi := discardFieldInfo{field: toField(&f)} + tf := f.Type + + // Unwrap tf to get its most basic type. + var isPointer, isSlice bool + if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 { + isSlice = true + tf = tf.Elem() + } + if tf.Kind() == reflect.Ptr { + isPointer = true + tf = tf.Elem() + } + if isPointer && isSlice && tf.Kind() != reflect.Struct { + panic(fmt.Sprintf("%v.%s cannot be a slice of pointers to primitive types", t, f.Name)) + } + + switch tf.Kind() { + case reflect.Struct: + switch { + case !isPointer: + panic(fmt.Sprintf("%v.%s cannot be a direct struct value", t, f.Name)) + case isSlice: // E.g., []*pb.T + di := getDiscardInfo(tf) + dfi.discard = func(src pointer) { + sps := src.getPointerSlice() + for _, sp := range sps { + if !sp.isNil() { + di.discard(sp) + } + } + } + default: // E.g., *pb.T + di := getDiscardInfo(tf) + dfi.discard = func(src pointer) { + sp := src.getPointer() + if !sp.isNil() { + di.discard(sp) + } + } + } + case reflect.Map: + switch { + case isPointer || isSlice: + panic(fmt.Sprintf("%v.%s cannot be a pointer to a map or a slice of map values", t, f.Name)) + default: // E.g., map[K]V + if tf.Elem().Kind() == reflect.Ptr { // Proto struct (e.g., *T) + dfi.discard = func(src pointer) { + sm := src.asPointerTo(tf).Elem() + if sm.Len() == 0 { + return + } + for _, key := range sm.MapKeys() { + val := sm.MapIndex(key) + DiscardUnknown(val.Interface().(Message)) + } + } + } else { + dfi.discard = func(pointer) {} // Noop + } + } + case reflect.Interface: + // Must be oneof field. 
+ switch { + case isPointer || isSlice: + panic(fmt.Sprintf("%v.%s cannot be a pointer to a interface or a slice of interface values", t, f.Name)) + default: // E.g., interface{} + // TODO: Make this faster? + dfi.discard = func(src pointer) { + su := src.asPointerTo(tf).Elem() + if !su.IsNil() { + sv := su.Elem().Elem().Field(0) + if sv.Kind() == reflect.Ptr && sv.IsNil() { + return + } + switch sv.Type().Kind() { + case reflect.Ptr: // Proto struct (e.g., *T) + DiscardUnknown(sv.Interface().(Message)) + } + } + } + } + default: + continue + } + di.fields = append(di.fields, dfi) + } + + di.unrecognized = invalidField + if f, ok := t.FieldByName("XXX_unrecognized"); ok { + if f.Type != reflect.TypeOf([]byte{}) { + panic("expected XXX_unrecognized to be of type []byte") + } + di.unrecognized = toField(&f) + } + + atomic.StoreInt32(&di.initialized, 1) +} + +func discardLegacy(m Message) { + v := reflect.ValueOf(m) + if v.Kind() != reflect.Ptr || v.IsNil() { + return + } + v = v.Elem() + if v.Kind() != reflect.Struct { + return + } + t := v.Type() + + for i := 0; i < v.NumField(); i++ { + f := t.Field(i) + if strings.HasPrefix(f.Name, "XXX_") { + continue + } + vf := v.Field(i) + tf := f.Type + + // Unwrap tf to get its most basic type. + var isPointer, isSlice bool + if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 { + isSlice = true + tf = tf.Elem() + } + if tf.Kind() == reflect.Ptr { + isPointer = true + tf = tf.Elem() + } + if isPointer && isSlice && tf.Kind() != reflect.Struct { + panic(fmt.Sprintf("%T.%s cannot be a slice of pointers to primitive types", m, f.Name)) + } + + switch tf.Kind() { + case reflect.Struct: + switch { + case !isPointer: + panic(fmt.Sprintf("%T.%s cannot be a direct struct value", m, f.Name)) + case isSlice: // E.g., []*pb.T + for j := 0; j < vf.Len(); j++ { + discardLegacy(vf.Index(j).Interface().(Message)) + } + default: // E.g., *pb.T + discardLegacy(vf.Interface().(Message)) + } + case reflect.Map: + switch { + case isPointer || isSlice: + panic(fmt.Sprintf("%T.%s cannot be a pointer to a map or a slice of map values", m, f.Name)) + default: // E.g., map[K]V + tv := vf.Type().Elem() + if tv.Kind() == reflect.Ptr && tv.Implements(protoMessageType) { // Proto struct (e.g., *T) + for _, key := range vf.MapKeys() { + val := vf.MapIndex(key) + discardLegacy(val.Interface().(Message)) + } + } + } + case reflect.Interface: + // Must be oneof field. + switch { + case isPointer || isSlice: + panic(fmt.Sprintf("%T.%s cannot be a pointer to a interface or a slice of interface values", m, f.Name)) + default: // E.g., test_proto.isCommunique_Union interface + if !vf.IsNil() && f.Tag.Get("protobuf_oneof") != "" { + vf = vf.Elem() // E.g., *test_proto.Communique_Msg + if !vf.IsNil() { + vf = vf.Elem() // E.g., test_proto.Communique_Msg + vf = vf.Field(0) // E.g., Proto struct (e.g., *T) or primitive value + if vf.Kind() == reflect.Ptr { + discardLegacy(vf.Interface().(Message)) + } + } + } + } + } + } + + if vf := v.FieldByName("XXX_unrecognized"); vf.IsValid() { + if vf.Type() != reflect.TypeOf([]byte{}) { + panic("expected XXX_unrecognized to be of type []byte") + } + vf.Set(reflect.ValueOf([]byte(nil))) + } + + // For proto2 messages, only discard unknown fields in message extensions + // that have been accessed via GetExtension. + if em, err := extendable(m); err == nil { + // Ignore lock since discardLegacy is not concurrency safe. 
+ emm, _ := em.extensionsRead() + for _, mx := range emm { + if m, ok := mx.value.(Message); ok { + discardLegacy(m) + } + } + } +} diff --git a/vendor/github.com/golang/protobuf/proto/encode.go b/vendor/github.com/golang/protobuf/proto/encode.go new file mode 100644 index 00000000..c27d35f8 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/encode.go @@ -0,0 +1,221 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +/* + * Routines for encoding data into the wire format for protocol buffers. + */ + +import ( + "errors" + "fmt" + "reflect" +) + +// RequiredNotSetError is the error returned if Marshal is called with +// a protocol buffer struct whose required fields have not +// all been initialized. It is also the error returned if Unmarshal is +// called with an encoded protocol buffer that does not include all the +// required fields. +// +// When printed, RequiredNotSetError reports the first unset required field in a +// message. If the field cannot be precisely determined, it is reported as +// "{Unknown}". +type RequiredNotSetError struct { + field string +} + +func (e *RequiredNotSetError) Error() string { + return fmt.Sprintf("proto: required field %q not set", e.field) +} + +var ( + // errRepeatedHasNil is the error returned if Marshal is called with + // a struct with a repeated field containing a nil element. + errRepeatedHasNil = errors.New("proto: repeated field has nil element") + + // errOneofHasNil is the error returned if Marshal is called with + // a struct with a oneof field containing a nil element. + errOneofHasNil = errors.New("proto: oneof field has nil value") + + // ErrNil is the error returned if Marshal is called with nil. + ErrNil = errors.New("proto: Marshal called with nil") + + // ErrTooLarge is the error returned if Marshal is called with a + // message that encodes to >2GB. 
+ ErrTooLarge = errors.New("proto: message encodes to over 2 GB") +) + +// The fundamental encoders that put bytes on the wire. +// Those that take integer types all accept uint64 and are +// therefore of type valueEncoder. + +const maxVarintBytes = 10 // maximum length of a varint + +// EncodeVarint returns the varint encoding of x. +// This is the format for the +// int32, int64, uint32, uint64, bool, and enum +// protocol buffer types. +// Not used by the package itself, but helpful to clients +// wishing to use the same encoding. +func EncodeVarint(x uint64) []byte { + var buf [maxVarintBytes]byte + var n int + for n = 0; x > 127; n++ { + buf[n] = 0x80 | uint8(x&0x7F) + x >>= 7 + } + buf[n] = uint8(x) + n++ + return buf[0:n] +} + +// EncodeVarint writes a varint-encoded integer to the Buffer. +// This is the format for the +// int32, int64, uint32, uint64, bool, and enum +// protocol buffer types. +func (p *Buffer) EncodeVarint(x uint64) error { + for x >= 1<<7 { + p.buf = append(p.buf, uint8(x&0x7f|0x80)) + x >>= 7 + } + p.buf = append(p.buf, uint8(x)) + return nil +} + +// SizeVarint returns the varint encoding size of an integer. +func SizeVarint(x uint64) int { + switch { + case x < 1<<7: + return 1 + case x < 1<<14: + return 2 + case x < 1<<21: + return 3 + case x < 1<<28: + return 4 + case x < 1<<35: + return 5 + case x < 1<<42: + return 6 + case x < 1<<49: + return 7 + case x < 1<<56: + return 8 + case x < 1<<63: + return 9 + } + return 10 +} + +// EncodeFixed64 writes a 64-bit integer to the Buffer. +// This is the format for the +// fixed64, sfixed64, and double protocol buffer types. +func (p *Buffer) EncodeFixed64(x uint64) error { + p.buf = append(p.buf, + uint8(x), + uint8(x>>8), + uint8(x>>16), + uint8(x>>24), + uint8(x>>32), + uint8(x>>40), + uint8(x>>48), + uint8(x>>56)) + return nil +} + +// EncodeFixed32 writes a 32-bit integer to the Buffer. +// This is the format for the +// fixed32, sfixed32, and float protocol buffer types. +func (p *Buffer) EncodeFixed32(x uint64) error { + p.buf = append(p.buf, + uint8(x), + uint8(x>>8), + uint8(x>>16), + uint8(x>>24)) + return nil +} + +// EncodeZigzag64 writes a zigzag-encoded 64-bit integer +// to the Buffer. +// This is the format used for the sint64 protocol buffer type. +func (p *Buffer) EncodeZigzag64(x uint64) error { + // use signed number to get arithmetic right shift. + return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} + +// EncodeZigzag32 writes a zigzag-encoded 32-bit integer +// to the Buffer. +// This is the format used for the sint32 protocol buffer type. +func (p *Buffer) EncodeZigzag32(x uint64) error { + // use signed number to get arithmetic right shift. + return p.EncodeVarint(uint64((uint32(x) << 1) ^ uint32((int32(x) >> 31)))) +} + +// EncodeRawBytes writes a count-delimited byte buffer to the Buffer. +// This is the format used for the bytes protocol buffer +// type and for embedded messages. +func (p *Buffer) EncodeRawBytes(b []byte) error { + p.EncodeVarint(uint64(len(b))) + p.buf = append(p.buf, b...) + return nil +} + +// EncodeStringBytes writes an encoded string to the Buffer. +// This is the format used for the proto2 string type. +func (p *Buffer) EncodeStringBytes(s string) error { + p.EncodeVarint(uint64(len(s))) + p.buf = append(p.buf, s...) + return nil +} + +// Marshaler is the interface representing objects that can marshal themselves. 
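+
+// exampleWireHelpers is an illustrative sketch (not used by the package
+// itself) of the low-level encoders above: varints, zigzag for signed values,
+// and length-delimited bytes, all appended to a Buffer.
+func exampleWireHelpers() {
+	var b Buffer
+	b.EncodeVarint(300) // appends 0xac 0x02; SizeVarint(300) == 2
+	v := int64(-1)
+	b.EncodeZigzag64(uint64(v))    // sint64 -1 zigzags to 1, encoded as 0x01
+	b.EncodeRawBytes([]byte("hi")) // appends the length 0x02, then 'h' 'i'
+	fmt.Printf("% x\n", b.Bytes()) // ac 02 01 02 68 69
+}
+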
+type Marshaler interface { + Marshal() ([]byte, error) +} + +// EncodeMessage writes the protocol buffer to the Buffer, +// prefixed by a varint-encoded length. +func (p *Buffer) EncodeMessage(pb Message) error { + siz := Size(pb) + p.EncodeVarint(uint64(siz)) + return p.Marshal(pb) +} + +// All protocol buffer fields are nillable, but be careful. +func isNil(v reflect.Value) bool { + switch v.Kind() { + case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: + return v.IsNil() + } + return false +} diff --git a/vendor/github.com/golang/protobuf/proto/equal.go b/vendor/github.com/golang/protobuf/proto/equal.go new file mode 100644 index 00000000..d4db5a1c --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/equal.go @@ -0,0 +1,300 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2011 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// Protocol buffer comparison. + +package proto + +import ( + "bytes" + "log" + "reflect" + "strings" +) + +/* +Equal returns true iff protocol buffers a and b are equal. +The arguments must both be pointers to protocol buffer structs. + +Equality is defined in this way: + - Two messages are equal iff they are the same type, + corresponding fields are equal, unknown field sets + are equal, and extensions sets are equal. + - Two set scalar fields are equal iff their values are equal. + If the fields are of a floating-point type, remember that + NaN != x for all x, including NaN. If the message is defined + in a proto3 .proto file, fields are not "set"; specifically, + zero length proto3 "bytes" fields are equal (nil == {}). + - Two repeated fields are equal iff their lengths are the same, + and their corresponding elements are equal. Note a "bytes" field, + although represented by []byte, is not a repeated field and the + rule for the scalar fields described above applies. + - Two unset fields are equal. + - Two unknown field sets are equal if their current + encoded state is equal. 
+ - Two extension sets are equal iff they have corresponding + elements that are pairwise equal. + - Two map fields are equal iff their lengths are the same, + and they contain the same set of elements. Zero-length map + fields are equal. + - Every other combination of things are not equal. + +The return value is undefined if a and b are not protocol buffers. +*/ +func Equal(a, b Message) bool { + if a == nil || b == nil { + return a == b + } + v1, v2 := reflect.ValueOf(a), reflect.ValueOf(b) + if v1.Type() != v2.Type() { + return false + } + if v1.Kind() == reflect.Ptr { + if v1.IsNil() { + return v2.IsNil() + } + if v2.IsNil() { + return false + } + v1, v2 = v1.Elem(), v2.Elem() + } + if v1.Kind() != reflect.Struct { + return false + } + return equalStruct(v1, v2) +} + +// v1 and v2 are known to have the same type. +func equalStruct(v1, v2 reflect.Value) bool { + sprop := GetProperties(v1.Type()) + for i := 0; i < v1.NumField(); i++ { + f := v1.Type().Field(i) + if strings.HasPrefix(f.Name, "XXX_") { + continue + } + f1, f2 := v1.Field(i), v2.Field(i) + if f.Type.Kind() == reflect.Ptr { + if n1, n2 := f1.IsNil(), f2.IsNil(); n1 && n2 { + // both unset + continue + } else if n1 != n2 { + // set/unset mismatch + return false + } + f1, f2 = f1.Elem(), f2.Elem() + } + if !equalAny(f1, f2, sprop.Prop[i]) { + return false + } + } + + if em1 := v1.FieldByName("XXX_InternalExtensions"); em1.IsValid() { + em2 := v2.FieldByName("XXX_InternalExtensions") + if !equalExtensions(v1.Type(), em1.Interface().(XXX_InternalExtensions), em2.Interface().(XXX_InternalExtensions)) { + return false + } + } + + if em1 := v1.FieldByName("XXX_extensions"); em1.IsValid() { + em2 := v2.FieldByName("XXX_extensions") + if !equalExtMap(v1.Type(), em1.Interface().(map[int32]Extension), em2.Interface().(map[int32]Extension)) { + return false + } + } + + uf := v1.FieldByName("XXX_unrecognized") + if !uf.IsValid() { + return true + } + + u1 := uf.Bytes() + u2 := v2.FieldByName("XXX_unrecognized").Bytes() + return bytes.Equal(u1, u2) +} + +// v1 and v2 are known to have the same type. +// prop may be nil. +func equalAny(v1, v2 reflect.Value, prop *Properties) bool { + if v1.Type() == protoMessageType { + m1, _ := v1.Interface().(Message) + m2, _ := v2.Interface().(Message) + return Equal(m1, m2) + } + switch v1.Kind() { + case reflect.Bool: + return v1.Bool() == v2.Bool() + case reflect.Float32, reflect.Float64: + return v1.Float() == v2.Float() + case reflect.Int32, reflect.Int64: + return v1.Int() == v2.Int() + case reflect.Interface: + // Probably a oneof field; compare the inner values. + n1, n2 := v1.IsNil(), v2.IsNil() + if n1 || n2 { + return n1 == n2 + } + e1, e2 := v1.Elem(), v2.Elem() + if e1.Type() != e2.Type() { + return false + } + return equalAny(e1, e2, nil) + case reflect.Map: + if v1.Len() != v2.Len() { + return false + } + for _, key := range v1.MapKeys() { + val2 := v2.MapIndex(key) + if !val2.IsValid() { + // This key was not found in the second map. + return false + } + if !equalAny(v1.MapIndex(key), val2, nil) { + return false + } + } + return true + case reflect.Ptr: + // Maps may have nil values in them, so check for nil. + if v1.IsNil() && v2.IsNil() { + return true + } + if v1.IsNil() != v2.IsNil() { + return false + } + return equalAny(v1.Elem(), v2.Elem(), prop) + case reflect.Slice: + if v1.Type().Elem().Kind() == reflect.Uint8 { + // short circuit: []byte + + // Edge case: if this is in a proto3 message, a zero length + // bytes field is considered the zero value. 
+ if prop != nil && prop.proto3 && v1.Len() == 0 && v2.Len() == 0 { + return true + } + if v1.IsNil() != v2.IsNil() { + return false + } + return bytes.Equal(v1.Interface().([]byte), v2.Interface().([]byte)) + } + + if v1.Len() != v2.Len() { + return false + } + for i := 0; i < v1.Len(); i++ { + if !equalAny(v1.Index(i), v2.Index(i), prop) { + return false + } + } + return true + case reflect.String: + return v1.Interface().(string) == v2.Interface().(string) + case reflect.Struct: + return equalStruct(v1, v2) + case reflect.Uint32, reflect.Uint64: + return v1.Uint() == v2.Uint() + } + + // unknown type, so not a protocol buffer + log.Printf("proto: don't know how to compare %v", v1) + return false +} + +// base is the struct type that the extensions are based on. +// x1 and x2 are InternalExtensions. +func equalExtensions(base reflect.Type, x1, x2 XXX_InternalExtensions) bool { + em1, _ := x1.extensionsRead() + em2, _ := x2.extensionsRead() + return equalExtMap(base, em1, em2) +} + +func equalExtMap(base reflect.Type, em1, em2 map[int32]Extension) bool { + if len(em1) != len(em2) { + return false + } + + for extNum, e1 := range em1 { + e2, ok := em2[extNum] + if !ok { + return false + } + + m1, m2 := e1.value, e2.value + + if m1 == nil && m2 == nil { + // Both have only encoded form. + if bytes.Equal(e1.enc, e2.enc) { + continue + } + // The bytes are different, but the extensions might still be + // equal. We need to decode them to compare. + } + + if m1 != nil && m2 != nil { + // Both are unencoded. + if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) { + return false + } + continue + } + + // At least one is encoded. To do a semantically correct comparison + // we need to unmarshal them first. + var desc *ExtensionDesc + if m := extensionMaps[base]; m != nil { + desc = m[extNum] + } + if desc == nil { + // If both have only encoded form and the bytes are the same, + // it is handled above. We get here when the bytes are different. + // We don't know how to decode it, so just compare them as byte + // slices. + log.Printf("proto: don't know how to compare extension %d of %v", extNum, base) + return false + } + var err error + if m1 == nil { + m1, err = decodeExtension(e1.enc, desc) + } + if m2 == nil && err == nil { + m2, err = decodeExtension(e2.enc, desc) + } + if err != nil { + // The encoded form is invalid. + log.Printf("proto: badly encoded extension %d of %v: %v", extNum, base, err) + return false + } + if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) { + return false + } + } + + return true +} diff --git a/vendor/github.com/golang/protobuf/proto/extensions.go b/vendor/github.com/golang/protobuf/proto/extensions.go new file mode 100644 index 00000000..816a3b9d --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/extensions.go @@ -0,0 +1,543 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. 
+// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +/* + * Types and routines for supporting protocol buffer extensions. + */ + +import ( + "errors" + "fmt" + "io" + "reflect" + "strconv" + "sync" +) + +// ErrMissingExtension is the error returned by GetExtension if the named extension is not in the message. +var ErrMissingExtension = errors.New("proto: missing extension") + +// ExtensionRange represents a range of message extensions for a protocol buffer. +// Used in code generated by the protocol compiler. +type ExtensionRange struct { + Start, End int32 // both inclusive +} + +// extendableProto is an interface implemented by any protocol buffer generated by the current +// proto compiler that may be extended. +type extendableProto interface { + Message + ExtensionRangeArray() []ExtensionRange + extensionsWrite() map[int32]Extension + extensionsRead() (map[int32]Extension, sync.Locker) +} + +// extendableProtoV1 is an interface implemented by a protocol buffer generated by the previous +// version of the proto compiler that may be extended. +type extendableProtoV1 interface { + Message + ExtensionRangeArray() []ExtensionRange + ExtensionMap() map[int32]Extension +} + +// extensionAdapter is a wrapper around extendableProtoV1 that implements extendableProto. +type extensionAdapter struct { + extendableProtoV1 +} + +func (e extensionAdapter) extensionsWrite() map[int32]Extension { + return e.ExtensionMap() +} + +func (e extensionAdapter) extensionsRead() (map[int32]Extension, sync.Locker) { + return e.ExtensionMap(), notLocker{} +} + +// notLocker is a sync.Locker whose Lock and Unlock methods are nops. +type notLocker struct{} + +func (n notLocker) Lock() {} +func (n notLocker) Unlock() {} + +// extendable returns the extendableProto interface for the given generated proto message. +// If the proto message has the old extension format, it returns a wrapper that implements +// the extendableProto interface. +func extendable(p interface{}) (extendableProto, error) { + switch p := p.(type) { + case extendableProto: + if isNilPtr(p) { + return nil, fmt.Errorf("proto: nil %T is not extendable", p) + } + return p, nil + case extendableProtoV1: + if isNilPtr(p) { + return nil, fmt.Errorf("proto: nil %T is not extendable", p) + } + return extensionAdapter{p}, nil + } + // Don't allocate a specific error containing %T: + // this is the hot path for Clone and MarshalText. 
+ return nil, errNotExtendable +} + +var errNotExtendable = errors.New("proto: not an extendable proto.Message") + +func isNilPtr(x interface{}) bool { + v := reflect.ValueOf(x) + return v.Kind() == reflect.Ptr && v.IsNil() +} + +// XXX_InternalExtensions is an internal representation of proto extensions. +// +// Each generated message struct type embeds an anonymous XXX_InternalExtensions field, +// thus gaining the unexported 'extensions' method, which can be called only from the proto package. +// +// The methods of XXX_InternalExtensions are not concurrency safe in general, +// but calls to logically read-only methods such as has and get may be executed concurrently. +type XXX_InternalExtensions struct { + // The struct must be indirect so that if a user inadvertently copies a + // generated message and its embedded XXX_InternalExtensions, they + // avoid the mayhem of a copied mutex. + // + // The mutex serializes all logically read-only operations to p.extensionMap. + // It is up to the client to ensure that write operations to p.extensionMap are + // mutually exclusive with other accesses. + p *struct { + mu sync.Mutex + extensionMap map[int32]Extension + } +} + +// extensionsWrite returns the extension map, creating it on first use. +func (e *XXX_InternalExtensions) extensionsWrite() map[int32]Extension { + if e.p == nil { + e.p = new(struct { + mu sync.Mutex + extensionMap map[int32]Extension + }) + e.p.extensionMap = make(map[int32]Extension) + } + return e.p.extensionMap +} + +// extensionsRead returns the extensions map for read-only use. It may be nil. +// The caller must hold the returned mutex's lock when accessing Elements within the map. +func (e *XXX_InternalExtensions) extensionsRead() (map[int32]Extension, sync.Locker) { + if e.p == nil { + return nil, nil + } + return e.p.extensionMap, &e.p.mu +} + +// ExtensionDesc represents an extension specification. +// Used in generated code from the protocol compiler. +type ExtensionDesc struct { + ExtendedType Message // nil pointer to the type that is being extended + ExtensionType interface{} // nil pointer to the extension type + Field int32 // field number + Name string // fully-qualified name of extension, for text formatting + Tag string // protobuf tag style + Filename string // name of the file in which the extension is defined +} + +func (ed *ExtensionDesc) repeated() bool { + t := reflect.TypeOf(ed.ExtensionType) + return t.Kind() == reflect.Slice && t.Elem().Kind() != reflect.Uint8 +} + +// Extension represents an extension in a message. +type Extension struct { + // When an extension is stored in a message using SetExtension + // only desc and value are set. When the message is marshaled + // enc will be set to the encoded form of the message. + // + // When a message is unmarshaled and contains extensions, each + // extension will have only enc set. When such an extension is + // accessed using GetExtension (or GetExtensions) desc and value + // will be set. + desc *ExtensionDesc + value interface{} + enc []byte +} + +// SetRawExtension is for testing only. +func SetRawExtension(base Message, id int32, b []byte) { + epb, err := extendable(base) + if err != nil { + return + } + extmap := epb.extensionsWrite() + extmap[id] = Extension{enc: b} +} + +// isExtensionField returns true iff the given field number is in an extension range. 
+func isExtensionField(pb extendableProto, field int32) bool { + for _, er := range pb.ExtensionRangeArray() { + if er.Start <= field && field <= er.End { + return true + } + } + return false +} + +// checkExtensionTypes checks that the given extension is valid for pb. +func checkExtensionTypes(pb extendableProto, extension *ExtensionDesc) error { + var pbi interface{} = pb + // Check the extended type. + if ea, ok := pbi.(extensionAdapter); ok { + pbi = ea.extendableProtoV1 + } + if a, b := reflect.TypeOf(pbi), reflect.TypeOf(extension.ExtendedType); a != b { + return fmt.Errorf("proto: bad extended type; %v does not extend %v", b, a) + } + // Check the range. + if !isExtensionField(pb, extension.Field) { + return errors.New("proto: bad extension number; not in declared ranges") + } + return nil +} + +// extPropKey is sufficient to uniquely identify an extension. +type extPropKey struct { + base reflect.Type + field int32 +} + +var extProp = struct { + sync.RWMutex + m map[extPropKey]*Properties +}{ + m: make(map[extPropKey]*Properties), +} + +func extensionProperties(ed *ExtensionDesc) *Properties { + key := extPropKey{base: reflect.TypeOf(ed.ExtendedType), field: ed.Field} + + extProp.RLock() + if prop, ok := extProp.m[key]; ok { + extProp.RUnlock() + return prop + } + extProp.RUnlock() + + extProp.Lock() + defer extProp.Unlock() + // Check again. + if prop, ok := extProp.m[key]; ok { + return prop + } + + prop := new(Properties) + prop.Init(reflect.TypeOf(ed.ExtensionType), "unknown_name", ed.Tag, nil) + extProp.m[key] = prop + return prop +} + +// HasExtension returns whether the given extension is present in pb. +func HasExtension(pb Message, extension *ExtensionDesc) bool { + // TODO: Check types, field numbers, etc.? + epb, err := extendable(pb) + if err != nil { + return false + } + extmap, mu := epb.extensionsRead() + if extmap == nil { + return false + } + mu.Lock() + _, ok := extmap[extension.Field] + mu.Unlock() + return ok +} + +// ClearExtension removes the given extension from pb. +func ClearExtension(pb Message, extension *ExtensionDesc) { + epb, err := extendable(pb) + if err != nil { + return + } + // TODO: Check types, field numbers, etc.? + extmap := epb.extensionsWrite() + delete(extmap, extension.Field) +} + +// GetExtension retrieves a proto2 extended field from pb. +// +// If the descriptor is type complete (i.e., ExtensionDesc.ExtensionType is non-nil), +// then GetExtension parses the encoded field and returns a Go value of the specified type. +// If the field is not present, then the default value is returned (if one is specified), +// otherwise ErrMissingExtension is reported. +// +// If the descriptor is not type complete (i.e., ExtensionDesc.ExtensionType is nil), +// then GetExtension returns the raw encoded bytes of the field extension. +func GetExtension(pb Message, extension *ExtensionDesc) (interface{}, error) { + epb, err := extendable(pb) + if err != nil { + return nil, err + } + + if extension.ExtendedType != nil { + // can only check type if this is a complete descriptor + if err := checkExtensionTypes(epb, extension); err != nil { + return nil, err + } + } + + emap, mu := epb.extensionsRead() + if emap == nil { + return defaultExtensionValue(extension) + } + mu.Lock() + defer mu.Unlock() + e, ok := emap[extension.Field] + if !ok { + // defaultExtensionValue returns the default value or + // ErrMissingExtension if there is no default. + return defaultExtensionValue(extension) + } + + if e.value != nil { + // Already decoded. 
Check the descriptor, though. + if e.desc != extension { + // This shouldn't happen. If it does, it means that + // GetExtension was called twice with two different + // descriptors with the same field number. + return nil, errors.New("proto: descriptor conflict") + } + return e.value, nil + } + + if extension.ExtensionType == nil { + // incomplete descriptor + return e.enc, nil + } + + v, err := decodeExtension(e.enc, extension) + if err != nil { + return nil, err + } + + // Remember the decoded version and drop the encoded version. + // That way it is safe to mutate what we return. + e.value = v + e.desc = extension + e.enc = nil + emap[extension.Field] = e + return e.value, nil +} + +// defaultExtensionValue returns the default value for extension. +// If no default for an extension is defined ErrMissingExtension is returned. +func defaultExtensionValue(extension *ExtensionDesc) (interface{}, error) { + if extension.ExtensionType == nil { + // incomplete descriptor, so no default + return nil, ErrMissingExtension + } + + t := reflect.TypeOf(extension.ExtensionType) + props := extensionProperties(extension) + + sf, _, err := fieldDefault(t, props) + if err != nil { + return nil, err + } + + if sf == nil || sf.value == nil { + // There is no default value. + return nil, ErrMissingExtension + } + + if t.Kind() != reflect.Ptr { + // We do not need to return a Ptr, we can directly return sf.value. + return sf.value, nil + } + + // We need to return an interface{} that is a pointer to sf.value. + value := reflect.New(t).Elem() + value.Set(reflect.New(value.Type().Elem())) + if sf.kind == reflect.Int32 { + // We may have an int32 or an enum, but the underlying data is int32. + // Since we can't set an int32 into a non int32 reflect.value directly + // set it as a int32. + value.Elem().SetInt(int64(sf.value.(int32))) + } else { + value.Elem().Set(reflect.ValueOf(sf.value)) + } + return value.Interface(), nil +} + +// decodeExtension decodes an extension encoded in b. +func decodeExtension(b []byte, extension *ExtensionDesc) (interface{}, error) { + t := reflect.TypeOf(extension.ExtensionType) + unmarshal := typeUnmarshaler(t, extension.Tag) + + // t is a pointer to a struct, pointer to basic type or a slice. + // Allocate space to store the pointer/slice. + value := reflect.New(t).Elem() + + var err error + for { + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + wire := int(x) & 7 + + b, err = unmarshal(b, valToPointer(value.Addr()), wire) + if err != nil { + return nil, err + } + + if len(b) == 0 { + break + } + } + return value.Interface(), nil +} + +// GetExtensions returns a slice of the extensions present in pb that are also listed in es. +// The returned slice has the same length as es; missing extensions will appear as nil elements. +func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, err error) { + epb, err := extendable(pb) + if err != nil { + return nil, err + } + extensions = make([]interface{}, len(es)) + for i, e := range es { + extensions[i], err = GetExtension(epb, e) + if err == ErrMissingExtension { + err = nil + } + if err != nil { + return + } + } + return +} + +// ExtensionDescs returns a new slice containing pb's extension descriptors, in undefined order. +// For non-registered extensions, ExtensionDescs returns an incomplete descriptor containing +// just the Field field, which defines the extension's field number. 
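+
+// Illustrative usage sketch (hypothetical generated names): given an
+// extendable message msg and a generated descriptor pb.E_Width whose
+// ExtensionType is *int32, the extension accessors in this file combine as
+//
+//	_ = proto.SetExtension(msg, pb.E_Width, proto.Int32(42))
+//	if proto.HasExtension(msg, pb.E_Width) {
+//		v, _ := proto.GetExtension(msg, pb.E_Width) // interface{} holding *int32
+//		width := *v.(*int32)
+//		_ = width
+//	}
+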
+func ExtensionDescs(pb Message) ([]*ExtensionDesc, error) { + epb, err := extendable(pb) + if err != nil { + return nil, err + } + registeredExtensions := RegisteredExtensions(pb) + + emap, mu := epb.extensionsRead() + if emap == nil { + return nil, nil + } + mu.Lock() + defer mu.Unlock() + extensions := make([]*ExtensionDesc, 0, len(emap)) + for extid, e := range emap { + desc := e.desc + if desc == nil { + desc = registeredExtensions[extid] + if desc == nil { + desc = &ExtensionDesc{Field: extid} + } + } + + extensions = append(extensions, desc) + } + return extensions, nil +} + +// SetExtension sets the specified extension of pb to the specified value. +func SetExtension(pb Message, extension *ExtensionDesc, value interface{}) error { + epb, err := extendable(pb) + if err != nil { + return err + } + if err := checkExtensionTypes(epb, extension); err != nil { + return err + } + typ := reflect.TypeOf(extension.ExtensionType) + if typ != reflect.TypeOf(value) { + return errors.New("proto: bad extension value type") + } + // nil extension values need to be caught early, because the + // encoder can't distinguish an ErrNil due to a nil extension + // from an ErrNil due to a missing field. Extensions are + // always optional, so the encoder would just swallow the error + // and drop all the extensions from the encoded message. + if reflect.ValueOf(value).IsNil() { + return fmt.Errorf("proto: SetExtension called with nil value of type %T", value) + } + + extmap := epb.extensionsWrite() + extmap[extension.Field] = Extension{desc: extension, value: value} + return nil +} + +// ClearAllExtensions clears all extensions from pb. +func ClearAllExtensions(pb Message) { + epb, err := extendable(pb) + if err != nil { + return + } + m := epb.extensionsWrite() + for k := range m { + delete(m, k) + } +} + +// A global registry of extensions. +// The generated code will register the generated descriptors by calling RegisterExtension. + +var extensionMaps = make(map[reflect.Type]map[int32]*ExtensionDesc) + +// RegisterExtension is called from the generated code. +func RegisterExtension(desc *ExtensionDesc) { + st := reflect.TypeOf(desc.ExtendedType).Elem() + m := extensionMaps[st] + if m == nil { + m = make(map[int32]*ExtensionDesc) + extensionMaps[st] = m + } + if _, ok := m[desc.Field]; ok { + panic("proto: duplicate extension registered: " + st.String() + " " + strconv.Itoa(int(desc.Field))) + } + m[desc.Field] = desc +} + +// RegisteredExtensions returns a map of the registered extensions of a +// protocol buffer struct, indexed by the extension number. +// The argument pb should be a nil pointer to the struct type. +func RegisteredExtensions(pb Message) map[int32]*ExtensionDesc { + return extensionMaps[reflect.TypeOf(pb).Elem()] +} diff --git a/vendor/github.com/golang/protobuf/proto/lib.go b/vendor/github.com/golang/protobuf/proto/lib.go new file mode 100644 index 00000000..0e2191b8 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/lib.go @@ -0,0 +1,921 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. 
+// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +/* +Package proto converts data structures to and from the wire format of +protocol buffers. It works in concert with the Go source code generated +for .proto files by the protocol compiler. + +A summary of the properties of the protocol buffer interface +for a protocol buffer variable v: + + - Names are turned from camel_case to CamelCase for export. + - There are no methods on v to set fields; just treat + them as structure fields. + - There are getters that return a field's value if set, + and return the field's default value if unset. + The getters work even if the receiver is a nil message. + - The zero value for a struct is its correct initialization state. + All desired fields must be set before marshaling. + - A Reset() method will restore a protobuf struct to its zero state. + - Non-repeated fields are pointers to the values; nil means unset. + That is, optional or required field int32 f becomes F *int32. + - Repeated fields are slices. + - Helper functions are available to aid the setting of fields. + msg.Foo = proto.String("hello") // set field + - Constants are defined to hold the default values of all fields that + have them. They have the form Default_StructName_FieldName. + Because the getter methods handle defaulted values, + direct use of these constants should be rare. + - Enums are given type names and maps from names to values. + Enum values are prefixed by the enclosing message's name, or by the + enum's type name if it is a top-level enum. Enum types have a String + method, and a Enum method to assist in message construction. + - Nested messages, groups and enums have type names prefixed with the name of + the surrounding message type. + - Extensions are given descriptor names that start with E_, + followed by an underscore-delimited list of the nested messages + that contain it (if any) followed by the CamelCased name of the + extension field itself. HasExtension, ClearExtension, GetExtension + and SetExtension are functions for manipulating extensions. + - Oneof field sets are given a single field in their message, + with distinguished wrapper types for each possible field value. + - Marshal and Unmarshal are functions to encode and decode the wire format. 
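+
+For instance (an illustrative note, using the Test message generated in the
+example further below), the getters tolerate a nil receiver:
+
+	var t *Test
+	label := t.GetLabel() // returns "", does not panic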
+ +When the .proto file specifies `syntax="proto3"`, there are some differences: + + - Non-repeated fields of non-message type are values instead of pointers. + - Enum types do not get an Enum method. + +The simplest way to describe this is to see an example. +Given file test.proto, containing + + package example; + + enum FOO { X = 17; } + + message Test { + required string label = 1; + optional int32 type = 2 [default=77]; + repeated int64 reps = 3; + optional group OptionalGroup = 4 { + required string RequiredField = 5; + } + oneof union { + int32 number = 6; + string name = 7; + } + } + +The resulting file, test.pb.go, is: + + package example + + import proto "github.com/golang/protobuf/proto" + import math "math" + + type FOO int32 + const ( + FOO_X FOO = 17 + ) + var FOO_name = map[int32]string{ + 17: "X", + } + var FOO_value = map[string]int32{ + "X": 17, + } + + func (x FOO) Enum() *FOO { + p := new(FOO) + *p = x + return p + } + func (x FOO) String() string { + return proto.EnumName(FOO_name, int32(x)) + } + func (x *FOO) UnmarshalJSON(data []byte) error { + value, err := proto.UnmarshalJSONEnum(FOO_value, data) + if err != nil { + return err + } + *x = FOO(value) + return nil + } + + type Test struct { + Label *string `protobuf:"bytes,1,req,name=label" json:"label,omitempty"` + Type *int32 `protobuf:"varint,2,opt,name=type,def=77" json:"type,omitempty"` + Reps []int64 `protobuf:"varint,3,rep,name=reps" json:"reps,omitempty"` + Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"` + // Types that are valid to be assigned to Union: + // *Test_Number + // *Test_Name + Union isTest_Union `protobuf_oneof:"union"` + XXX_unrecognized []byte `json:"-"` + } + func (m *Test) Reset() { *m = Test{} } + func (m *Test) String() string { return proto.CompactTextString(m) } + func (*Test) ProtoMessage() {} + + type isTest_Union interface { + isTest_Union() + } + + type Test_Number struct { + Number int32 `protobuf:"varint,6,opt,name=number"` + } + type Test_Name struct { + Name string `protobuf:"bytes,7,opt,name=name"` + } + + func (*Test_Number) isTest_Union() {} + func (*Test_Name) isTest_Union() {} + + func (m *Test) GetUnion() isTest_Union { + if m != nil { + return m.Union + } + return nil + } + const Default_Test_Type int32 = 77 + + func (m *Test) GetLabel() string { + if m != nil && m.Label != nil { + return *m.Label + } + return "" + } + + func (m *Test) GetType() int32 { + if m != nil && m.Type != nil { + return *m.Type + } + return Default_Test_Type + } + + func (m *Test) GetOptionalgroup() *Test_OptionalGroup { + if m != nil { + return m.Optionalgroup + } + return nil + } + + type Test_OptionalGroup struct { + RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"` + } + func (m *Test_OptionalGroup) Reset() { *m = Test_OptionalGroup{} } + func (m *Test_OptionalGroup) String() string { return proto.CompactTextString(m) } + + func (m *Test_OptionalGroup) GetRequiredField() string { + if m != nil && m.RequiredField != nil { + return *m.RequiredField + } + return "" + } + + func (m *Test) GetNumber() int32 { + if x, ok := m.GetUnion().(*Test_Number); ok { + return x.Number + } + return 0 + } + + func (m *Test) GetName() string { + if x, ok := m.GetUnion().(*Test_Name); ok { + return x.Name + } + return "" + } + + func init() { + proto.RegisterEnum("example.FOO", FOO_name, FOO_value) + } + +To create and play with a Test object: + + package main + + import ( + "log" + + "github.com/golang/protobuf/proto" + pb 
"./example.pb" + ) + + func main() { + test := &pb.Test{ + Label: proto.String("hello"), + Type: proto.Int32(17), + Reps: []int64{1, 2, 3}, + Optionalgroup: &pb.Test_OptionalGroup{ + RequiredField: proto.String("good bye"), + }, + Union: &pb.Test_Name{"fred"}, + } + data, err := proto.Marshal(test) + if err != nil { + log.Fatal("marshaling error: ", err) + } + newTest := &pb.Test{} + err = proto.Unmarshal(data, newTest) + if err != nil { + log.Fatal("unmarshaling error: ", err) + } + // Now test and newTest contain the same data. + if test.GetLabel() != newTest.GetLabel() { + log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel()) + } + // Use a type switch to determine which oneof was set. + switch u := test.Union.(type) { + case *pb.Test_Number: // u.Number contains the number. + case *pb.Test_Name: // u.Name contains the string. + } + // etc. + } +*/ +package proto + +import ( + "encoding/json" + "errors" + "fmt" + "log" + "reflect" + "sort" + "strconv" + "sync" +) + +var errInvalidUTF8 = errors.New("proto: invalid UTF-8 string") + +// Message is implemented by generated protocol buffer messages. +type Message interface { + Reset() + String() string + ProtoMessage() +} + +// Stats records allocation details about the protocol buffer encoders +// and decoders. Useful for tuning the library itself. +type Stats struct { + Emalloc uint64 // mallocs in encode + Dmalloc uint64 // mallocs in decode + Encode uint64 // number of encodes + Decode uint64 // number of decodes + Chit uint64 // number of cache hits + Cmiss uint64 // number of cache misses + Size uint64 // number of sizes +} + +// Set to true to enable stats collection. +const collectStats = false + +var stats Stats + +// GetStats returns a copy of the global Stats structure. +func GetStats() Stats { return stats } + +// A Buffer is a buffer manager for marshaling and unmarshaling +// protocol buffers. It may be reused between invocations to +// reduce memory usage. It is not necessary to use a Buffer; +// the global functions Marshal and Unmarshal create a +// temporary Buffer and are fine for most applications. +type Buffer struct { + buf []byte // encode/decode byte stream + index int // read point + + deterministic bool +} + +// NewBuffer allocates a new Buffer and initializes its internal data to +// the contents of the argument slice. +func NewBuffer(e []byte) *Buffer { + return &Buffer{buf: e} +} + +// Reset resets the Buffer, ready for marshaling a new protocol buffer. +func (p *Buffer) Reset() { + p.buf = p.buf[0:0] // for reading/writing + p.index = 0 // for reading +} + +// SetBuf replaces the internal buffer with the slice, +// ready for unmarshaling the contents of the slice. +func (p *Buffer) SetBuf(s []byte) { + p.buf = s + p.index = 0 +} + +// Bytes returns the contents of the Buffer. +func (p *Buffer) Bytes() []byte { return p.buf } + +// SetDeterministic sets whether to use deterministic serialization. +// +// Deterministic serialization guarantees that for a given binary, equal +// messages will always be serialized to the same bytes. This implies: +// +// - Repeated serialization of a message will return the same bytes. +// - Different processes of the same binary (which may be executing on +// different machines) will serialize equal messages to the same bytes. +// +// Note that the deterministic serialization is NOT canonical across +// languages. It is not guaranteed to remain stable over time. It is unstable +// across different builds with schema changes due to unknown fields. 
+// Users who need canonical serialization (e.g., persistent storage in a +// canonical form, fingerprinting, etc.) should define their own +// canonicalization specification and implement their own serializer rather +// than relying on this API. +// +// If deterministic serialization is requested, map entries will be sorted +// by keys in lexographical order. This is an implementation detail and +// subject to change. +func (p *Buffer) SetDeterministic(deterministic bool) { + p.deterministic = deterministic +} + +/* + * Helper routines for simplifying the creation of optional fields of basic type. + */ + +// Bool is a helper routine that allocates a new bool value +// to store v and returns a pointer to it. +func Bool(v bool) *bool { + return &v +} + +// Int32 is a helper routine that allocates a new int32 value +// to store v and returns a pointer to it. +func Int32(v int32) *int32 { + return &v +} + +// Int is a helper routine that allocates a new int32 value +// to store v and returns a pointer to it, but unlike Int32 +// its argument value is an int. +func Int(v int) *int32 { + p := new(int32) + *p = int32(v) + return p +} + +// Int64 is a helper routine that allocates a new int64 value +// to store v and returns a pointer to it. +func Int64(v int64) *int64 { + return &v +} + +// Float32 is a helper routine that allocates a new float32 value +// to store v and returns a pointer to it. +func Float32(v float32) *float32 { + return &v +} + +// Float64 is a helper routine that allocates a new float64 value +// to store v and returns a pointer to it. +func Float64(v float64) *float64 { + return &v +} + +// Uint32 is a helper routine that allocates a new uint32 value +// to store v and returns a pointer to it. +func Uint32(v uint32) *uint32 { + return &v +} + +// Uint64 is a helper routine that allocates a new uint64 value +// to store v and returns a pointer to it. +func Uint64(v uint64) *uint64 { + return &v +} + +// String is a helper routine that allocates a new string value +// to store v and returns a pointer to it. +func String(v string) *string { + return &v +} + +// EnumName is a helper function to simplify printing protocol buffer enums +// by name. Given an enum map and a value, it returns a useful string. +func EnumName(m map[int32]string, v int32) string { + s, ok := m[v] + if ok { + return s + } + return strconv.Itoa(int(v)) +} + +// UnmarshalJSONEnum is a helper function to simplify recovering enum int values +// from their JSON-encoded representation. Given a map from the enum's symbolic +// names to its int values, and a byte buffer containing the JSON-encoded +// value, it returns an int32 that can be cast to the enum type by the caller. +// +// The function can deal with both JSON representations, numeric and symbolic. +func UnmarshalJSONEnum(m map[string]int32, data []byte, enumName string) (int32, error) { + if data[0] == '"' { + // New style: enums are strings. + var repr string + if err := json.Unmarshal(data, &repr); err != nil { + return -1, err + } + val, ok := m[repr] + if !ok { + return 0, fmt.Errorf("unrecognized enum %s value %q", enumName, repr) + } + return val, nil + } + // Old style: enums are ints. + var val int32 + if err := json.Unmarshal(data, &val); err != nil { + return 0, fmt.Errorf("cannot unmarshal %#q into enum %s", data, enumName) + } + return val, nil +} + +// DebugPrint dumps the encoded data in b in a debugging format with a header +// including the string s. Used in testing but made available for general debugging. 
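+//
+// A hypothetical usage sketch from a client package:
+//
+//	data, _ := proto.Marshal(msg) // msg is any generated Message
+//	new(proto.Buffer).DebugPrint("msg", data)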
+func (p *Buffer) DebugPrint(s string, b []byte) { + var u uint64 + + obuf := p.buf + index := p.index + p.buf = b + p.index = 0 + depth := 0 + + fmt.Printf("\n--- %s ---\n", s) + +out: + for { + for i := 0; i < depth; i++ { + fmt.Print(" ") + } + + index := p.index + if index == len(p.buf) { + break + } + + op, err := p.DecodeVarint() + if err != nil { + fmt.Printf("%3d: fetching op err %v\n", index, err) + break out + } + tag := op >> 3 + wire := op & 7 + + switch wire { + default: + fmt.Printf("%3d: t=%3d unknown wire=%d\n", + index, tag, wire) + break out + + case WireBytes: + var r []byte + + r, err = p.DecodeRawBytes(false) + if err != nil { + break out + } + fmt.Printf("%3d: t=%3d bytes [%d]", index, tag, len(r)) + if len(r) <= 6 { + for i := 0; i < len(r); i++ { + fmt.Printf(" %.2x", r[i]) + } + } else { + for i := 0; i < 3; i++ { + fmt.Printf(" %.2x", r[i]) + } + fmt.Printf(" ..") + for i := len(r) - 3; i < len(r); i++ { + fmt.Printf(" %.2x", r[i]) + } + } + fmt.Printf("\n") + + case WireFixed32: + u, err = p.DecodeFixed32() + if err != nil { + fmt.Printf("%3d: t=%3d fix32 err %v\n", index, tag, err) + break out + } + fmt.Printf("%3d: t=%3d fix32 %d\n", index, tag, u) + + case WireFixed64: + u, err = p.DecodeFixed64() + if err != nil { + fmt.Printf("%3d: t=%3d fix64 err %v\n", index, tag, err) + break out + } + fmt.Printf("%3d: t=%3d fix64 %d\n", index, tag, u) + + case WireVarint: + u, err = p.DecodeVarint() + if err != nil { + fmt.Printf("%3d: t=%3d varint err %v\n", index, tag, err) + break out + } + fmt.Printf("%3d: t=%3d varint %d\n", index, tag, u) + + case WireStartGroup: + fmt.Printf("%3d: t=%3d start\n", index, tag) + depth++ + + case WireEndGroup: + depth-- + fmt.Printf("%3d: t=%3d end\n", index, tag) + } + } + + if depth != 0 { + fmt.Printf("%3d: start-end not balanced %d\n", p.index, depth) + } + fmt.Printf("\n") + + p.buf = obuf + p.index = index +} + +// SetDefaults sets unset protocol buffer fields to their default values. +// It only modifies fields that are both unset and have defined defaults. +// It recursively sets default values in any non-nil sub-messages. +func SetDefaults(pb Message) { + setDefaults(reflect.ValueOf(pb), true, false) +} + +// v is a pointer to a struct. +func setDefaults(v reflect.Value, recur, zeros bool) { + v = v.Elem() + + defaultMu.RLock() + dm, ok := defaults[v.Type()] + defaultMu.RUnlock() + if !ok { + dm = buildDefaultMessage(v.Type()) + defaultMu.Lock() + defaults[v.Type()] = dm + defaultMu.Unlock() + } + + for _, sf := range dm.scalars { + f := v.Field(sf.index) + if !f.IsNil() { + // field already set + continue + } + dv := sf.value + if dv == nil && !zeros { + // no explicit default, and don't want to set zeros + continue + } + fptr := f.Addr().Interface() // **T + // TODO: Consider batching the allocations we do here. 
+ switch sf.kind { + case reflect.Bool: + b := new(bool) + if dv != nil { + *b = dv.(bool) + } + *(fptr.(**bool)) = b + case reflect.Float32: + f := new(float32) + if dv != nil { + *f = dv.(float32) + } + *(fptr.(**float32)) = f + case reflect.Float64: + f := new(float64) + if dv != nil { + *f = dv.(float64) + } + *(fptr.(**float64)) = f + case reflect.Int32: + // might be an enum + if ft := f.Type(); ft != int32PtrType { + // enum + f.Set(reflect.New(ft.Elem())) + if dv != nil { + f.Elem().SetInt(int64(dv.(int32))) + } + } else { + // int32 field + i := new(int32) + if dv != nil { + *i = dv.(int32) + } + *(fptr.(**int32)) = i + } + case reflect.Int64: + i := new(int64) + if dv != nil { + *i = dv.(int64) + } + *(fptr.(**int64)) = i + case reflect.String: + s := new(string) + if dv != nil { + *s = dv.(string) + } + *(fptr.(**string)) = s + case reflect.Uint8: + // exceptional case: []byte + var b []byte + if dv != nil { + db := dv.([]byte) + b = make([]byte, len(db)) + copy(b, db) + } else { + b = []byte{} + } + *(fptr.(*[]byte)) = b + case reflect.Uint32: + u := new(uint32) + if dv != nil { + *u = dv.(uint32) + } + *(fptr.(**uint32)) = u + case reflect.Uint64: + u := new(uint64) + if dv != nil { + *u = dv.(uint64) + } + *(fptr.(**uint64)) = u + default: + log.Printf("proto: can't set default for field %v (sf.kind=%v)", f, sf.kind) + } + } + + for _, ni := range dm.nested { + f := v.Field(ni) + // f is *T or []*T or map[T]*T + switch f.Kind() { + case reflect.Ptr: + if f.IsNil() { + continue + } + setDefaults(f, recur, zeros) + + case reflect.Slice: + for i := 0; i < f.Len(); i++ { + e := f.Index(i) + if e.IsNil() { + continue + } + setDefaults(e, recur, zeros) + } + + case reflect.Map: + for _, k := range f.MapKeys() { + e := f.MapIndex(k) + if e.IsNil() { + continue + } + setDefaults(e, recur, zeros) + } + } + } +} + +var ( + // defaults maps a protocol buffer struct type to a slice of the fields, + // with its scalar fields set to their proto-declared non-zero default values. + defaultMu sync.RWMutex + defaults = make(map[reflect.Type]defaultMessage) + + int32PtrType = reflect.TypeOf((*int32)(nil)) +) + +// defaultMessage represents information about the default values of a message. +type defaultMessage struct { + scalars []scalarField + nested []int // struct field index of nested messages +} + +type scalarField struct { + index int // struct field index + kind reflect.Kind // element type (the T in *T or []T) + value interface{} // the proto-declared default value, or nil +} + +// t is a struct type. +func buildDefaultMessage(t reflect.Type) (dm defaultMessage) { + sprop := GetProperties(t) + for _, prop := range sprop.Prop { + fi, ok := sprop.decoderTags.get(prop.Tag) + if !ok { + // XXX_unrecognized + continue + } + ft := t.Field(fi).Type + + sf, nested, err := fieldDefault(ft, prop) + switch { + case err != nil: + log.Print(err) + case nested: + dm.nested = append(dm.nested, fi) + case sf != nil: + sf.index = fi + dm.scalars = append(dm.scalars, *sf) + } + } + + return dm +} + +// fieldDefault returns the scalarField for field type ft. +// sf will be nil if the field can not have a default. +// nestedMessage will be true if this is a nested message. +// Note that sf.index is not set on return. 
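+//
+// For example (a sketch based on the Test message in the package
+// documentation): a field tagged `protobuf:"varint,2,opt,name=type,def=77"`
+// yields a scalarField with kind reflect.Int32 and value int32(77), while a
+// nested message field such as Optionalgroup reports nestedMessage == true.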
+func fieldDefault(ft reflect.Type, prop *Properties) (sf *scalarField, nestedMessage bool, err error) { + var canHaveDefault bool + switch ft.Kind() { + case reflect.Ptr: + if ft.Elem().Kind() == reflect.Struct { + nestedMessage = true + } else { + canHaveDefault = true // proto2 scalar field + } + + case reflect.Slice: + switch ft.Elem().Kind() { + case reflect.Ptr: + nestedMessage = true // repeated message + case reflect.Uint8: + canHaveDefault = true // bytes field + } + + case reflect.Map: + if ft.Elem().Kind() == reflect.Ptr { + nestedMessage = true // map with message values + } + } + + if !canHaveDefault { + if nestedMessage { + return nil, true, nil + } + return nil, false, nil + } + + // We now know that ft is a pointer or slice. + sf = &scalarField{kind: ft.Elem().Kind()} + + // scalar fields without defaults + if !prop.HasDefault { + return sf, false, nil + } + + // a scalar field: either *T or []byte + switch ft.Elem().Kind() { + case reflect.Bool: + x, err := strconv.ParseBool(prop.Default) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default bool %q: %v", prop.Default, err) + } + sf.value = x + case reflect.Float32: + x, err := strconv.ParseFloat(prop.Default, 32) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default float32 %q: %v", prop.Default, err) + } + sf.value = float32(x) + case reflect.Float64: + x, err := strconv.ParseFloat(prop.Default, 64) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default float64 %q: %v", prop.Default, err) + } + sf.value = x + case reflect.Int32: + x, err := strconv.ParseInt(prop.Default, 10, 32) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default int32 %q: %v", prop.Default, err) + } + sf.value = int32(x) + case reflect.Int64: + x, err := strconv.ParseInt(prop.Default, 10, 64) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default int64 %q: %v", prop.Default, err) + } + sf.value = x + case reflect.String: + sf.value = prop.Default + case reflect.Uint8: + // []byte (not *uint8) + sf.value = []byte(prop.Default) + case reflect.Uint32: + x, err := strconv.ParseUint(prop.Default, 10, 32) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default uint32 %q: %v", prop.Default, err) + } + sf.value = uint32(x) + case reflect.Uint64: + x, err := strconv.ParseUint(prop.Default, 10, 64) + if err != nil { + return nil, false, fmt.Errorf("proto: bad default uint64 %q: %v", prop.Default, err) + } + sf.value = x + default: + return nil, false, fmt.Errorf("proto: unhandled def kind %v", ft.Elem().Kind()) + } + + return sf, false, nil +} + +// mapKeys returns a sort.Interface to be used for sorting the map keys. +// Map fields may have key types of non-float scalars, strings and enums. +func mapKeys(vs []reflect.Value) sort.Interface { + s := mapKeySorter{vs: vs} + + // Type specialization per https://developers.google.com/protocol-buffers/docs/proto#maps. 
+ if len(vs) == 0 { + return s + } + switch vs[0].Kind() { + case reflect.Int32, reflect.Int64: + s.less = func(a, b reflect.Value) bool { return a.Int() < b.Int() } + case reflect.Uint32, reflect.Uint64: + s.less = func(a, b reflect.Value) bool { return a.Uint() < b.Uint() } + case reflect.Bool: + s.less = func(a, b reflect.Value) bool { return !a.Bool() && b.Bool() } // false < true + case reflect.String: + s.less = func(a, b reflect.Value) bool { return a.String() < b.String() } + default: + panic(fmt.Sprintf("unsupported map key type: %v", vs[0].Kind())) + } + + return s +} + +type mapKeySorter struct { + vs []reflect.Value + less func(a, b reflect.Value) bool +} + +func (s mapKeySorter) Len() int { return len(s.vs) } +func (s mapKeySorter) Swap(i, j int) { s.vs[i], s.vs[j] = s.vs[j], s.vs[i] } +func (s mapKeySorter) Less(i, j int) bool { + return s.less(s.vs[i], s.vs[j]) +} + +// isProto3Zero reports whether v is a zero proto3 value. +func isProto3Zero(v reflect.Value) bool { + switch v.Kind() { + case reflect.Bool: + return !v.Bool() + case reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint32, reflect.Uint64: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.String: + return v.String() == "" + } + return false +} + +// ProtoPackageIsVersion2 is referenced from generated protocol buffer files +// to assert that that code is compatible with this version of the proto package. +const ProtoPackageIsVersion2 = true + +// ProtoPackageIsVersion1 is referenced from generated protocol buffer files +// to assert that that code is compatible with this version of the proto package. +const ProtoPackageIsVersion1 = true + +// InternalMessageInfo is a type used internally by generated .pb.go files. +// This type is not intended to be used by non-generated code. +// This type is not subject to any compatibility guarantee. +type InternalMessageInfo struct { + marshal *marshalInfo + unmarshal *unmarshalInfo + merge *mergeInfo + discard *discardInfo +} diff --git a/vendor/github.com/golang/protobuf/proto/message_set.go b/vendor/github.com/golang/protobuf/proto/message_set.go new file mode 100644 index 00000000..3b6ca41d --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/message_set.go @@ -0,0 +1,314 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +/* + * Support for message sets. + */ + +import ( + "bytes" + "encoding/json" + "errors" + "fmt" + "reflect" + "sort" + "sync" +) + +// errNoMessageTypeID occurs when a protocol buffer does not have a message type ID. +// A message type ID is required for storing a protocol buffer in a message set. +var errNoMessageTypeID = errors.New("proto does not have a message type ID") + +// The first two types (_MessageSet_Item and messageSet) +// model what the protocol compiler produces for the following protocol message: +// message MessageSet { +// repeated group Item = 1 { +// required int32 type_id = 2; +// required string message = 3; +// }; +// } +// That is the MessageSet wire format. We can't use a proto to generate these +// because that would introduce a circular dependency between it and this package. + +type _MessageSet_Item struct { + TypeId *int32 `protobuf:"varint,2,req,name=type_id"` + Message []byte `protobuf:"bytes,3,req,name=message"` +} + +type messageSet struct { + Item []*_MessageSet_Item `protobuf:"group,1,rep"` + XXX_unrecognized []byte + // TODO: caching? +} + +// Make sure messageSet is a Message. +var _ Message = (*messageSet)(nil) + +// messageTypeIder is an interface satisfied by a protocol buffer type +// that may be stored in a MessageSet. +type messageTypeIder interface { + MessageTypeId() int32 +} + +func (ms *messageSet) find(pb Message) *_MessageSet_Item { + mti, ok := pb.(messageTypeIder) + if !ok { + return nil + } + id := mti.MessageTypeId() + for _, item := range ms.Item { + if *item.TypeId == id { + return item + } + } + return nil +} + +func (ms *messageSet) Has(pb Message) bool { + return ms.find(pb) != nil +} + +func (ms *messageSet) Unmarshal(pb Message) error { + if item := ms.find(pb); item != nil { + return Unmarshal(item.Message, pb) + } + if _, ok := pb.(messageTypeIder); !ok { + return errNoMessageTypeID + } + return nil // TODO: return error instead? +} + +func (ms *messageSet) Marshal(pb Message) error { + msg, err := Marshal(pb) + if err != nil { + return err + } + if item := ms.find(pb); item != nil { + // reuse existing item + item.Message = msg + return nil + } + + mti, ok := pb.(messageTypeIder) + if !ok { + return errNoMessageTypeID + } + + mtid := mti.MessageTypeId() + ms.Item = append(ms.Item, &_MessageSet_Item{ + TypeId: &mtid, + Message: msg, + }) + return nil +} + +func (ms *messageSet) Reset() { *ms = messageSet{} } +func (ms *messageSet) String() string { return CompactTextString(ms) } +func (*messageSet) ProtoMessage() {} + +// Support for the message_set_wire_format message option. + +func skipVarint(buf []byte) []byte { + i := 0 + for ; buf[i]&0x80 != 0; i++ { + } + return buf[i+1:] +} + +// MarshalMessageSet encodes the extension map represented by m in the message set wire format. +// It is called by generated Marshal methods on protocol buffer messages with the message_set_wire_format option. 
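+//
+// The exts argument is expected to be either a *XXX_InternalExtensions or a
+// legacy map[int32]Extension; any other type yields a "not an extension map"
+// error (see the type switch in marshalMessageSet below).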
+func MarshalMessageSet(exts interface{}) ([]byte, error) { + return marshalMessageSet(exts, false) +} + +// marshaMessageSet implements above function, with the opt to turn on / off deterministic during Marshal. +func marshalMessageSet(exts interface{}, deterministic bool) ([]byte, error) { + switch exts := exts.(type) { + case *XXX_InternalExtensions: + var u marshalInfo + siz := u.sizeMessageSet(exts) + b := make([]byte, 0, siz) + return u.appendMessageSet(b, exts, deterministic) + + case map[int32]Extension: + // This is an old-style extension map. + // Wrap it in a new-style XXX_InternalExtensions. + ie := XXX_InternalExtensions{ + p: &struct { + mu sync.Mutex + extensionMap map[int32]Extension + }{ + extensionMap: exts, + }, + } + + var u marshalInfo + siz := u.sizeMessageSet(&ie) + b := make([]byte, 0, siz) + return u.appendMessageSet(b, &ie, deterministic) + + default: + return nil, errors.New("proto: not an extension map") + } +} + +// UnmarshalMessageSet decodes the extension map encoded in buf in the message set wire format. +// It is called by Unmarshal methods on protocol buffer messages with the message_set_wire_format option. +func UnmarshalMessageSet(buf []byte, exts interface{}) error { + var m map[int32]Extension + switch exts := exts.(type) { + case *XXX_InternalExtensions: + m = exts.extensionsWrite() + case map[int32]Extension: + m = exts + default: + return errors.New("proto: not an extension map") + } + + ms := new(messageSet) + if err := Unmarshal(buf, ms); err != nil { + return err + } + for _, item := range ms.Item { + id := *item.TypeId + msg := item.Message + + // Restore wire type and field number varint, plus length varint. + // Be careful to preserve duplicate items. + b := EncodeVarint(uint64(id)<<3 | WireBytes) + if ext, ok := m[id]; ok { + // Existing data; rip off the tag and length varint + // so we join the new data correctly. + // We can assume that ext.enc is set because we are unmarshaling. + o := ext.enc[len(b):] // skip wire type and field number + _, n := DecodeVarint(o) // calculate length of length varint + o = o[n:] // skip length varint + msg = append(o, msg...) // join old data and new data + } + b = append(b, EncodeVarint(uint64(len(msg)))...) + b = append(b, msg...) + + m[id] = Extension{enc: b} + } + return nil +} + +// MarshalMessageSetJSON encodes the extension map represented by m in JSON format. +// It is called by generated MarshalJSON methods on protocol buffer messages with the message_set_wire_format option. +func MarshalMessageSetJSON(exts interface{}) ([]byte, error) { + var m map[int32]Extension + switch exts := exts.(type) { + case *XXX_InternalExtensions: + var mu sync.Locker + m, mu = exts.extensionsRead() + if m != nil { + // Keep the extensions map locked until we're done marshaling to prevent + // races between marshaling and unmarshaling the lazily-{en,de}coded + // values. + mu.Lock() + defer mu.Unlock() + } + case map[int32]Extension: + m = exts + default: + return nil, errors.New("proto: not an extension map") + } + var b bytes.Buffer + b.WriteByte('{') + + // Process the map in key order for deterministic output. + ids := make([]int32, 0, len(m)) + for id := range m { + ids = append(ids, id) + } + sort.Sort(int32Slice(ids)) // int32Slice defined in text.go + + for i, id := range ids { + ext := m[id] + msd, ok := messageSetMap[id] + if !ok { + // Unknown type; we can't render it, so skip it. 
+ continue + } + + if i > 0 && b.Len() > 1 { + b.WriteByte(',') + } + + fmt.Fprintf(&b, `"[%s]":`, msd.name) + + x := ext.value + if x == nil { + x = reflect.New(msd.t.Elem()).Interface() + if err := Unmarshal(ext.enc, x.(Message)); err != nil { + return nil, err + } + } + d, err := json.Marshal(x) + if err != nil { + return nil, err + } + b.Write(d) + } + b.WriteByte('}') + return b.Bytes(), nil +} + +// UnmarshalMessageSetJSON decodes the extension map encoded in buf in JSON format. +// It is called by generated UnmarshalJSON methods on protocol buffer messages with the message_set_wire_format option. +func UnmarshalMessageSetJSON(buf []byte, exts interface{}) error { + // Common-case fast path. + if len(buf) == 0 || bytes.Equal(buf, []byte("{}")) { + return nil + } + + // This is fairly tricky, and it's not clear that it is needed. + return errors.New("TODO: UnmarshalMessageSetJSON not yet implemented") +} + +// A global registry of types that can be used in a MessageSet. + +var messageSetMap = make(map[int32]messageSetDesc) + +type messageSetDesc struct { + t reflect.Type // pointer to struct + name string +} + +// RegisterMessageSetType is called from the generated code. +func RegisterMessageSetType(m Message, fieldNum int32, name string) { + messageSetMap[fieldNum] = messageSetDesc{ + t: reflect.TypeOf(m), + name: name, + } +} diff --git a/vendor/github.com/golang/protobuf/proto/pointer_reflect.go b/vendor/github.com/golang/protobuf/proto/pointer_reflect.go new file mode 100644 index 00000000..b6cad908 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/pointer_reflect.go @@ -0,0 +1,357 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2012 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// +build purego appengine js + +// This file contains an implementation of proto field accesses using package reflect. +// It is slower than the code in pointer_unsafe.go but it avoids package unsafe and can +// be used on App Engine. 
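+//
+// This reflect-based implementation is selected by the build tags above; for
+// example, a build such as `go build -tags purego ./...` (hypothetical
+// invocation) compiles this file instead of pointer_unsafe.go.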
+ +package proto + +import ( + "reflect" + "sync" +) + +const unsafeAllowed = false + +// A field identifies a field in a struct, accessible from a pointer. +// In this implementation, a field is identified by the sequence of field indices +// passed to reflect's FieldByIndex. +type field []int + +// toField returns a field equivalent to the given reflect field. +func toField(f *reflect.StructField) field { + return f.Index +} + +// invalidField is an invalid field identifier. +var invalidField = field(nil) + +// zeroField is a noop when calling pointer.offset. +var zeroField = field([]int{}) + +// IsValid reports whether the field identifier is valid. +func (f field) IsValid() bool { return f != nil } + +// The pointer type is for the table-driven decoder. +// The implementation here uses a reflect.Value of pointer type to +// create a generic pointer. In pointer_unsafe.go we use unsafe +// instead of reflect to implement the same (but faster) interface. +type pointer struct { + v reflect.Value +} + +// toPointer converts an interface of pointer type to a pointer +// that points to the same target. +func toPointer(i *Message) pointer { + return pointer{v: reflect.ValueOf(*i)} +} + +// toAddrPointer converts an interface to a pointer that points to +// the interface data. +func toAddrPointer(i *interface{}, isptr bool) pointer { + v := reflect.ValueOf(*i) + u := reflect.New(v.Type()) + u.Elem().Set(v) + return pointer{v: u} +} + +// valToPointer converts v to a pointer. v must be of pointer type. +func valToPointer(v reflect.Value) pointer { + return pointer{v: v} +} + +// offset converts from a pointer to a structure to a pointer to +// one of its fields. +func (p pointer) offset(f field) pointer { + return pointer{v: p.v.Elem().FieldByIndex(f).Addr()} +} + +func (p pointer) isNil() bool { + return p.v.IsNil() +} + +// grow updates the slice s in place to make it one element longer. +// s must be addressable. +// Returns the (addressable) new element. +func grow(s reflect.Value) reflect.Value { + n, m := s.Len(), s.Cap() + if n < m { + s.SetLen(n + 1) + } else { + s.Set(reflect.Append(s, reflect.Zero(s.Type().Elem()))) + } + return s.Index(n) +} + +func (p pointer) toInt64() *int64 { + return p.v.Interface().(*int64) +} +func (p pointer) toInt64Ptr() **int64 { + return p.v.Interface().(**int64) +} +func (p pointer) toInt64Slice() *[]int64 { + return p.v.Interface().(*[]int64) +} + +var int32ptr = reflect.TypeOf((*int32)(nil)) + +func (p pointer) toInt32() *int32 { + return p.v.Convert(int32ptr).Interface().(*int32) +} + +// The toInt32Ptr/Slice methods don't work because of enums. +// Instead, we must use set/get methods for the int32ptr/slice case. +/* + func (p pointer) toInt32Ptr() **int32 { + return p.v.Interface().(**int32) +} + func (p pointer) toInt32Slice() *[]int32 { + return p.v.Interface().(*[]int32) +} +*/ +func (p pointer) getInt32Ptr() *int32 { + if p.v.Type().Elem().Elem() == reflect.TypeOf(int32(0)) { + // raw int32 type + return p.v.Elem().Interface().(*int32) + } + // an enum + return p.v.Elem().Convert(int32PtrType).Interface().(*int32) +} +func (p pointer) setInt32Ptr(v int32) { + // Allocate value in a *int32. Possibly convert that to a *enum. + // Then assign it to a **int32 or **enum. + // Note: we can convert *int32 to *enum, but we can't convert + // **int32 to **enum! + p.v.Elem().Set(reflect.ValueOf(&v).Convert(p.v.Type().Elem())) +} + +// getInt32Slice copies []int32 from p as a new slice. +// This behavior differs from the implementation in pointer_unsafe.go. 
+func (p pointer) getInt32Slice() []int32 { + if p.v.Type().Elem().Elem() == reflect.TypeOf(int32(0)) { + // raw int32 type + return p.v.Elem().Interface().([]int32) + } + // an enum + // Allocate a []int32, then assign []enum's values into it. + // Note: we can't convert []enum to []int32. + slice := p.v.Elem() + s := make([]int32, slice.Len()) + for i := 0; i < slice.Len(); i++ { + s[i] = int32(slice.Index(i).Int()) + } + return s +} + +// setInt32Slice copies []int32 into p as a new slice. +// This behavior differs from the implementation in pointer_unsafe.go. +func (p pointer) setInt32Slice(v []int32) { + if p.v.Type().Elem().Elem() == reflect.TypeOf(int32(0)) { + // raw int32 type + p.v.Elem().Set(reflect.ValueOf(v)) + return + } + // an enum + // Allocate a []enum, then assign []int32's values into it. + // Note: we can't convert []enum to []int32. + slice := reflect.MakeSlice(p.v.Type().Elem(), len(v), cap(v)) + for i, x := range v { + slice.Index(i).SetInt(int64(x)) + } + p.v.Elem().Set(slice) +} +func (p pointer) appendInt32Slice(v int32) { + grow(p.v.Elem()).SetInt(int64(v)) +} + +func (p pointer) toUint64() *uint64 { + return p.v.Interface().(*uint64) +} +func (p pointer) toUint64Ptr() **uint64 { + return p.v.Interface().(**uint64) +} +func (p pointer) toUint64Slice() *[]uint64 { + return p.v.Interface().(*[]uint64) +} +func (p pointer) toUint32() *uint32 { + return p.v.Interface().(*uint32) +} +func (p pointer) toUint32Ptr() **uint32 { + return p.v.Interface().(**uint32) +} +func (p pointer) toUint32Slice() *[]uint32 { + return p.v.Interface().(*[]uint32) +} +func (p pointer) toBool() *bool { + return p.v.Interface().(*bool) +} +func (p pointer) toBoolPtr() **bool { + return p.v.Interface().(**bool) +} +func (p pointer) toBoolSlice() *[]bool { + return p.v.Interface().(*[]bool) +} +func (p pointer) toFloat64() *float64 { + return p.v.Interface().(*float64) +} +func (p pointer) toFloat64Ptr() **float64 { + return p.v.Interface().(**float64) +} +func (p pointer) toFloat64Slice() *[]float64 { + return p.v.Interface().(*[]float64) +} +func (p pointer) toFloat32() *float32 { + return p.v.Interface().(*float32) +} +func (p pointer) toFloat32Ptr() **float32 { + return p.v.Interface().(**float32) +} +func (p pointer) toFloat32Slice() *[]float32 { + return p.v.Interface().(*[]float32) +} +func (p pointer) toString() *string { + return p.v.Interface().(*string) +} +func (p pointer) toStringPtr() **string { + return p.v.Interface().(**string) +} +func (p pointer) toStringSlice() *[]string { + return p.v.Interface().(*[]string) +} +func (p pointer) toBytes() *[]byte { + return p.v.Interface().(*[]byte) +} +func (p pointer) toBytesSlice() *[][]byte { + return p.v.Interface().(*[][]byte) +} +func (p pointer) toExtensions() *XXX_InternalExtensions { + return p.v.Interface().(*XXX_InternalExtensions) +} +func (p pointer) toOldExtensions() *map[int32]Extension { + return p.v.Interface().(*map[int32]Extension) +} +func (p pointer) getPointer() pointer { + return pointer{v: p.v.Elem()} +} +func (p pointer) setPointer(q pointer) { + p.v.Elem().Set(q.v) +} +func (p pointer) appendPointer(q pointer) { + grow(p.v.Elem()).Set(q.v) +} + +// getPointerSlice copies []*T from p as a new []pointer. +// This behavior differs from the implementation in pointer_unsafe.go. 
+func (p pointer) getPointerSlice() []pointer { + if p.v.IsNil() { + return nil + } + n := p.v.Elem().Len() + s := make([]pointer, n) + for i := 0; i < n; i++ { + s[i] = pointer{v: p.v.Elem().Index(i)} + } + return s +} + +// setPointerSlice copies []pointer into p as a new []*T. +// This behavior differs from the implementation in pointer_unsafe.go. +func (p pointer) setPointerSlice(v []pointer) { + if v == nil { + p.v.Elem().Set(reflect.New(p.v.Elem().Type()).Elem()) + return + } + s := reflect.MakeSlice(p.v.Elem().Type(), 0, len(v)) + for _, p := range v { + s = reflect.Append(s, p.v) + } + p.v.Elem().Set(s) +} + +// getInterfacePointer returns a pointer that points to the +// interface data of the interface pointed by p. +func (p pointer) getInterfacePointer() pointer { + if p.v.Elem().IsNil() { + return pointer{v: p.v.Elem()} + } + return pointer{v: p.v.Elem().Elem().Elem().Field(0).Addr()} // *interface -> interface -> *struct -> struct +} + +func (p pointer) asPointerTo(t reflect.Type) reflect.Value { + // TODO: check that p.v.Type().Elem() == t? + return p.v +} + +func atomicLoadUnmarshalInfo(p **unmarshalInfo) *unmarshalInfo { + atomicLock.Lock() + defer atomicLock.Unlock() + return *p +} +func atomicStoreUnmarshalInfo(p **unmarshalInfo, v *unmarshalInfo) { + atomicLock.Lock() + defer atomicLock.Unlock() + *p = v +} +func atomicLoadMarshalInfo(p **marshalInfo) *marshalInfo { + atomicLock.Lock() + defer atomicLock.Unlock() + return *p +} +func atomicStoreMarshalInfo(p **marshalInfo, v *marshalInfo) { + atomicLock.Lock() + defer atomicLock.Unlock() + *p = v +} +func atomicLoadMergeInfo(p **mergeInfo) *mergeInfo { + atomicLock.Lock() + defer atomicLock.Unlock() + return *p +} +func atomicStoreMergeInfo(p **mergeInfo, v *mergeInfo) { + atomicLock.Lock() + defer atomicLock.Unlock() + *p = v +} +func atomicLoadDiscardInfo(p **discardInfo) *discardInfo { + atomicLock.Lock() + defer atomicLock.Unlock() + return *p +} +func atomicStoreDiscardInfo(p **discardInfo, v *discardInfo) { + atomicLock.Lock() + defer atomicLock.Unlock() + *p = v +} + +var atomicLock sync.Mutex diff --git a/vendor/github.com/golang/protobuf/proto/pointer_unsafe.go b/vendor/github.com/golang/protobuf/proto/pointer_unsafe.go new file mode 100644 index 00000000..d55a335d --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/pointer_unsafe.go @@ -0,0 +1,308 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2012 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// +build !purego,!appengine,!js + +// This file contains the implementation of the proto field accesses using package unsafe. + +package proto + +import ( + "reflect" + "sync/atomic" + "unsafe" +) + +const unsafeAllowed = true + +// A field identifies a field in a struct, accessible from a pointer. +// In this implementation, a field is identified by its byte offset from the start of the struct. +type field uintptr + +// toField returns a field equivalent to the given reflect field. +func toField(f *reflect.StructField) field { + return field(f.Offset) +} + +// invalidField is an invalid field identifier. +const invalidField = ^field(0) + +// zeroField is a noop when calling pointer.offset. +const zeroField = field(0) + +// IsValid reports whether the field identifier is valid. +func (f field) IsValid() bool { + return f != invalidField +} + +// The pointer type below is for the new table-driven encoder/decoder. +// The implementation here uses unsafe.Pointer to create a generic pointer. +// In pointer_reflect.go we use reflect instead of unsafe to implement +// the same (but slower) interface. +type pointer struct { + p unsafe.Pointer +} + +// size of pointer +var ptrSize = unsafe.Sizeof(uintptr(0)) + +// toPointer converts an interface of pointer type to a pointer +// that points to the same target. +func toPointer(i *Message) pointer { + // Super-tricky - read pointer out of data word of interface value. + // Saves ~25ns over the equivalent: + // return valToPointer(reflect.ValueOf(*i)) + return pointer{p: (*[2]unsafe.Pointer)(unsafe.Pointer(i))[1]} +} + +// toAddrPointer converts an interface to a pointer that points to +// the interface data. +func toAddrPointer(i *interface{}, isptr bool) pointer { + // Super-tricky - read or get the address of data word of interface value. + if isptr { + // The interface is of pointer type, thus it is a direct interface. + // The data word is the pointer data itself. We take its address. + return pointer{p: unsafe.Pointer(uintptr(unsafe.Pointer(i)) + ptrSize)} + } + // The interface is not of pointer type. The data word is the pointer + // to the data. + return pointer{p: (*[2]unsafe.Pointer)(unsafe.Pointer(i))[1]} +} + +// valToPointer converts v to a pointer. v must be of pointer type. +func valToPointer(v reflect.Value) pointer { + return pointer{p: unsafe.Pointer(v.Pointer())} +} + +// offset converts from a pointer to a structure to a pointer to +// one of its fields. +func (p pointer) offset(f field) pointer { + // For safety, we should panic if !f.IsValid, however calling panic causes + // this to no longer be inlineable, which is a serious performance cost. 
+ /* + if !f.IsValid() { + panic("invalid field") + } + */ + return pointer{p: unsafe.Pointer(uintptr(p.p) + uintptr(f))} +} + +func (p pointer) isNil() bool { + return p.p == nil +} + +func (p pointer) toInt64() *int64 { + return (*int64)(p.p) +} +func (p pointer) toInt64Ptr() **int64 { + return (**int64)(p.p) +} +func (p pointer) toInt64Slice() *[]int64 { + return (*[]int64)(p.p) +} +func (p pointer) toInt32() *int32 { + return (*int32)(p.p) +} + +// See pointer_reflect.go for why toInt32Ptr/Slice doesn't exist. +/* + func (p pointer) toInt32Ptr() **int32 { + return (**int32)(p.p) + } + func (p pointer) toInt32Slice() *[]int32 { + return (*[]int32)(p.p) + } +*/ +func (p pointer) getInt32Ptr() *int32 { + return *(**int32)(p.p) +} +func (p pointer) setInt32Ptr(v int32) { + *(**int32)(p.p) = &v +} + +// getInt32Slice loads a []int32 from p. +// The value returned is aliased with the original slice. +// This behavior differs from the implementation in pointer_reflect.go. +func (p pointer) getInt32Slice() []int32 { + return *(*[]int32)(p.p) +} + +// setInt32Slice stores a []int32 to p. +// The value set is aliased with the input slice. +// This behavior differs from the implementation in pointer_reflect.go. +func (p pointer) setInt32Slice(v []int32) { + *(*[]int32)(p.p) = v +} + +// TODO: Can we get rid of appendInt32Slice and use setInt32Slice instead? +func (p pointer) appendInt32Slice(v int32) { + s := (*[]int32)(p.p) + *s = append(*s, v) +} + +func (p pointer) toUint64() *uint64 { + return (*uint64)(p.p) +} +func (p pointer) toUint64Ptr() **uint64 { + return (**uint64)(p.p) +} +func (p pointer) toUint64Slice() *[]uint64 { + return (*[]uint64)(p.p) +} +func (p pointer) toUint32() *uint32 { + return (*uint32)(p.p) +} +func (p pointer) toUint32Ptr() **uint32 { + return (**uint32)(p.p) +} +func (p pointer) toUint32Slice() *[]uint32 { + return (*[]uint32)(p.p) +} +func (p pointer) toBool() *bool { + return (*bool)(p.p) +} +func (p pointer) toBoolPtr() **bool { + return (**bool)(p.p) +} +func (p pointer) toBoolSlice() *[]bool { + return (*[]bool)(p.p) +} +func (p pointer) toFloat64() *float64 { + return (*float64)(p.p) +} +func (p pointer) toFloat64Ptr() **float64 { + return (**float64)(p.p) +} +func (p pointer) toFloat64Slice() *[]float64 { + return (*[]float64)(p.p) +} +func (p pointer) toFloat32() *float32 { + return (*float32)(p.p) +} +func (p pointer) toFloat32Ptr() **float32 { + return (**float32)(p.p) +} +func (p pointer) toFloat32Slice() *[]float32 { + return (*[]float32)(p.p) +} +func (p pointer) toString() *string { + return (*string)(p.p) +} +func (p pointer) toStringPtr() **string { + return (**string)(p.p) +} +func (p pointer) toStringSlice() *[]string { + return (*[]string)(p.p) +} +func (p pointer) toBytes() *[]byte { + return (*[]byte)(p.p) +} +func (p pointer) toBytesSlice() *[][]byte { + return (*[][]byte)(p.p) +} +func (p pointer) toExtensions() *XXX_InternalExtensions { + return (*XXX_InternalExtensions)(p.p) +} +func (p pointer) toOldExtensions() *map[int32]Extension { + return (*map[int32]Extension)(p.p) +} + +// getPointerSlice loads []*T from p as a []pointer. +// The value returned is aliased with the original slice. +// This behavior differs from the implementation in pointer_reflect.go. +func (p pointer) getPointerSlice() []pointer { + // Super-tricky - p should point to a []*T where T is a + // message type. We load it as []pointer. + return *(*[]pointer)(p.p) +} + +// setPointerSlice stores []pointer into p as a []*T. 
+// The value set is aliased with the input slice. +// This behavior differs from the implementation in pointer_reflect.go. +func (p pointer) setPointerSlice(v []pointer) { + // Super-tricky - p should point to a []*T where T is a + // message type. We store it as []pointer. + *(*[]pointer)(p.p) = v +} + +// getPointer loads the pointer at p and returns it. +func (p pointer) getPointer() pointer { + return pointer{p: *(*unsafe.Pointer)(p.p)} +} + +// setPointer stores the pointer q at p. +func (p pointer) setPointer(q pointer) { + *(*unsafe.Pointer)(p.p) = q.p +} + +// append q to the slice pointed to by p. +func (p pointer) appendPointer(q pointer) { + s := (*[]unsafe.Pointer)(p.p) + *s = append(*s, q.p) +} + +// getInterfacePointer returns a pointer that points to the +// interface data of the interface pointed by p. +func (p pointer) getInterfacePointer() pointer { + // Super-tricky - read pointer out of data word of interface value. + return pointer{p: (*(*[2]unsafe.Pointer)(p.p))[1]} +} + +// asPointerTo returns a reflect.Value that is a pointer to an +// object of type t stored at p. +func (p pointer) asPointerTo(t reflect.Type) reflect.Value { + return reflect.NewAt(t, p.p) +} + +func atomicLoadUnmarshalInfo(p **unmarshalInfo) *unmarshalInfo { + return (*unmarshalInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p)))) +} +func atomicStoreUnmarshalInfo(p **unmarshalInfo, v *unmarshalInfo) { + atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v)) +} +func atomicLoadMarshalInfo(p **marshalInfo) *marshalInfo { + return (*marshalInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p)))) +} +func atomicStoreMarshalInfo(p **marshalInfo, v *marshalInfo) { + atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v)) +} +func atomicLoadMergeInfo(p **mergeInfo) *mergeInfo { + return (*mergeInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p)))) +} +func atomicStoreMergeInfo(p **mergeInfo, v *mergeInfo) { + atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v)) +} +func atomicLoadDiscardInfo(p **discardInfo) *discardInfo { + return (*discardInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p)))) +} +func atomicStoreDiscardInfo(p **discardInfo, v *discardInfo) { + atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v)) +} diff --git a/vendor/github.com/golang/protobuf/proto/properties.go b/vendor/github.com/golang/protobuf/proto/properties.go new file mode 100644 index 00000000..f710adab --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/properties.go @@ -0,0 +1,544 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. 
+// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +/* + * Routines for encoding data into the wire format for protocol buffers. + */ + +import ( + "fmt" + "log" + "os" + "reflect" + "sort" + "strconv" + "strings" + "sync" +) + +const debug bool = false + +// Constants that identify the encoding of a value on the wire. +const ( + WireVarint = 0 + WireFixed64 = 1 + WireBytes = 2 + WireStartGroup = 3 + WireEndGroup = 4 + WireFixed32 = 5 +) + +// tagMap is an optimization over map[int]int for typical protocol buffer +// use-cases. Encoded protocol buffers are often in tag order with small tag +// numbers. +type tagMap struct { + fastTags []int + slowTags map[int]int +} + +// tagMapFastLimit is the upper bound on the tag number that will be stored in +// the tagMap slice rather than its map. +const tagMapFastLimit = 1024 + +func (p *tagMap) get(t int) (int, bool) { + if t > 0 && t < tagMapFastLimit { + if t >= len(p.fastTags) { + return 0, false + } + fi := p.fastTags[t] + return fi, fi >= 0 + } + fi, ok := p.slowTags[t] + return fi, ok +} + +func (p *tagMap) put(t int, fi int) { + if t > 0 && t < tagMapFastLimit { + for len(p.fastTags) < t+1 { + p.fastTags = append(p.fastTags, -1) + } + p.fastTags[t] = fi + return + } + if p.slowTags == nil { + p.slowTags = make(map[int]int) + } + p.slowTags[t] = fi +} + +// StructProperties represents properties for all the fields of a struct. +// decoderTags and decoderOrigNames should only be used by the decoder. +type StructProperties struct { + Prop []*Properties // properties for each field + reqCount int // required count + decoderTags tagMap // map from proto tag to struct field number + decoderOrigNames map[string]int // map from original name to struct field number + order []int // list of struct field numbers in tag order + + // OneofTypes contains information about the oneof fields in this message. + // It is keyed by the original name of a field. + OneofTypes map[string]*OneofProperties +} + +// OneofProperties represents information about a specific field in a oneof. +type OneofProperties struct { + Type reflect.Type // pointer to generated struct type for this oneof field + Field int // struct field number of the containing oneof in the message + Prop *Properties +} + +// Implement the sorting interface so we can sort the fields in tag order, as recommended by the spec. +// See encode.go, (*Buffer).enc_struct. + +func (sp *StructProperties) Len() int { return len(sp.order) } +func (sp *StructProperties) Less(i, j int) bool { + return sp.Prop[sp.order[i]].Tag < sp.Prop[sp.order[j]].Tag +} +func (sp *StructProperties) Swap(i, j int) { sp.order[i], sp.order[j] = sp.order[j], sp.order[i] } + +// Properties represents the protocol-specific behavior of a single struct field. 
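+//
+// For example, the struct tag `protobuf:"bytes,1,req,name=label"` from the
+// package documentation is parsed (by Parse below) into Wire "bytes", Tag 1,
+// Required true and OrigName "label".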
+type Properties struct { + Name string // name of the field, for error messages + OrigName string // original name before protocol compiler (always set) + JSONName string // name to use for JSON; determined by protoc + Wire string + WireType int + Tag int + Required bool + Optional bool + Repeated bool + Packed bool // relevant for repeated primitives only + Enum string // set for enum types only + proto3 bool // whether this is known to be a proto3 field; set for []byte only + oneof bool // whether this is a oneof field + + Default string // default value + HasDefault bool // whether an explicit default was provided + + stype reflect.Type // set for struct types only + sprop *StructProperties // set for struct types only + + mtype reflect.Type // set for map types only + mkeyprop *Properties // set for map types only + mvalprop *Properties // set for map types only +} + +// String formats the properties in the protobuf struct field tag style. +func (p *Properties) String() string { + s := p.Wire + s += "," + s += strconv.Itoa(p.Tag) + if p.Required { + s += ",req" + } + if p.Optional { + s += ",opt" + } + if p.Repeated { + s += ",rep" + } + if p.Packed { + s += ",packed" + } + s += ",name=" + p.OrigName + if p.JSONName != p.OrigName { + s += ",json=" + p.JSONName + } + if p.proto3 { + s += ",proto3" + } + if p.oneof { + s += ",oneof" + } + if len(p.Enum) > 0 { + s += ",enum=" + p.Enum + } + if p.HasDefault { + s += ",def=" + p.Default + } + return s +} + +// Parse populates p by parsing a string in the protobuf struct field tag style. +func (p *Properties) Parse(s string) { + // "bytes,49,opt,name=foo,def=hello!" + fields := strings.Split(s, ",") // breaks def=, but handled below. + if len(fields) < 2 { + fmt.Fprintf(os.Stderr, "proto: tag has too few fields: %q\n", s) + return + } + + p.Wire = fields[0] + switch p.Wire { + case "varint": + p.WireType = WireVarint + case "fixed32": + p.WireType = WireFixed32 + case "fixed64": + p.WireType = WireFixed64 + case "zigzag32": + p.WireType = WireVarint + case "zigzag64": + p.WireType = WireVarint + case "bytes", "group": + p.WireType = WireBytes + // no numeric converter for non-numeric types + default: + fmt.Fprintf(os.Stderr, "proto: tag has unknown wire type: %q\n", s) + return + } + + var err error + p.Tag, err = strconv.Atoi(fields[1]) + if err != nil { + return + } + +outer: + for i := 2; i < len(fields); i++ { + f := fields[i] + switch { + case f == "req": + p.Required = true + case f == "opt": + p.Optional = true + case f == "rep": + p.Repeated = true + case f == "packed": + p.Packed = true + case strings.HasPrefix(f, "name="): + p.OrigName = f[5:] + case strings.HasPrefix(f, "json="): + p.JSONName = f[5:] + case strings.HasPrefix(f, "enum="): + p.Enum = f[5:] + case f == "proto3": + p.proto3 = true + case f == "oneof": + p.oneof = true + case strings.HasPrefix(f, "def="): + p.HasDefault = true + p.Default = f[4:] // rest of string + if i+1 < len(fields) { + // Commas aren't escaped, and def is always last. + p.Default += "," + strings.Join(fields[i+1:], ",") + break outer + } + } + } +} + +var protoMessageType = reflect.TypeOf((*Message)(nil)).Elem() + +// setFieldProps initializes the field properties for submessages and maps. 
+func (p *Properties) setFieldProps(typ reflect.Type, f *reflect.StructField, lockGetProp bool) { + switch t1 := typ; t1.Kind() { + case reflect.Ptr: + if t1.Elem().Kind() == reflect.Struct { + p.stype = t1.Elem() + } + + case reflect.Slice: + if t2 := t1.Elem(); t2.Kind() == reflect.Ptr && t2.Elem().Kind() == reflect.Struct { + p.stype = t2.Elem() + } + + case reflect.Map: + p.mtype = t1 + p.mkeyprop = &Properties{} + p.mkeyprop.init(reflect.PtrTo(p.mtype.Key()), "Key", f.Tag.Get("protobuf_key"), nil, lockGetProp) + p.mvalprop = &Properties{} + vtype := p.mtype.Elem() + if vtype.Kind() != reflect.Ptr && vtype.Kind() != reflect.Slice { + // The value type is not a message (*T) or bytes ([]byte), + // so we need encoders for the pointer to this type. + vtype = reflect.PtrTo(vtype) + } + p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp) + } + + if p.stype != nil { + if lockGetProp { + p.sprop = GetProperties(p.stype) + } else { + p.sprop = getPropertiesLocked(p.stype) + } + } +} + +var ( + marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem() +) + +// Init populates the properties from a protocol buffer struct tag. +func (p *Properties) Init(typ reflect.Type, name, tag string, f *reflect.StructField) { + p.init(typ, name, tag, f, true) +} + +func (p *Properties) init(typ reflect.Type, name, tag string, f *reflect.StructField, lockGetProp bool) { + // "bytes,49,opt,def=hello!" + p.Name = name + p.OrigName = name + if tag == "" { + return + } + p.Parse(tag) + p.setFieldProps(typ, f, lockGetProp) +} + +var ( + propertiesMu sync.RWMutex + propertiesMap = make(map[reflect.Type]*StructProperties) +) + +// GetProperties returns the list of properties for the type represented by t. +// t must represent a generated struct type of a protocol message. +func GetProperties(t reflect.Type) *StructProperties { + if t.Kind() != reflect.Struct { + panic("proto: type must have kind struct") + } + + // Most calls to GetProperties in a long-running program will be + // retrieving details for types we have seen before. + propertiesMu.RLock() + sprop, ok := propertiesMap[t] + propertiesMu.RUnlock() + if ok { + if collectStats { + stats.Chit++ + } + return sprop + } + + propertiesMu.Lock() + sprop = getPropertiesLocked(t) + propertiesMu.Unlock() + return sprop +} + +// getPropertiesLocked requires that propertiesMu is held. +func getPropertiesLocked(t reflect.Type) *StructProperties { + if prop, ok := propertiesMap[t]; ok { + if collectStats { + stats.Chit++ + } + return prop + } + if collectStats { + stats.Cmiss++ + } + + prop := new(StructProperties) + // in case of recursive protos, fill this in now. + propertiesMap[t] = prop + + // build properties + prop.Prop = make([]*Properties, t.NumField()) + prop.order = make([]int, t.NumField()) + + for i := 0; i < t.NumField(); i++ { + f := t.Field(i) + p := new(Properties) + name := f.Name + p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false) + + oneof := f.Tag.Get("protobuf_oneof") // special case + if oneof != "" { + // Oneof fields don't use the traditional protobuf tag. + p.OrigName = oneof + } + prop.Prop[i] = p + prop.order[i] = i + if debug { + print(i, " ", f.Name, " ", t.String(), " ") + if p.Tag > 0 { + print(p.String()) + } + print("\n") + } + } + + // Re-order prop.order. 
+ sort.Sort(prop) + + type oneofMessage interface { + XXX_OneofFuncs() (func(Message, *Buffer) error, func(Message, int, int, *Buffer) (bool, error), func(Message) int, []interface{}) + } + if om, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(oneofMessage); ok { + var oots []interface{} + _, _, _, oots = om.XXX_OneofFuncs() + + // Interpret oneof metadata. + prop.OneofTypes = make(map[string]*OneofProperties) + for _, oot := range oots { + oop := &OneofProperties{ + Type: reflect.ValueOf(oot).Type(), // *T + Prop: new(Properties), + } + sft := oop.Type.Elem().Field(0) + oop.Prop.Name = sft.Name + oop.Prop.Parse(sft.Tag.Get("protobuf")) + // There will be exactly one interface field that + // this new value is assignable to. + for i := 0; i < t.NumField(); i++ { + f := t.Field(i) + if f.Type.Kind() != reflect.Interface { + continue + } + if !oop.Type.AssignableTo(f.Type) { + continue + } + oop.Field = i + break + } + prop.OneofTypes[oop.Prop.OrigName] = oop + } + } + + // build required counts + // build tags + reqCount := 0 + prop.decoderOrigNames = make(map[string]int) + for i, p := range prop.Prop { + if strings.HasPrefix(p.Name, "XXX_") { + // Internal fields should not appear in tags/origNames maps. + // They are handled specially when encoding and decoding. + continue + } + if p.Required { + reqCount++ + } + prop.decoderTags.put(p.Tag, i) + prop.decoderOrigNames[p.OrigName] = i + } + prop.reqCount = reqCount + + return prop +} + +// A global registry of enum types. +// The generated code will register the generated maps by calling RegisterEnum. + +var enumValueMaps = make(map[string]map[string]int32) + +// RegisterEnum is called from the generated code to install the enum descriptor +// maps into the global table to aid parsing text format protocol buffers. +func RegisterEnum(typeName string, unusedNameMap map[int32]string, valueMap map[string]int32) { + if _, ok := enumValueMaps[typeName]; ok { + panic("proto: duplicate enum registered: " + typeName) + } + enumValueMaps[typeName] = valueMap +} + +// EnumValueMap returns the mapping from names to integers of the +// enum type enumType, or a nil if not found. +func EnumValueMap(enumType string) map[string]int32 { + return enumValueMaps[enumType] +} + +// A registry of all linked message types. +// The string is a fully-qualified proto name ("pkg.Message"). +var ( + protoTypedNils = make(map[string]Message) // a map from proto names to typed nil pointers + protoMapTypes = make(map[string]reflect.Type) // a map from proto names to map types + revProtoTypes = make(map[reflect.Type]string) +) + +// RegisterType is called from generated code and maps from the fully qualified +// proto name to the type (pointer to struct) of the protocol buffer. +func RegisterType(x Message, name string) { + if _, ok := protoTypedNils[name]; ok { + // TODO: Some day, make this a panic. + log.Printf("proto: duplicate proto type registered: %s", name) + return + } + t := reflect.TypeOf(x) + if v := reflect.ValueOf(x); v.Kind() == reflect.Ptr && v.Pointer() == 0 { + // Generated code always calls RegisterType with nil x. + // This check is just for extra safety. + protoTypedNils[name] = x + } else { + protoTypedNils[name] = reflect.Zero(t).Interface().(Message) + } + revProtoTypes[t] = name +} + +// RegisterMapType is called from generated code and maps from the fully qualified +// proto name to the native map type of the proto map definition. 
+func RegisterMapType(x interface{}, name string) { + if reflect.TypeOf(x).Kind() != reflect.Map { + panic(fmt.Sprintf("RegisterMapType(%T, %q); want map", x, name)) + } + if _, ok := protoMapTypes[name]; ok { + log.Printf("proto: duplicate proto type registered: %s", name) + return + } + t := reflect.TypeOf(x) + protoMapTypes[name] = t + revProtoTypes[t] = name +} + +// MessageName returns the fully-qualified proto name for the given message type. +func MessageName(x Message) string { + type xname interface { + XXX_MessageName() string + } + if m, ok := x.(xname); ok { + return m.XXX_MessageName() + } + return revProtoTypes[reflect.TypeOf(x)] +} + +// MessageType returns the message type (pointer to struct) for a named message. +// The type is not guaranteed to implement proto.Message if the name refers to a +// map entry. +func MessageType(name string) reflect.Type { + if t, ok := protoTypedNils[name]; ok { + return reflect.TypeOf(t) + } + return protoMapTypes[name] +} + +// A registry of all linked proto files. +var ( + protoFiles = make(map[string][]byte) // file name => fileDescriptor +) + +// RegisterFile is called from generated code and maps from the +// full file name of a .proto file to its compressed FileDescriptorProto. +func RegisterFile(filename string, fileDescriptor []byte) { + protoFiles[filename] = fileDescriptor +} + +// FileDescriptor returns the compressed FileDescriptorProto for a .proto file. +func FileDescriptor(filename string) []byte { return protoFiles[filename] } diff --git a/vendor/github.com/golang/protobuf/proto/table_marshal.go b/vendor/github.com/golang/protobuf/proto/table_marshal.go new file mode 100644 index 00000000..0f212b30 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/table_marshal.go @@ -0,0 +1,2681 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2016 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
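+
+// Editor's note (not part of the upstream file): this file implements a
+// table-driven marshaler. computeMarshalInfo builds, per generated message
+// type, a marshalInfo holding one marshalFieldInfo per field; each entry
+// carries a sizer and a marshaler function chosen by typeMarshaler from the
+// field's protobuf struct tag, and Size/Marshal then walk those tables.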
+ +package proto + +import ( + "errors" + "fmt" + "math" + "reflect" + "sort" + "strconv" + "strings" + "sync" + "sync/atomic" + "unicode/utf8" +) + +// a sizer takes a pointer to a field and the size of its tag, computes the size of +// the encoded data. +type sizer func(pointer, int) int + +// a marshaler takes a byte slice, a pointer to a field, and its tag (in wire format), +// marshals the field to the end of the slice, returns the slice and error (if any). +type marshaler func(b []byte, ptr pointer, wiretag uint64, deterministic bool) ([]byte, error) + +// marshalInfo is the information used for marshaling a message. +type marshalInfo struct { + typ reflect.Type + fields []*marshalFieldInfo + unrecognized field // offset of XXX_unrecognized + extensions field // offset of XXX_InternalExtensions + v1extensions field // offset of XXX_extensions + sizecache field // offset of XXX_sizecache + initialized int32 // 0 -- only typ is set, 1 -- fully initialized + messageset bool // uses message set wire format + hasmarshaler bool // has custom marshaler + sync.RWMutex // protect extElems map, also for initialization + extElems map[int32]*marshalElemInfo // info of extension elements +} + +// marshalFieldInfo is the information used for marshaling a field of a message. +type marshalFieldInfo struct { + field field + wiretag uint64 // tag in wire format + tagsize int // size of tag in wire format + sizer sizer + marshaler marshaler + isPointer bool + required bool // field is required + name string // name of the field, for error reporting + oneofElems map[reflect.Type]*marshalElemInfo // info of oneof elements +} + +// marshalElemInfo is the information used for marshaling an extension or oneof element. +type marshalElemInfo struct { + wiretag uint64 // tag in wire format + tagsize int // size of tag in wire format + sizer sizer + marshaler marshaler + isptr bool // elem is pointer typed, thus interface of this type is a direct interface (extension only) +} + +var ( + marshalInfoMap = map[reflect.Type]*marshalInfo{} + marshalInfoLock sync.Mutex +) + +// getMarshalInfo returns the information to marshal a given type of message. +// The info it returns may not necessarily initialized. +// t is the type of the message (NOT the pointer to it). +func getMarshalInfo(t reflect.Type) *marshalInfo { + marshalInfoLock.Lock() + u, ok := marshalInfoMap[t] + if !ok { + u = &marshalInfo{typ: t} + marshalInfoMap[t] = u + } + marshalInfoLock.Unlock() + return u +} + +// Size is the entry point from generated code, +// and should be ONLY called by generated code. +// It computes the size of encoded data of msg. +// a is a pointer to a place to store cached marshal info. +func (a *InternalMessageInfo) Size(msg Message) int { + u := getMessageMarshalInfo(msg, a) + ptr := toPointer(&msg) + if ptr.isNil() { + // We get here if msg is a typed nil ((*SomeMessage)(nil)), + // so it satisfies the interface, and msg == nil wouldn't + // catch it. We don't want crash in this case. + return 0 + } + return u.size(ptr) +} + +// Marshal is the entry point from generated code, +// and should be ONLY called by generated code. +// It marshals msg to the end of b. +// a is a pointer to a place to store cached marshal info. 
+func (a *InternalMessageInfo) Marshal(b []byte, msg Message, deterministic bool) ([]byte, error) { + u := getMessageMarshalInfo(msg, a) + ptr := toPointer(&msg) + if ptr.isNil() { + // We get here if msg is a typed nil ((*SomeMessage)(nil)), + // so it satisfies the interface, and msg == nil wouldn't + // catch it. We don't want crash in this case. + return b, ErrNil + } + return u.marshal(b, ptr, deterministic) +} + +func getMessageMarshalInfo(msg interface{}, a *InternalMessageInfo) *marshalInfo { + // u := a.marshal, but atomically. + // We use an atomic here to ensure memory consistency. + u := atomicLoadMarshalInfo(&a.marshal) + if u == nil { + // Get marshal information from type of message. + t := reflect.ValueOf(msg).Type() + if t.Kind() != reflect.Ptr { + panic(fmt.Sprintf("cannot handle non-pointer message type %v", t)) + } + u = getMarshalInfo(t.Elem()) + // Store it in the cache for later users. + // a.marshal = u, but atomically. + atomicStoreMarshalInfo(&a.marshal, u) + } + return u +} + +// size is the main function to compute the size of the encoded data of a message. +// ptr is the pointer to the message. +func (u *marshalInfo) size(ptr pointer) int { + if atomic.LoadInt32(&u.initialized) == 0 { + u.computeMarshalInfo() + } + + // If the message can marshal itself, let it do it, for compatibility. + // NOTE: This is not efficient. + if u.hasmarshaler { + m := ptr.asPointerTo(u.typ).Interface().(Marshaler) + b, _ := m.Marshal() + return len(b) + } + + n := 0 + for _, f := range u.fields { + if f.isPointer && ptr.offset(f.field).getPointer().isNil() { + // nil pointer always marshals to nothing + continue + } + n += f.sizer(ptr.offset(f.field), f.tagsize) + } + if u.extensions.IsValid() { + e := ptr.offset(u.extensions).toExtensions() + if u.messageset { + n += u.sizeMessageSet(e) + } else { + n += u.sizeExtensions(e) + } + } + if u.v1extensions.IsValid() { + m := *ptr.offset(u.v1extensions).toOldExtensions() + n += u.sizeV1Extensions(m) + } + if u.unrecognized.IsValid() { + s := *ptr.offset(u.unrecognized).toBytes() + n += len(s) + } + // cache the result for use in marshal + if u.sizecache.IsValid() { + atomic.StoreInt32(ptr.offset(u.sizecache).toInt32(), int32(n)) + } + return n +} + +// cachedsize gets the size from cache. If there is no cache (i.e. message is not generated), +// fall back to compute the size. +func (u *marshalInfo) cachedsize(ptr pointer) int { + if u.sizecache.IsValid() { + return int(atomic.LoadInt32(ptr.offset(u.sizecache).toInt32())) + } + return u.size(ptr) +} + +// marshal is the main function to marshal a message. It takes a byte slice and appends +// the encoded data to the end of the slice, returns the slice and error (if any). +// ptr is the pointer to the message. +// If deterministic is true, map is marshaled in deterministic order. +func (u *marshalInfo) marshal(b []byte, ptr pointer, deterministic bool) ([]byte, error) { + if atomic.LoadInt32(&u.initialized) == 0 { + u.computeMarshalInfo() + } + + // If the message can marshal itself, let it do it, for compatibility. + // NOTE: This is not efficient. + if u.hasmarshaler { + m := ptr.asPointerTo(u.typ).Interface().(Marshaler) + b1, err := m.Marshal() + b = append(b, b1...) + return b, err + } + + var err, errreq error + // The old marshaler encodes extensions at beginning. 
+ if u.extensions.IsValid() { + e := ptr.offset(u.extensions).toExtensions() + if u.messageset { + b, err = u.appendMessageSet(b, e, deterministic) + } else { + b, err = u.appendExtensions(b, e, deterministic) + } + if err != nil { + return b, err + } + } + if u.v1extensions.IsValid() { + m := *ptr.offset(u.v1extensions).toOldExtensions() + b, err = u.appendV1Extensions(b, m, deterministic) + if err != nil { + return b, err + } + } + for _, f := range u.fields { + if f.required && errreq == nil { + if ptr.offset(f.field).getPointer().isNil() { + // Required field is not set. + // We record the error but keep going, to give a complete marshaling. + errreq = &RequiredNotSetError{f.name} + continue + } + } + if f.isPointer && ptr.offset(f.field).getPointer().isNil() { + // nil pointer always marshals to nothing + continue + } + b, err = f.marshaler(b, ptr.offset(f.field), f.wiretag, deterministic) + if err != nil { + if err1, ok := err.(*RequiredNotSetError); ok { + // Required field in submessage is not set. + // We record the error but keep going, to give a complete marshaling. + if errreq == nil { + errreq = &RequiredNotSetError{f.name + "." + err1.field} + } + continue + } + if err == errRepeatedHasNil { + err = errors.New("proto: repeated field " + f.name + " has nil element") + } + return b, err + } + } + if u.unrecognized.IsValid() { + s := *ptr.offset(u.unrecognized).toBytes() + b = append(b, s...) + } + return b, errreq +} + +// computeMarshalInfo initializes the marshal info. +func (u *marshalInfo) computeMarshalInfo() { + u.Lock() + defer u.Unlock() + if u.initialized != 0 { // non-atomic read is ok as it is protected by the lock + return + } + + t := u.typ + u.unrecognized = invalidField + u.extensions = invalidField + u.v1extensions = invalidField + u.sizecache = invalidField + + // If the message can marshal itself, let it do it, for compatibility. + // NOTE: This is not efficient. + if reflect.PtrTo(t).Implements(marshalerType) { + u.hasmarshaler = true + atomic.StoreInt32(&u.initialized, 1) + return + } + + // get oneof implementers + var oneofImplementers []interface{} + if m, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(oneofMessage); ok { + _, _, _, oneofImplementers = m.XXX_OneofFuncs() + } + + n := t.NumField() + + // deal with XXX fields first + for i := 0; i < t.NumField(); i++ { + f := t.Field(i) + if !strings.HasPrefix(f.Name, "XXX_") { + continue + } + switch f.Name { + case "XXX_sizecache": + u.sizecache = toField(&f) + case "XXX_unrecognized": + u.unrecognized = toField(&f) + case "XXX_InternalExtensions": + u.extensions = toField(&f) + u.messageset = f.Tag.Get("protobuf_messageset") == "1" + case "XXX_extensions": + u.v1extensions = toField(&f) + case "XXX_NoUnkeyedLiteral": + // nothing to do + default: + panic("unknown XXX field: " + f.Name) + } + n-- + } + + // normal fields + fields := make([]marshalFieldInfo, n) // batch allocation + u.fields = make([]*marshalFieldInfo, 0, n) + for i, j := 0, 0; i < t.NumField(); i++ { + f := t.Field(i) + + if strings.HasPrefix(f.Name, "XXX_") { + continue + } + field := &fields[j] + j++ + field.name = f.Name + u.fields = append(u.fields, field) + if f.Tag.Get("protobuf_oneof") != "" { + field.computeOneofFieldInfo(&f, oneofImplementers) + continue + } + if f.Tag.Get("protobuf") == "" { + // field has no tag (not in generated message), ignore it + u.fields = u.fields[:len(u.fields)-1] + j-- + continue + } + field.computeMarshalFieldInfo(&f) + } + + // fields are marshaled in tag order on the wire. 
+ sort.Sort(byTag(u.fields)) + + atomic.StoreInt32(&u.initialized, 1) +} + +// helper for sorting fields by tag +type byTag []*marshalFieldInfo + +func (a byTag) Len() int { return len(a) } +func (a byTag) Swap(i, j int) { a[i], a[j] = a[j], a[i] } +func (a byTag) Less(i, j int) bool { return a[i].wiretag < a[j].wiretag } + +// getExtElemInfo returns the information to marshal an extension element. +// The info it returns is initialized. +func (u *marshalInfo) getExtElemInfo(desc *ExtensionDesc) *marshalElemInfo { + // get from cache first + u.RLock() + e, ok := u.extElems[desc.Field] + u.RUnlock() + if ok { + return e + } + + t := reflect.TypeOf(desc.ExtensionType) // pointer or slice to basic type or struct + tags := strings.Split(desc.Tag, ",") + tag, err := strconv.Atoi(tags[1]) + if err != nil { + panic("tag is not an integer") + } + wt := wiretype(tags[0]) + sizer, marshaler := typeMarshaler(t, tags, false, false) + e = &marshalElemInfo{ + wiretag: uint64(tag)<<3 | wt, + tagsize: SizeVarint(uint64(tag) << 3), + sizer: sizer, + marshaler: marshaler, + isptr: t.Kind() == reflect.Ptr, + } + + // update cache + u.Lock() + if u.extElems == nil { + u.extElems = make(map[int32]*marshalElemInfo) + } + u.extElems[desc.Field] = e + u.Unlock() + return e +} + +// computeMarshalFieldInfo fills up the information to marshal a field. +func (fi *marshalFieldInfo) computeMarshalFieldInfo(f *reflect.StructField) { + // parse protobuf tag of the field. + // tag has format of "bytes,49,opt,name=foo,def=hello!" + tags := strings.Split(f.Tag.Get("protobuf"), ",") + if tags[0] == "" { + return + } + tag, err := strconv.Atoi(tags[1]) + if err != nil { + panic("tag is not an integer") + } + wt := wiretype(tags[0]) + if tags[2] == "req" { + fi.required = true + } + fi.setTag(f, tag, wt) + fi.setMarshaler(f, tags) +} + +func (fi *marshalFieldInfo) computeOneofFieldInfo(f *reflect.StructField, oneofImplementers []interface{}) { + fi.field = toField(f) + fi.wiretag = 1<<31 - 1 // Use a large tag number, make oneofs sorted at the end. This tag will not appear on the wire. + fi.isPointer = true + fi.sizer, fi.marshaler = makeOneOfMarshaler(fi, f) + fi.oneofElems = make(map[reflect.Type]*marshalElemInfo) + + ityp := f.Type // interface type + for _, o := range oneofImplementers { + t := reflect.TypeOf(o) + if !t.Implements(ityp) { + continue + } + sf := t.Elem().Field(0) // oneof implementer is a struct with a single field + tags := strings.Split(sf.Tag.Get("protobuf"), ",") + tag, err := strconv.Atoi(tags[1]) + if err != nil { + panic("tag is not an integer") + } + wt := wiretype(tags[0]) + sizer, marshaler := typeMarshaler(sf.Type, tags, false, true) // oneof should not omit any zero value + fi.oneofElems[t.Elem()] = &marshalElemInfo{ + wiretag: uint64(tag)<<3 | wt, + tagsize: SizeVarint(uint64(tag) << 3), + sizer: sizer, + marshaler: marshaler, + } + } +} + +type oneofMessage interface { + XXX_OneofFuncs() (func(Message, *Buffer) error, func(Message, int, int, *Buffer) (bool, error), func(Message) int, []interface{}) +} + +// wiretype returns the wire encoding of the type. +func wiretype(encoding string) uint64 { + switch encoding { + case "fixed32": + return WireFixed32 + case "fixed64": + return WireFixed64 + case "varint", "zigzag32", "zigzag64": + return WireVarint + case "bytes": + return WireBytes + case "group": + return WireStartGroup + } + panic("unknown wire type " + encoding) +} + +// setTag fills up the tag (in wire format) and its size in the info of a field. 
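+// Worked example (editor's note, values illustrative): for field tag 2 with
+// wire type WireVarint (0), the wire tag is 2<<3|0 = 16, which fits in a
+// single varint byte, so tagsize is 1.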
+func (fi *marshalFieldInfo) setTag(f *reflect.StructField, tag int, wt uint64) { + fi.field = toField(f) + fi.wiretag = uint64(tag)<<3 | wt + fi.tagsize = SizeVarint(uint64(tag) << 3) +} + +// setMarshaler fills up the sizer and marshaler in the info of a field. +func (fi *marshalFieldInfo) setMarshaler(f *reflect.StructField, tags []string) { + switch f.Type.Kind() { + case reflect.Map: + // map field + fi.isPointer = true + fi.sizer, fi.marshaler = makeMapMarshaler(f) + return + case reflect.Ptr, reflect.Slice: + fi.isPointer = true + } + fi.sizer, fi.marshaler = typeMarshaler(f.Type, tags, true, false) +} + +// typeMarshaler returns the sizer and marshaler of a given field. +// t is the type of the field. +// tags is the generated "protobuf" tag of the field. +// If nozero is true, zero value is not marshaled to the wire. +// If oneof is true, it is a oneof field. +func typeMarshaler(t reflect.Type, tags []string, nozero, oneof bool) (sizer, marshaler) { + encoding := tags[0] + + pointer := false + slice := false + if t.Kind() == reflect.Slice && t.Elem().Kind() != reflect.Uint8 { + slice = true + t = t.Elem() + } + if t.Kind() == reflect.Ptr { + pointer = true + t = t.Elem() + } + + packed := false + proto3 := false + for i := 2; i < len(tags); i++ { + if tags[i] == "packed" { + packed = true + } + if tags[i] == "proto3" { + proto3 = true + } + } + + switch t.Kind() { + case reflect.Bool: + if pointer { + return sizeBoolPtr, appendBoolPtr + } + if slice { + if packed { + return sizeBoolPackedSlice, appendBoolPackedSlice + } + return sizeBoolSlice, appendBoolSlice + } + if nozero { + return sizeBoolValueNoZero, appendBoolValueNoZero + } + return sizeBoolValue, appendBoolValue + case reflect.Uint32: + switch encoding { + case "fixed32": + if pointer { + return sizeFixed32Ptr, appendFixed32Ptr + } + if slice { + if packed { + return sizeFixed32PackedSlice, appendFixed32PackedSlice + } + return sizeFixed32Slice, appendFixed32Slice + } + if nozero { + return sizeFixed32ValueNoZero, appendFixed32ValueNoZero + } + return sizeFixed32Value, appendFixed32Value + case "varint": + if pointer { + return sizeVarint32Ptr, appendVarint32Ptr + } + if slice { + if packed { + return sizeVarint32PackedSlice, appendVarint32PackedSlice + } + return sizeVarint32Slice, appendVarint32Slice + } + if nozero { + return sizeVarint32ValueNoZero, appendVarint32ValueNoZero + } + return sizeVarint32Value, appendVarint32Value + } + case reflect.Int32: + switch encoding { + case "fixed32": + if pointer { + return sizeFixedS32Ptr, appendFixedS32Ptr + } + if slice { + if packed { + return sizeFixedS32PackedSlice, appendFixedS32PackedSlice + } + return sizeFixedS32Slice, appendFixedS32Slice + } + if nozero { + return sizeFixedS32ValueNoZero, appendFixedS32ValueNoZero + } + return sizeFixedS32Value, appendFixedS32Value + case "varint": + if pointer { + return sizeVarintS32Ptr, appendVarintS32Ptr + } + if slice { + if packed { + return sizeVarintS32PackedSlice, appendVarintS32PackedSlice + } + return sizeVarintS32Slice, appendVarintS32Slice + } + if nozero { + return sizeVarintS32ValueNoZero, appendVarintS32ValueNoZero + } + return sizeVarintS32Value, appendVarintS32Value + case "zigzag32": + if pointer { + return sizeZigzag32Ptr, appendZigzag32Ptr + } + if slice { + if packed { + return sizeZigzag32PackedSlice, appendZigzag32PackedSlice + } + return sizeZigzag32Slice, appendZigzag32Slice + } + if nozero { + return sizeZigzag32ValueNoZero, appendZigzag32ValueNoZero + } + return sizeZigzag32Value, appendZigzag32Value + } + 
case reflect.Uint64: + switch encoding { + case "fixed64": + if pointer { + return sizeFixed64Ptr, appendFixed64Ptr + } + if slice { + if packed { + return sizeFixed64PackedSlice, appendFixed64PackedSlice + } + return sizeFixed64Slice, appendFixed64Slice + } + if nozero { + return sizeFixed64ValueNoZero, appendFixed64ValueNoZero + } + return sizeFixed64Value, appendFixed64Value + case "varint": + if pointer { + return sizeVarint64Ptr, appendVarint64Ptr + } + if slice { + if packed { + return sizeVarint64PackedSlice, appendVarint64PackedSlice + } + return sizeVarint64Slice, appendVarint64Slice + } + if nozero { + return sizeVarint64ValueNoZero, appendVarint64ValueNoZero + } + return sizeVarint64Value, appendVarint64Value + } + case reflect.Int64: + switch encoding { + case "fixed64": + if pointer { + return sizeFixedS64Ptr, appendFixedS64Ptr + } + if slice { + if packed { + return sizeFixedS64PackedSlice, appendFixedS64PackedSlice + } + return sizeFixedS64Slice, appendFixedS64Slice + } + if nozero { + return sizeFixedS64ValueNoZero, appendFixedS64ValueNoZero + } + return sizeFixedS64Value, appendFixedS64Value + case "varint": + if pointer { + return sizeVarintS64Ptr, appendVarintS64Ptr + } + if slice { + if packed { + return sizeVarintS64PackedSlice, appendVarintS64PackedSlice + } + return sizeVarintS64Slice, appendVarintS64Slice + } + if nozero { + return sizeVarintS64ValueNoZero, appendVarintS64ValueNoZero + } + return sizeVarintS64Value, appendVarintS64Value + case "zigzag64": + if pointer { + return sizeZigzag64Ptr, appendZigzag64Ptr + } + if slice { + if packed { + return sizeZigzag64PackedSlice, appendZigzag64PackedSlice + } + return sizeZigzag64Slice, appendZigzag64Slice + } + if nozero { + return sizeZigzag64ValueNoZero, appendZigzag64ValueNoZero + } + return sizeZigzag64Value, appendZigzag64Value + } + case reflect.Float32: + if pointer { + return sizeFloat32Ptr, appendFloat32Ptr + } + if slice { + if packed { + return sizeFloat32PackedSlice, appendFloat32PackedSlice + } + return sizeFloat32Slice, appendFloat32Slice + } + if nozero { + return sizeFloat32ValueNoZero, appendFloat32ValueNoZero + } + return sizeFloat32Value, appendFloat32Value + case reflect.Float64: + if pointer { + return sizeFloat64Ptr, appendFloat64Ptr + } + if slice { + if packed { + return sizeFloat64PackedSlice, appendFloat64PackedSlice + } + return sizeFloat64Slice, appendFloat64Slice + } + if nozero { + return sizeFloat64ValueNoZero, appendFloat64ValueNoZero + } + return sizeFloat64Value, appendFloat64Value + case reflect.String: + if pointer { + return sizeStringPtr, appendStringPtr + } + if slice { + return sizeStringSlice, appendStringSlice + } + if nozero { + return sizeStringValueNoZero, appendStringValueNoZero + } + return sizeStringValue, appendStringValue + case reflect.Slice: + if slice { + return sizeBytesSlice, appendBytesSlice + } + if oneof { + // Oneof bytes field may also have "proto3" tag. + // We want to marshal it as a oneof field. Do this + // check before the proto3 check. 
+ return sizeBytesOneof, appendBytesOneof + } + if proto3 { + return sizeBytes3, appendBytes3 + } + return sizeBytes, appendBytes + case reflect.Struct: + switch encoding { + case "group": + if slice { + return makeGroupSliceMarshaler(getMarshalInfo(t)) + } + return makeGroupMarshaler(getMarshalInfo(t)) + case "bytes": + if slice { + return makeMessageSliceMarshaler(getMarshalInfo(t)) + } + return makeMessageMarshaler(getMarshalInfo(t)) + } + } + panic(fmt.Sprintf("unknown or mismatched type: type: %v, wire type: %v", t, encoding)) +} + +// Below are functions to size/marshal a specific type of a field. +// They are stored in the field's info, and called by function pointers. +// They have type sizer or marshaler. + +func sizeFixed32Value(_ pointer, tagsize int) int { + return 4 + tagsize +} +func sizeFixed32ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toUint32() + if v == 0 { + return 0 + } + return 4 + tagsize +} +func sizeFixed32Ptr(ptr pointer, tagsize int) int { + p := *ptr.toUint32Ptr() + if p == nil { + return 0 + } + return 4 + tagsize +} +func sizeFixed32Slice(ptr pointer, tagsize int) int { + s := *ptr.toUint32Slice() + return (4 + tagsize) * len(s) +} +func sizeFixed32PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toUint32Slice() + if len(s) == 0 { + return 0 + } + return 4*len(s) + SizeVarint(uint64(4*len(s))) + tagsize +} +func sizeFixedS32Value(_ pointer, tagsize int) int { + return 4 + tagsize +} +func sizeFixedS32ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toInt32() + if v == 0 { + return 0 + } + return 4 + tagsize +} +func sizeFixedS32Ptr(ptr pointer, tagsize int) int { + p := ptr.getInt32Ptr() + if p == nil { + return 0 + } + return 4 + tagsize +} +func sizeFixedS32Slice(ptr pointer, tagsize int) int { + s := ptr.getInt32Slice() + return (4 + tagsize) * len(s) +} +func sizeFixedS32PackedSlice(ptr pointer, tagsize int) int { + s := ptr.getInt32Slice() + if len(s) == 0 { + return 0 + } + return 4*len(s) + SizeVarint(uint64(4*len(s))) + tagsize +} +func sizeFloat32Value(_ pointer, tagsize int) int { + return 4 + tagsize +} +func sizeFloat32ValueNoZero(ptr pointer, tagsize int) int { + v := math.Float32bits(*ptr.toFloat32()) + if v == 0 { + return 0 + } + return 4 + tagsize +} +func sizeFloat32Ptr(ptr pointer, tagsize int) int { + p := *ptr.toFloat32Ptr() + if p == nil { + return 0 + } + return 4 + tagsize +} +func sizeFloat32Slice(ptr pointer, tagsize int) int { + s := *ptr.toFloat32Slice() + return (4 + tagsize) * len(s) +} +func sizeFloat32PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toFloat32Slice() + if len(s) == 0 { + return 0 + } + return 4*len(s) + SizeVarint(uint64(4*len(s))) + tagsize +} +func sizeFixed64Value(_ pointer, tagsize int) int { + return 8 + tagsize +} +func sizeFixed64ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toUint64() + if v == 0 { + return 0 + } + return 8 + tagsize +} +func sizeFixed64Ptr(ptr pointer, tagsize int) int { + p := *ptr.toUint64Ptr() + if p == nil { + return 0 + } + return 8 + tagsize +} +func sizeFixed64Slice(ptr pointer, tagsize int) int { + s := *ptr.toUint64Slice() + return (8 + tagsize) * len(s) +} +func sizeFixed64PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toUint64Slice() + if len(s) == 0 { + return 0 + } + return 8*len(s) + SizeVarint(uint64(8*len(s))) + tagsize +} +func sizeFixedS64Value(_ pointer, tagsize int) int { + return 8 + tagsize +} +func sizeFixedS64ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toInt64() + if v == 0 { + return 0 + } + return 8 
+ tagsize +} +func sizeFixedS64Ptr(ptr pointer, tagsize int) int { + p := *ptr.toInt64Ptr() + if p == nil { + return 0 + } + return 8 + tagsize +} +func sizeFixedS64Slice(ptr pointer, tagsize int) int { + s := *ptr.toInt64Slice() + return (8 + tagsize) * len(s) +} +func sizeFixedS64PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toInt64Slice() + if len(s) == 0 { + return 0 + } + return 8*len(s) + SizeVarint(uint64(8*len(s))) + tagsize +} +func sizeFloat64Value(_ pointer, tagsize int) int { + return 8 + tagsize +} +func sizeFloat64ValueNoZero(ptr pointer, tagsize int) int { + v := math.Float64bits(*ptr.toFloat64()) + if v == 0 { + return 0 + } + return 8 + tagsize +} +func sizeFloat64Ptr(ptr pointer, tagsize int) int { + p := *ptr.toFloat64Ptr() + if p == nil { + return 0 + } + return 8 + tagsize +} +func sizeFloat64Slice(ptr pointer, tagsize int) int { + s := *ptr.toFloat64Slice() + return (8 + tagsize) * len(s) +} +func sizeFloat64PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toFloat64Slice() + if len(s) == 0 { + return 0 + } + return 8*len(s) + SizeVarint(uint64(8*len(s))) + tagsize +} +func sizeVarint32Value(ptr pointer, tagsize int) int { + v := *ptr.toUint32() + return SizeVarint(uint64(v)) + tagsize +} +func sizeVarint32ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toUint32() + if v == 0 { + return 0 + } + return SizeVarint(uint64(v)) + tagsize +} +func sizeVarint32Ptr(ptr pointer, tagsize int) int { + p := *ptr.toUint32Ptr() + if p == nil { + return 0 + } + return SizeVarint(uint64(*p)) + tagsize +} +func sizeVarint32Slice(ptr pointer, tagsize int) int { + s := *ptr.toUint32Slice() + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + tagsize + } + return n +} +func sizeVarint32PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toUint32Slice() + if len(s) == 0 { + return 0 + } + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + } + return n + SizeVarint(uint64(n)) + tagsize +} +func sizeVarintS32Value(ptr pointer, tagsize int) int { + v := *ptr.toInt32() + return SizeVarint(uint64(v)) + tagsize +} +func sizeVarintS32ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toInt32() + if v == 0 { + return 0 + } + return SizeVarint(uint64(v)) + tagsize +} +func sizeVarintS32Ptr(ptr pointer, tagsize int) int { + p := ptr.getInt32Ptr() + if p == nil { + return 0 + } + return SizeVarint(uint64(*p)) + tagsize +} +func sizeVarintS32Slice(ptr pointer, tagsize int) int { + s := ptr.getInt32Slice() + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + tagsize + } + return n +} +func sizeVarintS32PackedSlice(ptr pointer, tagsize int) int { + s := ptr.getInt32Slice() + if len(s) == 0 { + return 0 + } + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + } + return n + SizeVarint(uint64(n)) + tagsize +} +func sizeVarint64Value(ptr pointer, tagsize int) int { + v := *ptr.toUint64() + return SizeVarint(v) + tagsize +} +func sizeVarint64ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toUint64() + if v == 0 { + return 0 + } + return SizeVarint(v) + tagsize +} +func sizeVarint64Ptr(ptr pointer, tagsize int) int { + p := *ptr.toUint64Ptr() + if p == nil { + return 0 + } + return SizeVarint(*p) + tagsize +} +func sizeVarint64Slice(ptr pointer, tagsize int) int { + s := *ptr.toUint64Slice() + n := 0 + for _, v := range s { + n += SizeVarint(v) + tagsize + } + return n +} +func sizeVarint64PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toUint64Slice() + if len(s) == 0 { + return 0 + } + n := 0 + for _, v := range s 
{ + n += SizeVarint(v) + } + return n + SizeVarint(uint64(n)) + tagsize +} +func sizeVarintS64Value(ptr pointer, tagsize int) int { + v := *ptr.toInt64() + return SizeVarint(uint64(v)) + tagsize +} +func sizeVarintS64ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toInt64() + if v == 0 { + return 0 + } + return SizeVarint(uint64(v)) + tagsize +} +func sizeVarintS64Ptr(ptr pointer, tagsize int) int { + p := *ptr.toInt64Ptr() + if p == nil { + return 0 + } + return SizeVarint(uint64(*p)) + tagsize +} +func sizeVarintS64Slice(ptr pointer, tagsize int) int { + s := *ptr.toInt64Slice() + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + tagsize + } + return n +} +func sizeVarintS64PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toInt64Slice() + if len(s) == 0 { + return 0 + } + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + } + return n + SizeVarint(uint64(n)) + tagsize +} +func sizeZigzag32Value(ptr pointer, tagsize int) int { + v := *ptr.toInt32() + return SizeVarint(uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + tagsize +} +func sizeZigzag32ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toInt32() + if v == 0 { + return 0 + } + return SizeVarint(uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + tagsize +} +func sizeZigzag32Ptr(ptr pointer, tagsize int) int { + p := ptr.getInt32Ptr() + if p == nil { + return 0 + } + v := *p + return SizeVarint(uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + tagsize +} +func sizeZigzag32Slice(ptr pointer, tagsize int) int { + s := ptr.getInt32Slice() + n := 0 + for _, v := range s { + n += SizeVarint(uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + tagsize + } + return n +} +func sizeZigzag32PackedSlice(ptr pointer, tagsize int) int { + s := ptr.getInt32Slice() + if len(s) == 0 { + return 0 + } + n := 0 + for _, v := range s { + n += SizeVarint(uint64((uint32(v) << 1) ^ uint32((int32(v) >> 31)))) + } + return n + SizeVarint(uint64(n)) + tagsize +} +func sizeZigzag64Value(ptr pointer, tagsize int) int { + v := *ptr.toInt64() + return SizeVarint(uint64(v<<1)^uint64((int64(v)>>63))) + tagsize +} +func sizeZigzag64ValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toInt64() + if v == 0 { + return 0 + } + return SizeVarint(uint64(v<<1)^uint64((int64(v)>>63))) + tagsize +} +func sizeZigzag64Ptr(ptr pointer, tagsize int) int { + p := *ptr.toInt64Ptr() + if p == nil { + return 0 + } + v := *p + return SizeVarint(uint64(v<<1)^uint64((int64(v)>>63))) + tagsize +} +func sizeZigzag64Slice(ptr pointer, tagsize int) int { + s := *ptr.toInt64Slice() + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v<<1)^uint64((int64(v)>>63))) + tagsize + } + return n +} +func sizeZigzag64PackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toInt64Slice() + if len(s) == 0 { + return 0 + } + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v<<1) ^ uint64((int64(v) >> 63))) + } + return n + SizeVarint(uint64(n)) + tagsize +} +func sizeBoolValue(_ pointer, tagsize int) int { + return 1 + tagsize +} +func sizeBoolValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toBool() + if !v { + return 0 + } + return 1 + tagsize +} +func sizeBoolPtr(ptr pointer, tagsize int) int { + p := *ptr.toBoolPtr() + if p == nil { + return 0 + } + return 1 + tagsize +} +func sizeBoolSlice(ptr pointer, tagsize int) int { + s := *ptr.toBoolSlice() + return (1 + tagsize) * len(s) +} +func sizeBoolPackedSlice(ptr pointer, tagsize int) int { + s := *ptr.toBoolSlice() + if len(s) == 0 { + return 0 + } + return len(s) + 
SizeVarint(uint64(len(s))) + tagsize +} +func sizeStringValue(ptr pointer, tagsize int) int { + v := *ptr.toString() + return len(v) + SizeVarint(uint64(len(v))) + tagsize +} +func sizeStringValueNoZero(ptr pointer, tagsize int) int { + v := *ptr.toString() + if v == "" { + return 0 + } + return len(v) + SizeVarint(uint64(len(v))) + tagsize +} +func sizeStringPtr(ptr pointer, tagsize int) int { + p := *ptr.toStringPtr() + if p == nil { + return 0 + } + v := *p + return len(v) + SizeVarint(uint64(len(v))) + tagsize +} +func sizeStringSlice(ptr pointer, tagsize int) int { + s := *ptr.toStringSlice() + n := 0 + for _, v := range s { + n += len(v) + SizeVarint(uint64(len(v))) + tagsize + } + return n +} +func sizeBytes(ptr pointer, tagsize int) int { + v := *ptr.toBytes() + if v == nil { + return 0 + } + return len(v) + SizeVarint(uint64(len(v))) + tagsize +} +func sizeBytes3(ptr pointer, tagsize int) int { + v := *ptr.toBytes() + if len(v) == 0 { + return 0 + } + return len(v) + SizeVarint(uint64(len(v))) + tagsize +} +func sizeBytesOneof(ptr pointer, tagsize int) int { + v := *ptr.toBytes() + return len(v) + SizeVarint(uint64(len(v))) + tagsize +} +func sizeBytesSlice(ptr pointer, tagsize int) int { + s := *ptr.toBytesSlice() + n := 0 + for _, v := range s { + n += len(v) + SizeVarint(uint64(len(v))) + tagsize + } + return n +} + +// appendFixed32 appends an encoded fixed32 to b. +func appendFixed32(b []byte, v uint32) []byte { + b = append(b, + byte(v), + byte(v>>8), + byte(v>>16), + byte(v>>24)) + return b +} + +// appendFixed64 appends an encoded fixed64 to b. +func appendFixed64(b []byte, v uint64) []byte { + b = append(b, + byte(v), + byte(v>>8), + byte(v>>16), + byte(v>>24), + byte(v>>32), + byte(v>>40), + byte(v>>48), + byte(v>>56)) + return b +} + +// appendVarint appends an encoded varint to b. +func appendVarint(b []byte, v uint64) []byte { + // TODO: make 1-byte (maybe 2-byte) case inline-able, once we + // have non-leaf inliner. 
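+	// Worked example (editor's note): v = 300 (binary 1_0010_1100) is
+	// written low 7 bits first with the high bit as a continuation flag,
+	// so the v < 1<<14 case below emits the two bytes 0xAC, 0x02.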
+ switch { + case v < 1<<7: + b = append(b, byte(v)) + case v < 1<<14: + b = append(b, + byte(v&0x7f|0x80), + byte(v>>7)) + case v < 1<<21: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte(v>>14)) + case v < 1<<28: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte(v>>21)) + case v < 1<<35: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte((v>>21)&0x7f|0x80), + byte(v>>28)) + case v < 1<<42: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte((v>>21)&0x7f|0x80), + byte((v>>28)&0x7f|0x80), + byte(v>>35)) + case v < 1<<49: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte((v>>21)&0x7f|0x80), + byte((v>>28)&0x7f|0x80), + byte((v>>35)&0x7f|0x80), + byte(v>>42)) + case v < 1<<56: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte((v>>21)&0x7f|0x80), + byte((v>>28)&0x7f|0x80), + byte((v>>35)&0x7f|0x80), + byte((v>>42)&0x7f|0x80), + byte(v>>49)) + case v < 1<<63: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte((v>>21)&0x7f|0x80), + byte((v>>28)&0x7f|0x80), + byte((v>>35)&0x7f|0x80), + byte((v>>42)&0x7f|0x80), + byte((v>>49)&0x7f|0x80), + byte(v>>56)) + default: + b = append(b, + byte(v&0x7f|0x80), + byte((v>>7)&0x7f|0x80), + byte((v>>14)&0x7f|0x80), + byte((v>>21)&0x7f|0x80), + byte((v>>28)&0x7f|0x80), + byte((v>>35)&0x7f|0x80), + byte((v>>42)&0x7f|0x80), + byte((v>>49)&0x7f|0x80), + byte((v>>56)&0x7f|0x80), + 1) + } + return b +} + +func appendFixed32Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint32() + b = appendVarint(b, wiretag) + b = appendFixed32(b, v) + return b, nil +} +func appendFixed32ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint32() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed32(b, v) + return b, nil +} +func appendFixed32Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toUint32Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed32(b, *p) + return b, nil +} +func appendFixed32Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint32Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendFixed32(b, v) + } + return b, nil +} +func appendFixed32PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint32Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(4*len(s))) + for _, v := range s { + b = appendFixed32(b, v) + } + return b, nil +} +func appendFixedS32Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt32() + b = appendVarint(b, wiretag) + b = appendFixed32(b, uint32(v)) + return b, nil +} +func appendFixedS32ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt32() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed32(b, uint32(v)) + return b, nil +} +func appendFixedS32Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := ptr.getInt32Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed32(b, uint32(*p)) + return b, nil +} +func appendFixedS32Slice(b []byte, ptr pointer, 
wiretag uint64, _ bool) ([]byte, error) { + s := ptr.getInt32Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendFixed32(b, uint32(v)) + } + return b, nil +} +func appendFixedS32PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := ptr.getInt32Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(4*len(s))) + for _, v := range s { + b = appendFixed32(b, uint32(v)) + } + return b, nil +} +func appendFloat32Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := math.Float32bits(*ptr.toFloat32()) + b = appendVarint(b, wiretag) + b = appendFixed32(b, v) + return b, nil +} +func appendFloat32ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := math.Float32bits(*ptr.toFloat32()) + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed32(b, v) + return b, nil +} +func appendFloat32Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toFloat32Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed32(b, math.Float32bits(*p)) + return b, nil +} +func appendFloat32Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toFloat32Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendFixed32(b, math.Float32bits(v)) + } + return b, nil +} +func appendFloat32PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toFloat32Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(4*len(s))) + for _, v := range s { + b = appendFixed32(b, math.Float32bits(v)) + } + return b, nil +} +func appendFixed64Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint64() + b = appendVarint(b, wiretag) + b = appendFixed64(b, v) + return b, nil +} +func appendFixed64ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint64() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed64(b, v) + return b, nil +} +func appendFixed64Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toUint64Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed64(b, *p) + return b, nil +} +func appendFixed64Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint64Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendFixed64(b, v) + } + return b, nil +} +func appendFixed64PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint64Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(8*len(s))) + for _, v := range s { + b = appendFixed64(b, v) + } + return b, nil +} +func appendFixedS64Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt64() + b = appendVarint(b, wiretag) + b = appendFixed64(b, uint64(v)) + return b, nil +} +func appendFixedS64ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt64() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed64(b, uint64(v)) + return b, nil +} +func appendFixedS64Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toInt64Ptr() + if p == nil { + return b, 
nil + } + b = appendVarint(b, wiretag) + b = appendFixed64(b, uint64(*p)) + return b, nil +} +func appendFixedS64Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toInt64Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendFixed64(b, uint64(v)) + } + return b, nil +} +func appendFixedS64PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toInt64Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(8*len(s))) + for _, v := range s { + b = appendFixed64(b, uint64(v)) + } + return b, nil +} +func appendFloat64Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := math.Float64bits(*ptr.toFloat64()) + b = appendVarint(b, wiretag) + b = appendFixed64(b, v) + return b, nil +} +func appendFloat64ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := math.Float64bits(*ptr.toFloat64()) + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed64(b, v) + return b, nil +} +func appendFloat64Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toFloat64Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendFixed64(b, math.Float64bits(*p)) + return b, nil +} +func appendFloat64Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toFloat64Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendFixed64(b, math.Float64bits(v)) + } + return b, nil +} +func appendFloat64PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toFloat64Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(8*len(s))) + for _, v := range s { + b = appendFixed64(b, math.Float64bits(v)) + } + return b, nil +} +func appendVarint32Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint32() + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + return b, nil +} +func appendVarint32ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint32() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + return b, nil +} +func appendVarint32Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toUint32Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(*p)) + return b, nil +} +func appendVarint32Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint32Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + } + return b, nil +} +func appendVarint32PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint32Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + // compute size + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + } + b = appendVarint(b, uint64(n)) + for _, v := range s { + b = appendVarint(b, uint64(v)) + } + return b, nil +} +func appendVarintS32Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt32() + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + return b, nil +} +func appendVarintS32ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt32() 
+ if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + return b, nil +} +func appendVarintS32Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := ptr.getInt32Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(*p)) + return b, nil +} +func appendVarintS32Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := ptr.getInt32Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + } + return b, nil +} +func appendVarintS32PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := ptr.getInt32Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + // compute size + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + } + b = appendVarint(b, uint64(n)) + for _, v := range s { + b = appendVarint(b, uint64(v)) + } + return b, nil +} +func appendVarint64Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint64() + b = appendVarint(b, wiretag) + b = appendVarint(b, v) + return b, nil +} +func appendVarint64ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toUint64() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, v) + return b, nil +} +func appendVarint64Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toUint64Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, *p) + return b, nil +} +func appendVarint64Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint64Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, v) + } + return b, nil +} +func appendVarint64PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toUint64Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + // compute size + n := 0 + for _, v := range s { + n += SizeVarint(v) + } + b = appendVarint(b, uint64(n)) + for _, v := range s { + b = appendVarint(b, v) + } + return b, nil +} +func appendVarintS64Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt64() + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + return b, nil +} +func appendVarintS64ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt64() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + return b, nil +} +func appendVarintS64Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toInt64Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(*p)) + return b, nil +} +func appendVarintS64Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toInt64Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v)) + } + return b, nil +} +func appendVarintS64PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toInt64Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + // compute size + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v)) + } + b = appendVarint(b, uint64(n)) + for _, v := range s { + b = appendVarint(b, uint64(v)) + } + return b, nil +} +func 
appendZigzag32Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt32() + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + return b, nil +} +func appendZigzag32ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt32() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + return b, nil +} +func appendZigzag32Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := ptr.getInt32Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + v := *p + b = appendVarint(b, uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + return b, nil +} +func appendZigzag32Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := ptr.getInt32Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + } + return b, nil +} +func appendZigzag32PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := ptr.getInt32Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + // compute size + n := 0 + for _, v := range s { + n += SizeVarint(uint64((uint32(v) << 1) ^ uint32((int32(v) >> 31)))) + } + b = appendVarint(b, uint64(n)) + for _, v := range s { + b = appendVarint(b, uint64((uint32(v)<<1)^uint32((int32(v)>>31)))) + } + return b, nil +} +func appendZigzag64Value(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt64() + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v<<1)^uint64((int64(v)>>63))) + return b, nil +} +func appendZigzag64ValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toInt64() + if v == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v<<1)^uint64((int64(v)>>63))) + return b, nil +} +func appendZigzag64Ptr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toInt64Ptr() + if p == nil { + return b, nil + } + b = appendVarint(b, wiretag) + v := *p + b = appendVarint(b, uint64(v<<1)^uint64((int64(v)>>63))) + return b, nil +} +func appendZigzag64Slice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toInt64Slice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(v<<1)^uint64((int64(v)>>63))) + } + return b, nil +} +func appendZigzag64PackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toInt64Slice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + // compute size + n := 0 + for _, v := range s { + n += SizeVarint(uint64(v<<1) ^ uint64((int64(v) >> 63))) + } + b = appendVarint(b, uint64(n)) + for _, v := range s { + b = appendVarint(b, uint64(v<<1)^uint64((int64(v)>>63))) + } + return b, nil +} +func appendBoolValue(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toBool() + b = appendVarint(b, wiretag) + if v { + b = append(b, 1) + } else { + b = append(b, 0) + } + return b, nil +} +func appendBoolValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toBool() + if !v { + return b, nil + } + b = appendVarint(b, wiretag) + b = append(b, 1) + return b, nil +} + +func appendBoolPtr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toBoolPtr() + if p == nil { + 
return b, nil + } + b = appendVarint(b, wiretag) + if *p { + b = append(b, 1) + } else { + b = append(b, 0) + } + return b, nil +} +func appendBoolSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toBoolSlice() + for _, v := range s { + b = appendVarint(b, wiretag) + if v { + b = append(b, 1) + } else { + b = append(b, 0) + } + } + return b, nil +} +func appendBoolPackedSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toBoolSlice() + if len(s) == 0 { + return b, nil + } + b = appendVarint(b, wiretag&^7|WireBytes) + b = appendVarint(b, uint64(len(s))) + for _, v := range s { + if v { + b = append(b, 1) + } else { + b = append(b, 0) + } + } + return b, nil +} +func appendStringValue(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toString() + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + return b, nil +} +func appendStringValueNoZero(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toString() + if v == "" { + return b, nil + } + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + return b, nil +} +func appendStringPtr(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + p := *ptr.toStringPtr() + if p == nil { + return b, nil + } + v := *p + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + return b, nil +} +func appendStringSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toStringSlice() + for _, v := range s { + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + } + return b, nil +} +func appendBytes(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toBytes() + if v == nil { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + return b, nil +} +func appendBytes3(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toBytes() + if len(v) == 0 { + return b, nil + } + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + return b, nil +} +func appendBytesOneof(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + v := *ptr.toBytes() + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + return b, nil +} +func appendBytesSlice(b []byte, ptr pointer, wiretag uint64, _ bool) ([]byte, error) { + s := *ptr.toBytesSlice() + for _, v := range s { + b = appendVarint(b, wiretag) + b = appendVarint(b, uint64(len(v))) + b = append(b, v...) + } + return b, nil +} + +// makeGroupMarshaler returns the sizer and marshaler for a group. +// u is the marshal info of the underlying message. 
+func makeGroupMarshaler(u *marshalInfo) (sizer, marshaler) { + return func(ptr pointer, tagsize int) int { + p := ptr.getPointer() + if p.isNil() { + return 0 + } + return u.size(p) + 2*tagsize + }, + func(b []byte, ptr pointer, wiretag uint64, deterministic bool) ([]byte, error) { + p := ptr.getPointer() + if p.isNil() { + return b, nil + } + var err error + b = appendVarint(b, wiretag) // start group + b, err = u.marshal(b, p, deterministic) + b = appendVarint(b, wiretag+(WireEndGroup-WireStartGroup)) // end group + return b, err + } +} + +// makeGroupSliceMarshaler returns the sizer and marshaler for a group slice. +// u is the marshal info of the underlying message. +func makeGroupSliceMarshaler(u *marshalInfo) (sizer, marshaler) { + return func(ptr pointer, tagsize int) int { + s := ptr.getPointerSlice() + n := 0 + for _, v := range s { + if v.isNil() { + continue + } + n += u.size(v) + 2*tagsize + } + return n + }, + func(b []byte, ptr pointer, wiretag uint64, deterministic bool) ([]byte, error) { + s := ptr.getPointerSlice() + var err, errreq error + for _, v := range s { + if v.isNil() { + return b, errRepeatedHasNil + } + b = appendVarint(b, wiretag) // start group + b, err = u.marshal(b, v, deterministic) + b = appendVarint(b, wiretag+(WireEndGroup-WireStartGroup)) // end group + if err != nil { + if _, ok := err.(*RequiredNotSetError); ok { + // Required field in submessage is not set. + // We record the error but keep going, to give a complete marshaling. + if errreq == nil { + errreq = err + } + continue + } + if err == ErrNil { + err = errRepeatedHasNil + } + return b, err + } + } + return b, errreq + } +} + +// makeMessageMarshaler returns the sizer and marshaler for a message field. +// u is the marshal info of the message. +func makeMessageMarshaler(u *marshalInfo) (sizer, marshaler) { + return func(ptr pointer, tagsize int) int { + p := ptr.getPointer() + if p.isNil() { + return 0 + } + siz := u.size(p) + return siz + SizeVarint(uint64(siz)) + tagsize + }, + func(b []byte, ptr pointer, wiretag uint64, deterministic bool) ([]byte, error) { + p := ptr.getPointer() + if p.isNil() { + return b, nil + } + b = appendVarint(b, wiretag) + siz := u.cachedsize(p) + b = appendVarint(b, uint64(siz)) + return u.marshal(b, p, deterministic) + } +} + +// makeMessageSliceMarshaler returns the sizer and marshaler for a message slice. +// u is the marshal info of the message. +func makeMessageSliceMarshaler(u *marshalInfo) (sizer, marshaler) { + return func(ptr pointer, tagsize int) int { + s := ptr.getPointerSlice() + n := 0 + for _, v := range s { + if v.isNil() { + continue + } + siz := u.size(v) + n += siz + SizeVarint(uint64(siz)) + tagsize + } + return n + }, + func(b []byte, ptr pointer, wiretag uint64, deterministic bool) ([]byte, error) { + s := ptr.getPointerSlice() + var err, errreq error + for _, v := range s { + if v.isNil() { + return b, errRepeatedHasNil + } + b = appendVarint(b, wiretag) + siz := u.cachedsize(v) + b = appendVarint(b, uint64(siz)) + b, err = u.marshal(b, v, deterministic) + + if err != nil { + if _, ok := err.(*RequiredNotSetError); ok { + // Required field in submessage is not set. + // We record the error but keep going, to give a complete marshaling. + if errreq == nil { + errreq = err + } + continue + } + if err == ErrNil { + err = errRepeatedHasNil + } + return b, err + } + } + return b, errreq + } +} + +// makeMapMarshaler returns the sizer and marshaler for a map field. +// f is the pointer to the reflect data structure of the field. 
+func makeMapMarshaler(f *reflect.StructField) (sizer, marshaler) { + // figure out key and value type + t := f.Type + keyType := t.Key() + valType := t.Elem() + keyTags := strings.Split(f.Tag.Get("protobuf_key"), ",") + valTags := strings.Split(f.Tag.Get("protobuf_val"), ",") + keySizer, keyMarshaler := typeMarshaler(keyType, keyTags, false, false) // don't omit zero value in map + valSizer, valMarshaler := typeMarshaler(valType, valTags, false, false) // don't omit zero value in map + keyWireTag := 1<<3 | wiretype(keyTags[0]) + valWireTag := 2<<3 | wiretype(valTags[0]) + + // We create an interface to get the addresses of the map key and value. + // If value is pointer-typed, the interface is a direct interface, the + // idata itself is the value. Otherwise, the idata is the pointer to the + // value. + // Key cannot be pointer-typed. + valIsPtr := valType.Kind() == reflect.Ptr + return func(ptr pointer, tagsize int) int { + m := ptr.asPointerTo(t).Elem() // the map + n := 0 + for _, k := range m.MapKeys() { + ki := k.Interface() + vi := m.MapIndex(k).Interface() + kaddr := toAddrPointer(&ki, false) // pointer to key + vaddr := toAddrPointer(&vi, valIsPtr) // pointer to value + siz := keySizer(kaddr, 1) + valSizer(vaddr, 1) // tag of key = 1 (size=1), tag of val = 2 (size=1) + n += siz + SizeVarint(uint64(siz)) + tagsize + } + return n + }, + func(b []byte, ptr pointer, tag uint64, deterministic bool) ([]byte, error) { + m := ptr.asPointerTo(t).Elem() // the map + var err error + keys := m.MapKeys() + if len(keys) > 1 && deterministic { + sort.Sort(mapKeys(keys)) + } + for _, k := range keys { + ki := k.Interface() + vi := m.MapIndex(k).Interface() + kaddr := toAddrPointer(&ki, false) // pointer to key + vaddr := toAddrPointer(&vi, valIsPtr) // pointer to value + b = appendVarint(b, tag) + siz := keySizer(kaddr, 1) + valSizer(vaddr, 1) // tag of key = 1 (size=1), tag of val = 2 (size=1) + b = appendVarint(b, uint64(siz)) + b, err = keyMarshaler(b, kaddr, keyWireTag, deterministic) + if err != nil { + return b, err + } + b, err = valMarshaler(b, vaddr, valWireTag, deterministic) + if err != nil && err != ErrNil { // allow nil value in map + return b, err + } + } + return b, nil + } +} + +// makeOneOfMarshaler returns the sizer and marshaler for a oneof field. +// fi is the marshal info of the field. +// f is the pointer to the reflect data structure of the field. +func makeOneOfMarshaler(fi *marshalFieldInfo, f *reflect.StructField) (sizer, marshaler) { + // Oneof field is an interface. We need to get the actual data type on the fly. + t := f.Type + return func(ptr pointer, _ int) int { + p := ptr.getInterfacePointer() + if p.isNil() { + return 0 + } + v := ptr.asPointerTo(t).Elem().Elem().Elem() // *interface -> interface -> *struct -> struct + telem := v.Type() + e := fi.oneofElems[telem] + return e.sizer(p, e.tagsize) + }, + func(b []byte, ptr pointer, _ uint64, deterministic bool) ([]byte, error) { + p := ptr.getInterfacePointer() + if p.isNil() { + return b, nil + } + v := ptr.asPointerTo(t).Elem().Elem().Elem() // *interface -> interface -> *struct -> struct + telem := v.Type() + if telem.Field(0).Type.Kind() == reflect.Ptr && p.getPointer().isNil() { + return b, errOneofHasNil + } + e := fi.oneofElems[telem] + return e.marshaler(b, p, e.wiretag, deterministic) + } +} + +// sizeExtensions computes the size of encoded data for a XXX_InternalExtensions field. 
+func (u *marshalInfo) sizeExtensions(ext *XXX_InternalExtensions) int { + m, mu := ext.extensionsRead() + if m == nil { + return 0 + } + mu.Lock() + + n := 0 + for _, e := range m { + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + n += len(e.enc) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + n += ei.sizer(p, ei.tagsize) + } + mu.Unlock() + return n +} + +// appendExtensions marshals a XXX_InternalExtensions field to the end of byte slice b. +func (u *marshalInfo) appendExtensions(b []byte, ext *XXX_InternalExtensions, deterministic bool) ([]byte, error) { + m, mu := ext.extensionsRead() + if m == nil { + return b, nil + } + mu.Lock() + defer mu.Unlock() + + var err error + + // Fast-path for common cases: zero or one extensions. + // Don't bother sorting the keys. + if len(m) <= 1 { + for _, e := range m { + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + b = append(b, e.enc...) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + b, err = ei.marshaler(b, p, ei.wiretag, deterministic) + if err != nil { + return b, err + } + } + return b, nil + } + + // Sort the keys to provide a deterministic encoding. + // Not sure this is required, but the old code does it. + keys := make([]int, 0, len(m)) + for k := range m { + keys = append(keys, int(k)) + } + sort.Ints(keys) + + for _, k := range keys { + e := m[int32(k)] + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + b = append(b, e.enc...) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + b, err = ei.marshaler(b, p, ei.wiretag, deterministic) + if err != nil { + return b, err + } + } + return b, nil +} + +// message set format is: +// message MessageSet { +// repeated group Item = 1 { +// required int32 type_id = 2; +// required string message = 3; +// }; +// } + +// sizeMessageSet computes the size of encoded data for a XXX_InternalExtensions field +// in message set format (above). +func (u *marshalInfo) sizeMessageSet(ext *XXX_InternalExtensions) int { + m, mu := ext.extensionsRead() + if m == nil { + return 0 + } + mu.Lock() + + n := 0 + for id, e := range m { + n += 2 // start group, end group. tag = 1 (size=1) + n += SizeVarint(uint64(id)) + 1 // type_id, tag = 2 (size=1) + + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + msgWithLen := skipVarint(e.enc) // skip old tag, but leave the length varint + siz := len(msgWithLen) + n += siz + 1 // message, tag = 3 (size=1) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. 
+ + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + n += ei.sizer(p, 1) // message, tag = 3 (size=1) + } + mu.Unlock() + return n +} + +// appendMessageSet marshals a XXX_InternalExtensions field in message set format (above) +// to the end of byte slice b. +func (u *marshalInfo) appendMessageSet(b []byte, ext *XXX_InternalExtensions, deterministic bool) ([]byte, error) { + m, mu := ext.extensionsRead() + if m == nil { + return b, nil + } + mu.Lock() + defer mu.Unlock() + + var err error + + // Fast-path for common cases: zero or one extensions. + // Don't bother sorting the keys. + if len(m) <= 1 { + for id, e := range m { + b = append(b, 1<<3|WireStartGroup) + b = append(b, 2<<3|WireVarint) + b = appendVarint(b, uint64(id)) + + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + msgWithLen := skipVarint(e.enc) // skip old tag, but leave the length varint + b = append(b, 3<<3|WireBytes) + b = append(b, msgWithLen...) + b = append(b, 1<<3|WireEndGroup) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + b, err = ei.marshaler(b, p, 3<<3|WireBytes, deterministic) + if err != nil { + return b, err + } + b = append(b, 1<<3|WireEndGroup) + } + return b, nil + } + + // Sort the keys to provide a deterministic encoding. + keys := make([]int, 0, len(m)) + for k := range m { + keys = append(keys, int(k)) + } + sort.Ints(keys) + + for _, id := range keys { + e := m[int32(id)] + b = append(b, 1<<3|WireStartGroup) + b = append(b, 2<<3|WireVarint) + b = appendVarint(b, uint64(id)) + + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + msgWithLen := skipVarint(e.enc) // skip old tag, but leave the length varint + b = append(b, 3<<3|WireBytes) + b = append(b, msgWithLen...) + b = append(b, 1<<3|WireEndGroup) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + b, err = ei.marshaler(b, p, 3<<3|WireBytes, deterministic) + b = append(b, 1<<3|WireEndGroup) + if err != nil { + return b, err + } + } + return b, nil +} + +// sizeV1Extensions computes the size of encoded data for a V1-API extension field. +func (u *marshalInfo) sizeV1Extensions(m map[int32]Extension) int { + if m == nil { + return 0 + } + + n := 0 + for _, e := range m { + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + n += len(e.enc) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + n += ei.sizer(p, ei.tagsize) + } + return n +} + +// appendV1Extensions marshals a V1-API extension field to the end of byte slice b. +func (u *marshalInfo) appendV1Extensions(b []byte, m map[int32]Extension, deterministic bool) ([]byte, error) { + if m == nil { + return b, nil + } + + // Sort the keys to provide a deterministic encoding. 
+ keys := make([]int, 0, len(m)) + for k := range m { + keys = append(keys, int(k)) + } + sort.Ints(keys) + + var err error + for _, k := range keys { + e := m[int32(k)] + if e.value == nil || e.desc == nil { + // Extension is only in its encoded form. + b = append(b, e.enc...) + continue + } + + // We don't skip extensions that have an encoded form set, + // because the extension value may have been mutated after + // the last time this function was called. + + ei := u.getExtElemInfo(e.desc) + v := e.value + p := toAddrPointer(&v, ei.isptr) + b, err = ei.marshaler(b, p, ei.wiretag, deterministic) + if err != nil { + return b, err + } + } + return b, nil +} + +// newMarshaler is the interface representing objects that can marshal themselves. +// +// This exists to support protoc-gen-go generated messages. +// The proto package will stop type-asserting to this interface in the future. +// +// DO NOT DEPEND ON THIS. +type newMarshaler interface { + XXX_Size() int + XXX_Marshal(b []byte, deterministic bool) ([]byte, error) +} + +// Size returns the encoded size of a protocol buffer message. +// This is the main entry point. +func Size(pb Message) int { + if m, ok := pb.(newMarshaler); ok { + return m.XXX_Size() + } + if m, ok := pb.(Marshaler); ok { + // If the message can marshal itself, let it do it, for compatibility. + // NOTE: This is not efficient. + b, _ := m.Marshal() + return len(b) + } + // in case somehow we didn't generate the wrapper + if pb == nil { + return 0 + } + var info InternalMessageInfo + return info.Size(pb) +} + +// Marshal takes a protocol buffer message +// and encodes it into the wire format, returning the data. +// This is the main entry point. +func Marshal(pb Message) ([]byte, error) { + if m, ok := pb.(newMarshaler); ok { + siz := m.XXX_Size() + b := make([]byte, 0, siz) + return m.XXX_Marshal(b, false) + } + if m, ok := pb.(Marshaler); ok { + // If the message can marshal itself, let it do it, for compatibility. + // NOTE: This is not efficient. + return m.Marshal() + } + // in case somehow we didn't generate the wrapper + if pb == nil { + return nil, ErrNil + } + var info InternalMessageInfo + siz := info.Size(pb) + b := make([]byte, 0, siz) + return info.Marshal(b, pb, false) +} + +// Marshal takes a protocol buffer message +// and encodes it into the wire format, writing the result to the +// Buffer. +// This is an alternative entry point. It is not necessary to use +// a Buffer for most applications. +func (p *Buffer) Marshal(pb Message) error { + var err error + if m, ok := pb.(newMarshaler); ok { + siz := m.XXX_Size() + p.grow(siz) // make sure buf has enough capacity + p.buf, err = m.XXX_Marshal(p.buf, p.deterministic) + return err + } + if m, ok := pb.(Marshaler); ok { + // If the message can marshal itself, let it do it, for compatibility. + // NOTE: This is not efficient. + b, err := m.Marshal() + p.buf = append(p.buf, b...) + return err + } + // in case somehow we didn't generate the wrapper + if pb == nil { + return ErrNil + } + var info InternalMessageInfo + siz := info.Size(pb) + p.grow(siz) // make sure buf has enough capacity + p.buf, err = info.Marshal(p.buf, pb, p.deterministic) + return err +} + +// grow grows the buffer's capacity, if necessary, to guarantee space for +// another n bytes. After grow(n), at least n bytes can be written to the +// buffer without another allocation. 
+func (p *Buffer) grow(n int) { + need := len(p.buf) + n + if need <= cap(p.buf) { + return + } + newCap := len(p.buf) * 2 + if newCap < need { + newCap = need + } + p.buf = append(make([]byte, 0, newCap), p.buf...) +} diff --git a/vendor/github.com/golang/protobuf/proto/table_merge.go b/vendor/github.com/golang/protobuf/proto/table_merge.go new file mode 100644 index 00000000..5525def6 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/table_merge.go @@ -0,0 +1,654 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2016 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +import ( + "fmt" + "reflect" + "strings" + "sync" + "sync/atomic" +) + +// Merge merges the src message into dst. +// This assumes that dst and src of the same type and are non-nil. +func (a *InternalMessageInfo) Merge(dst, src Message) { + mi := atomicLoadMergeInfo(&a.merge) + if mi == nil { + mi = getMergeInfo(reflect.TypeOf(dst).Elem()) + atomicStoreMergeInfo(&a.merge, mi) + } + mi.merge(toPointer(&dst), toPointer(&src)) +} + +type mergeInfo struct { + typ reflect.Type + + initialized int32 // 0: only typ is valid, 1: everything is valid + lock sync.Mutex + + fields []mergeFieldInfo + unrecognized field // Offset of XXX_unrecognized +} + +type mergeFieldInfo struct { + field field // Offset of field, guaranteed to be valid + + // isPointer reports whether the value in the field is a pointer. + // This is true for the following situations: + // * Pointer to struct + // * Pointer to basic type (proto2 only) + // * Slice (first value in slice header is a pointer) + // * String (first value in string header is a pointer) + isPointer bool + + // basicWidth reports the width of the field assuming that it is directly + // embedded in the struct (as is the case for basic types in proto3). 
+ // The possible values are: + // 0: invalid + // 1: bool + // 4: int32, uint32, float32 + // 8: int64, uint64, float64 + basicWidth int + + // Where dst and src are pointers to the types being merged. + merge func(dst, src pointer) +} + +var ( + mergeInfoMap = map[reflect.Type]*mergeInfo{} + mergeInfoLock sync.Mutex +) + +func getMergeInfo(t reflect.Type) *mergeInfo { + mergeInfoLock.Lock() + defer mergeInfoLock.Unlock() + mi := mergeInfoMap[t] + if mi == nil { + mi = &mergeInfo{typ: t} + mergeInfoMap[t] = mi + } + return mi +} + +// merge merges src into dst assuming they are both of type *mi.typ. +func (mi *mergeInfo) merge(dst, src pointer) { + if dst.isNil() { + panic("proto: nil destination") + } + if src.isNil() { + return // Nothing to do. + } + + if atomic.LoadInt32(&mi.initialized) == 0 { + mi.computeMergeInfo() + } + + for _, fi := range mi.fields { + sfp := src.offset(fi.field) + + // As an optimization, we can avoid the merge function call cost + // if we know for sure that the source will have no effect + // by checking if it is the zero value. + if unsafeAllowed { + if fi.isPointer && sfp.getPointer().isNil() { // Could be slice or string + continue + } + if fi.basicWidth > 0 { + switch { + case fi.basicWidth == 1 && !*sfp.toBool(): + continue + case fi.basicWidth == 4 && *sfp.toUint32() == 0: + continue + case fi.basicWidth == 8 && *sfp.toUint64() == 0: + continue + } + } + } + + dfp := dst.offset(fi.field) + fi.merge(dfp, sfp) + } + + // TODO: Make this faster? + out := dst.asPointerTo(mi.typ).Elem() + in := src.asPointerTo(mi.typ).Elem() + if emIn, err := extendable(in.Addr().Interface()); err == nil { + emOut, _ := extendable(out.Addr().Interface()) + mIn, muIn := emIn.extensionsRead() + if mIn != nil { + mOut := emOut.extensionsWrite() + muIn.Lock() + mergeExtension(mOut, mIn) + muIn.Unlock() + } + } + + if mi.unrecognized.IsValid() { + if b := *src.offset(mi.unrecognized).toBytes(); len(b) > 0 { + *dst.offset(mi.unrecognized).toBytes() = append([]byte(nil), b...) + } + } +} + +func (mi *mergeInfo) computeMergeInfo() { + mi.lock.Lock() + defer mi.lock.Unlock() + if mi.initialized != 0 { + return + } + t := mi.typ + n := t.NumField() + + props := GetProperties(t) + for i := 0; i < n; i++ { + f := t.Field(i) + if strings.HasPrefix(f.Name, "XXX_") { + continue + } + + mfi := mergeFieldInfo{field: toField(&f)} + tf := f.Type + + // As an optimization, we can avoid the merge function call cost + // if we know for sure that the source will have no effect + // by checking if it is the zero value. + if unsafeAllowed { + switch tf.Kind() { + case reflect.Ptr, reflect.Slice, reflect.String: + // As a special case, we assume slices and strings are pointers + // since we know that the first field in the SliceSlice or + // StringHeader is a data pointer. + mfi.isPointer = true + case reflect.Bool: + mfi.basicWidth = 1 + case reflect.Int32, reflect.Uint32, reflect.Float32: + mfi.basicWidth = 4 + case reflect.Int64, reflect.Uint64, reflect.Float64: + mfi.basicWidth = 8 + } + } + + // Unwrap tf to get at its most basic type. 
+ var isPointer, isSlice bool + if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 { + isSlice = true + tf = tf.Elem() + } + if tf.Kind() == reflect.Ptr { + isPointer = true + tf = tf.Elem() + } + if isPointer && isSlice && tf.Kind() != reflect.Struct { + panic("both pointer and slice for basic type in " + tf.Name()) + } + + switch tf.Kind() { + case reflect.Int32: + switch { + case isSlice: // E.g., []int32 + mfi.merge = func(dst, src pointer) { + // NOTE: toInt32Slice is not defined (see pointer_reflect.go). + /* + sfsp := src.toInt32Slice() + if *sfsp != nil { + dfsp := dst.toInt32Slice() + *dfsp = append(*dfsp, *sfsp...) + if *dfsp == nil { + *dfsp = []int64{} + } + } + */ + sfs := src.getInt32Slice() + if sfs != nil { + dfs := dst.getInt32Slice() + dfs = append(dfs, sfs...) + if dfs == nil { + dfs = []int32{} + } + dst.setInt32Slice(dfs) + } + } + case isPointer: // E.g., *int32 + mfi.merge = func(dst, src pointer) { + // NOTE: toInt32Ptr is not defined (see pointer_reflect.go). + /* + sfpp := src.toInt32Ptr() + if *sfpp != nil { + dfpp := dst.toInt32Ptr() + if *dfpp == nil { + *dfpp = Int32(**sfpp) + } else { + **dfpp = **sfpp + } + } + */ + sfp := src.getInt32Ptr() + if sfp != nil { + dfp := dst.getInt32Ptr() + if dfp == nil { + dst.setInt32Ptr(*sfp) + } else { + *dfp = *sfp + } + } + } + default: // E.g., int32 + mfi.merge = func(dst, src pointer) { + if v := *src.toInt32(); v != 0 { + *dst.toInt32() = v + } + } + } + case reflect.Int64: + switch { + case isSlice: // E.g., []int64 + mfi.merge = func(dst, src pointer) { + sfsp := src.toInt64Slice() + if *sfsp != nil { + dfsp := dst.toInt64Slice() + *dfsp = append(*dfsp, *sfsp...) + if *dfsp == nil { + *dfsp = []int64{} + } + } + } + case isPointer: // E.g., *int64 + mfi.merge = func(dst, src pointer) { + sfpp := src.toInt64Ptr() + if *sfpp != nil { + dfpp := dst.toInt64Ptr() + if *dfpp == nil { + *dfpp = Int64(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., int64 + mfi.merge = func(dst, src pointer) { + if v := *src.toInt64(); v != 0 { + *dst.toInt64() = v + } + } + } + case reflect.Uint32: + switch { + case isSlice: // E.g., []uint32 + mfi.merge = func(dst, src pointer) { + sfsp := src.toUint32Slice() + if *sfsp != nil { + dfsp := dst.toUint32Slice() + *dfsp = append(*dfsp, *sfsp...) + if *dfsp == nil { + *dfsp = []uint32{} + } + } + } + case isPointer: // E.g., *uint32 + mfi.merge = func(dst, src pointer) { + sfpp := src.toUint32Ptr() + if *sfpp != nil { + dfpp := dst.toUint32Ptr() + if *dfpp == nil { + *dfpp = Uint32(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., uint32 + mfi.merge = func(dst, src pointer) { + if v := *src.toUint32(); v != 0 { + *dst.toUint32() = v + } + } + } + case reflect.Uint64: + switch { + case isSlice: // E.g., []uint64 + mfi.merge = func(dst, src pointer) { + sfsp := src.toUint64Slice() + if *sfsp != nil { + dfsp := dst.toUint64Slice() + *dfsp = append(*dfsp, *sfsp...) 
+ if *dfsp == nil { + *dfsp = []uint64{} + } + } + } + case isPointer: // E.g., *uint64 + mfi.merge = func(dst, src pointer) { + sfpp := src.toUint64Ptr() + if *sfpp != nil { + dfpp := dst.toUint64Ptr() + if *dfpp == nil { + *dfpp = Uint64(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., uint64 + mfi.merge = func(dst, src pointer) { + if v := *src.toUint64(); v != 0 { + *dst.toUint64() = v + } + } + } + case reflect.Float32: + switch { + case isSlice: // E.g., []float32 + mfi.merge = func(dst, src pointer) { + sfsp := src.toFloat32Slice() + if *sfsp != nil { + dfsp := dst.toFloat32Slice() + *dfsp = append(*dfsp, *sfsp...) + if *dfsp == nil { + *dfsp = []float32{} + } + } + } + case isPointer: // E.g., *float32 + mfi.merge = func(dst, src pointer) { + sfpp := src.toFloat32Ptr() + if *sfpp != nil { + dfpp := dst.toFloat32Ptr() + if *dfpp == nil { + *dfpp = Float32(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., float32 + mfi.merge = func(dst, src pointer) { + if v := *src.toFloat32(); v != 0 { + *dst.toFloat32() = v + } + } + } + case reflect.Float64: + switch { + case isSlice: // E.g., []float64 + mfi.merge = func(dst, src pointer) { + sfsp := src.toFloat64Slice() + if *sfsp != nil { + dfsp := dst.toFloat64Slice() + *dfsp = append(*dfsp, *sfsp...) + if *dfsp == nil { + *dfsp = []float64{} + } + } + } + case isPointer: // E.g., *float64 + mfi.merge = func(dst, src pointer) { + sfpp := src.toFloat64Ptr() + if *sfpp != nil { + dfpp := dst.toFloat64Ptr() + if *dfpp == nil { + *dfpp = Float64(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., float64 + mfi.merge = func(dst, src pointer) { + if v := *src.toFloat64(); v != 0 { + *dst.toFloat64() = v + } + } + } + case reflect.Bool: + switch { + case isSlice: // E.g., []bool + mfi.merge = func(dst, src pointer) { + sfsp := src.toBoolSlice() + if *sfsp != nil { + dfsp := dst.toBoolSlice() + *dfsp = append(*dfsp, *sfsp...) + if *dfsp == nil { + *dfsp = []bool{} + } + } + } + case isPointer: // E.g., *bool + mfi.merge = func(dst, src pointer) { + sfpp := src.toBoolPtr() + if *sfpp != nil { + dfpp := dst.toBoolPtr() + if *dfpp == nil { + *dfpp = Bool(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., bool + mfi.merge = func(dst, src pointer) { + if v := *src.toBool(); v { + *dst.toBool() = v + } + } + } + case reflect.String: + switch { + case isSlice: // E.g., []string + mfi.merge = func(dst, src pointer) { + sfsp := src.toStringSlice() + if *sfsp != nil { + dfsp := dst.toStringSlice() + *dfsp = append(*dfsp, *sfsp...) 
+ if *dfsp == nil { + *dfsp = []string{} + } + } + } + case isPointer: // E.g., *string + mfi.merge = func(dst, src pointer) { + sfpp := src.toStringPtr() + if *sfpp != nil { + dfpp := dst.toStringPtr() + if *dfpp == nil { + *dfpp = String(**sfpp) + } else { + **dfpp = **sfpp + } + } + } + default: // E.g., string + mfi.merge = func(dst, src pointer) { + if v := *src.toString(); v != "" { + *dst.toString() = v + } + } + } + case reflect.Slice: + isProto3 := props.Prop[i].proto3 + switch { + case isPointer: + panic("bad pointer in byte slice case in " + tf.Name()) + case tf.Elem().Kind() != reflect.Uint8: + panic("bad element kind in byte slice case in " + tf.Name()) + case isSlice: // E.g., [][]byte + mfi.merge = func(dst, src pointer) { + sbsp := src.toBytesSlice() + if *sbsp != nil { + dbsp := dst.toBytesSlice() + for _, sb := range *sbsp { + if sb == nil { + *dbsp = append(*dbsp, nil) + } else { + *dbsp = append(*dbsp, append([]byte{}, sb...)) + } + } + if *dbsp == nil { + *dbsp = [][]byte{} + } + } + } + default: // E.g., []byte + mfi.merge = func(dst, src pointer) { + sbp := src.toBytes() + if *sbp != nil { + dbp := dst.toBytes() + if !isProto3 || len(*sbp) > 0 { + *dbp = append([]byte{}, *sbp...) + } + } + } + } + case reflect.Struct: + switch { + case !isPointer: + panic(fmt.Sprintf("message field %s without pointer", tf)) + case isSlice: // E.g., []*pb.T + mi := getMergeInfo(tf) + mfi.merge = func(dst, src pointer) { + sps := src.getPointerSlice() + if sps != nil { + dps := dst.getPointerSlice() + for _, sp := range sps { + var dp pointer + if !sp.isNil() { + dp = valToPointer(reflect.New(tf)) + mi.merge(dp, sp) + } + dps = append(dps, dp) + } + if dps == nil { + dps = []pointer{} + } + dst.setPointerSlice(dps) + } + } + default: // E.g., *pb.T + mi := getMergeInfo(tf) + mfi.merge = func(dst, src pointer) { + sp := src.getPointer() + if !sp.isNil() { + dp := dst.getPointer() + if dp.isNil() { + dp = valToPointer(reflect.New(tf)) + dst.setPointer(dp) + } + mi.merge(dp, sp) + } + } + } + case reflect.Map: + switch { + case isPointer || isSlice: + panic("bad pointer or slice in map case in " + tf.Name()) + default: // E.g., map[K]V + mfi.merge = func(dst, src pointer) { + sm := src.asPointerTo(tf).Elem() + if sm.Len() == 0 { + return + } + dm := dst.asPointerTo(tf).Elem() + if dm.IsNil() { + dm.Set(reflect.MakeMap(tf)) + } + + switch tf.Elem().Kind() { + case reflect.Ptr: // Proto struct (e.g., *T) + for _, key := range sm.MapKeys() { + val := sm.MapIndex(key) + val = reflect.ValueOf(Clone(val.Interface().(Message))) + dm.SetMapIndex(key, val) + } + case reflect.Slice: // E.g. Bytes type (e.g., []byte) + for _, key := range sm.MapKeys() { + val := sm.MapIndex(key) + val = reflect.ValueOf(append([]byte{}, val.Bytes()...)) + dm.SetMapIndex(key, val) + } + default: // Basic type (e.g., string) + for _, key := range sm.MapKeys() { + val := sm.MapIndex(key) + dm.SetMapIndex(key, val) + } + } + } + } + case reflect.Interface: + // Must be oneof field. + switch { + case isPointer || isSlice: + panic("bad pointer or slice in interface case in " + tf.Name()) + default: // E.g., interface{} + // TODO: Make this faster? 
+ mfi.merge = func(dst, src pointer) { + su := src.asPointerTo(tf).Elem() + if !su.IsNil() { + du := dst.asPointerTo(tf).Elem() + typ := su.Elem().Type() + if du.IsNil() || du.Elem().Type() != typ { + du.Set(reflect.New(typ.Elem())) // Initialize interface if empty + } + sv := su.Elem().Elem().Field(0) + if sv.Kind() == reflect.Ptr && sv.IsNil() { + return + } + dv := du.Elem().Elem().Field(0) + if dv.Kind() == reflect.Ptr && dv.IsNil() { + dv.Set(reflect.New(sv.Type().Elem())) // Initialize proto message if empty + } + switch sv.Type().Kind() { + case reflect.Ptr: // Proto struct (e.g., *T) + Merge(dv.Interface().(Message), sv.Interface().(Message)) + case reflect.Slice: // E.g. Bytes type (e.g., []byte) + dv.Set(reflect.ValueOf(append([]byte{}, sv.Bytes()...))) + default: // Basic type (e.g., string) + dv.Set(sv) + } + } + } + } + default: + panic(fmt.Sprintf("merger not found for type:%s", tf)) + } + mi.fields = append(mi.fields, mfi) + } + + mi.unrecognized = invalidField + if f, ok := t.FieldByName("XXX_unrecognized"); ok { + if f.Type != reflect.TypeOf([]byte{}) { + panic("expected XXX_unrecognized to be of type []byte") + } + mi.unrecognized = toField(&f) + } + + atomic.StoreInt32(&mi.initialized, 1) +} diff --git a/vendor/github.com/golang/protobuf/proto/table_unmarshal.go b/vendor/github.com/golang/protobuf/proto/table_unmarshal.go new file mode 100644 index 00000000..55f0340a --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/table_unmarshal.go @@ -0,0 +1,1967 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2016 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +import ( + "errors" + "fmt" + "io" + "math" + "reflect" + "strconv" + "strings" + "sync" + "sync/atomic" + "unicode/utf8" +) + +// Unmarshal is the entry point from the generated .pb.go files. +// This function is not intended to be used by non-generated code. +// This function is not subject to any compatibility guarantee. 
+// msg contains a pointer to a protocol buffer struct. +// b is the data to be unmarshaled into the protocol buffer. +// a is a pointer to a place to store cached unmarshal information. +func (a *InternalMessageInfo) Unmarshal(msg Message, b []byte) error { + // Load the unmarshal information for this message type. + // The atomic load ensures memory consistency. + u := atomicLoadUnmarshalInfo(&a.unmarshal) + if u == nil { + // Slow path: find unmarshal info for msg, update a with it. + u = getUnmarshalInfo(reflect.TypeOf(msg).Elem()) + atomicStoreUnmarshalInfo(&a.unmarshal, u) + } + // Then do the unmarshaling. + err := u.unmarshal(toPointer(&msg), b) + return err +} + +type unmarshalInfo struct { + typ reflect.Type // type of the protobuf struct + + // 0 = only typ field is initialized + // 1 = completely initialized + initialized int32 + lock sync.Mutex // prevents double initialization + dense []unmarshalFieldInfo // fields indexed by tag # + sparse map[uint64]unmarshalFieldInfo // fields indexed by tag # + reqFields []string // names of required fields + reqMask uint64 // 1<<len(reqFields)-1 when reqFields is all set + unrecognized field // offset of []byte to put unrecognized data (or invalidField if we should throw it away) + extensions field // offset of extensions field (of type proto.XXX_InternalExtensions), or invalidField if it does not exist + oldExtensions field // offset of old-form extensions field (of type map[int]Extension) + extensionRanges []ExtensionRange // if non-empty, the extensions for this message are in this range + isMessageSet bool // if true, implies extensions field is []byte with message set wire format +} + +// An unmarshaler takes a stream of bytes and a pointer to a field of a message. +// It decodes the field, stores it at f, and returns the remaining bytes. +// w is the wire encoding. +// b is the data after the tag and wire encoding have been read. +type unmarshaler func(b []byte, f pointer, w int) ([]byte, error) + +// unmarshalFieldInfo contains the information to unmarshal a single field of a message. +type unmarshalFieldInfo struct { + // location of the field in the proto message structure. + field field + + // function to unmarshal the data for the field. + unmarshal unmarshaler + + // if a required field, contains a single set bit at this field's index in the required field list. + reqMask uint64 +} + +var ( + unmarshalInfoMap = map[reflect.Type]*unmarshalInfo{} + unmarshalInfoLock sync.Mutex +) + +// getUnmarshalInfo returns the data structure which can be +// subsequently used to unmarshal a message of the given type. +// t is the type of the message (note: not pointer to message). +func getUnmarshalInfo(t reflect.Type) *unmarshalInfo { + // It would be correct to return a new unmarshalInfo + // unconditionally. We would end up allocating one + // per occurrence of that type as a message or submessage. + // We use a cache here just to reduce memory footprint. + unmarshalInfoLock.Lock() + defer unmarshalInfoLock.Unlock() + u := unmarshalInfoMap[t] + if u == nil { + u = &unmarshalInfo{typ: t} + // Note: we just set the type here. The rest of the fields + // will be initialized on first use. + unmarshalInfoMap[t] = u + } + return u +} + +// unmarshal does the main work of unmarshaling a message. +// u provides type information used to unmarshal the message. +// m is a pointer to a protocol buffer message. +// b is a byte stream to unmarshal into m. +// This is top routine used when recursively unmarshaling submessages. +func (u *unmarshalInfo) unmarshal(m pointer, b []byte) error { + if atomic.LoadInt32(&u.initialized) == 0 { + u.computeUnmarshalInfo() + } + if u.isMessageSet { + return UnmarshalMessageSet(b, m.offset(u.extensions).toExtensions()) + } + var reqMask uint64 // bitmask of required fields we've seen. + var rnse *RequiredNotSetError // an instance of a RequiredNotSetError returned by a submessage. + for len(b) > 0 { + // Read tag and wire type. + // Special case 1 and 2 byte varints. + var x uint64 + if b[0] < 128 { + x = uint64(b[0]) + b = b[1:] + } else if len(b) >= 2 && b[1] < 128 { + x = uint64(b[0]&0x7f) + uint64(b[1])<<7 + b = b[2:] + } else { + var n int + x, n = decodeVarint(b) + if n == 0 { + return io.ErrUnexpectedEOF + } + b = b[n:] + } + tag := x >> 3 + wire := int(x) & 7 + + // Dispatch on the tag to one of the unmarshal* functions below. + var f unmarshalFieldInfo + if tag < uint64(len(u.dense)) { + f = u.dense[tag] + } else { + f = u.sparse[tag] + } + if fn := f.unmarshal; fn != nil { + var err error + b, err = fn(b, m.offset(f.field), wire) + if err == nil { + reqMask |= f.reqMask + continue + } + if r, ok := err.(*RequiredNotSetError); ok { + // Remember this error, but keep parsing. We need to produce + // a full parse even if a required field is missing. + rnse = r + reqMask |= f.reqMask + continue + } + if err != errInternalBadWireType { + return err + } + // Fragments with bad wire type are treated as unknown fields. + } + + // Unknown tag. + if !u.unrecognized.IsValid() { + // Don't keep unrecognized data; just skip it. + var err error + b, err = skipField(b, wire) + if err != nil { + return err + } + continue + } + // Keep unrecognized data around. + // maybe in extensions, maybe in the unrecognized field. + z := m.offset(u.unrecognized).toBytes() + var emap map[int32]Extension + var e Extension + for _, r := range u.extensionRanges { + if uint64(r.Start) <= tag && tag <= uint64(r.End) { + if u.extensions.IsValid() { + mp := m.offset(u.extensions).toExtensions() + emap = mp.extensionsWrite() + e = emap[int32(tag)] + z = &e.enc + break + } + if u.oldExtensions.IsValid() { + p := m.offset(u.oldExtensions).toOldExtensions() + emap = *p + if emap == nil { + emap = map[int32]Extension{} + *p = emap + } + e = emap[int32(tag)] + z = &e.enc + break + } + panic("no extensions field available") + } + } + + // Use wire type to skip data. + var err error + b0 := b + b, err = skipField(b, wire) + if err != nil { + return err + } + *z = encodeVarint(*z, tag<<3|uint64(wire)) + *z = append(*z, b0[:len(b0)-len(b)]...) + + if emap != nil { + emap[int32(tag)] = e + } + } + if rnse != nil { + // A required field of a submessage/group is missing. Return that error. + return rnse + } + if reqMask != u.reqMask { + // A required field of this message is missing. 
+ for _, n := range u.reqFields { + if reqMask&1 == 0 { + return &RequiredNotSetError{n} + } + reqMask >>= 1 + } + } + return nil +} + +// computeUnmarshalInfo fills in u with information for use +// in unmarshaling protocol buffers of type u.typ. +func (u *unmarshalInfo) computeUnmarshalInfo() { + u.lock.Lock() + defer u.lock.Unlock() + if u.initialized != 0 { + return + } + t := u.typ + n := t.NumField() + + // Set up the "not found" value for the unrecognized byte buffer. + // This is the default for proto3. + u.unrecognized = invalidField + u.extensions = invalidField + u.oldExtensions = invalidField + + // List of the generated type and offset for each oneof field. + type oneofField struct { + ityp reflect.Type // interface type of oneof field + field field // offset in containing message + } + var oneofFields []oneofField + + for i := 0; i < n; i++ { + f := t.Field(i) + if f.Name == "XXX_unrecognized" { + // The byte slice used to hold unrecognized input is special. + if f.Type != reflect.TypeOf(([]byte)(nil)) { + panic("bad type for XXX_unrecognized field: " + f.Type.Name()) + } + u.unrecognized = toField(&f) + continue + } + if f.Name == "XXX_InternalExtensions" { + // Ditto here. + if f.Type != reflect.TypeOf(XXX_InternalExtensions{}) { + panic("bad type for XXX_InternalExtensions field: " + f.Type.Name()) + } + u.extensions = toField(&f) + if f.Tag.Get("protobuf_messageset") == "1" { + u.isMessageSet = true + } + continue + } + if f.Name == "XXX_extensions" { + // An older form of the extensions field. + if f.Type != reflect.TypeOf((map[int32]Extension)(nil)) { + panic("bad type for XXX_extensions field: " + f.Type.Name()) + } + u.oldExtensions = toField(&f) + continue + } + if f.Name == "XXX_NoUnkeyedLiteral" || f.Name == "XXX_sizecache" { + continue + } + + oneof := f.Tag.Get("protobuf_oneof") + if oneof != "" { + oneofFields = append(oneofFields, oneofField{f.Type, toField(&f)}) + // The rest of oneof processing happens below. + continue + } + + tags := f.Tag.Get("protobuf") + tagArray := strings.Split(tags, ",") + if len(tagArray) < 2 { + panic("protobuf tag not enough fields in " + t.Name() + "." + f.Name + ": " + tags) + } + tag, err := strconv.Atoi(tagArray[1]) + if err != nil { + panic("protobuf tag field not an integer: " + tagArray[1]) + } + + name := "" + for _, tag := range tagArray[3:] { + if strings.HasPrefix(tag, "name=") { + name = tag[5:] + } + } + + // Extract unmarshaling function from the field (its type and tags). + unmarshal := fieldUnmarshaler(&f) + + // Required field? + var reqMask uint64 + if tagArray[2] == "req" { + bit := len(u.reqFields) + u.reqFields = append(u.reqFields, name) + reqMask = uint64(1) << uint(bit) + // TODO: if we have more than 64 required fields, we end up + // not verifying that all required fields are present. + // Fix this, perhaps using a count of required fields? + } + + // Store the info in the correct slot in the message. + u.setTag(tag, toField(&f), unmarshal, reqMask) + } + + // Find any types associated with oneof fields. + // TODO: XXX_OneofFuncs returns more info than we need. Get rid of some of it? 
fn := reflect.Zero(reflect.PtrTo(t)).MethodByName("XXX_OneofFuncs") + if fn.IsValid() { + res := fn.Call(nil)[3] // last return value from XXX_OneofFuncs: []interface{} + for i := res.Len() - 1; i >= 0; i-- { + v := res.Index(i) // interface{} + tptr := reflect.ValueOf(v.Interface()).Type() // *Msg_X + typ := tptr.Elem() // Msg_X + + f := typ.Field(0) // oneof implementers have one field + baseUnmarshal := fieldUnmarshaler(&f) + tagstr := strings.Split(f.Tag.Get("protobuf"), ",")[1] + tag, err := strconv.Atoi(tagstr) + if err != nil { + panic("protobuf tag field not an integer: " + tagstr) + } + + // Find the oneof field that this struct implements. + // Might take O(n^2) to process all of the oneofs, but who cares. + for _, of := range oneofFields { + if tptr.Implements(of.ityp) { + // We have found the corresponding interface for this struct. + // That lets us know where this struct should be stored + // when we encounter it during unmarshaling. + unmarshal := makeUnmarshalOneof(typ, of.ityp, baseUnmarshal) + u.setTag(tag, of.field, unmarshal, 0) + } + } + } + } + + // Get extension ranges, if any. + fn = reflect.Zero(reflect.PtrTo(t)).MethodByName("ExtensionRangeArray") + if fn.IsValid() { + if !u.extensions.IsValid() && !u.oldExtensions.IsValid() { + panic("a message with extensions, but no extensions field in " + t.Name()) + } + u.extensionRanges = fn.Call(nil)[0].Interface().([]ExtensionRange) + } + + // Explicitly disallow tag 0. This will ensure we flag an error + // when decoding a buffer of all zeros. Without this code, we + // would decode and skip an all-zero buffer of even length. + // [0 0] is [tag=0/wiretype=varint varint-encoded-0]. + u.setTag(0, zeroField, func(b []byte, f pointer, w int) ([]byte, error) { + return nil, fmt.Errorf("proto: %s: illegal tag 0 (wire type %d)", t, w) + }, 0) + + // Set mask for required field check. + u.reqMask = uint64(1)<<uint(len(u.reqFields)) - 1 + + atomic.StoreInt32(&u.initialized, 1) +} + +// setTag stores the unmarshal information for the given tag. +// tag = tag # for field +// field/unmarshal = unmarshal info for that field. +// reqMask = if required, bitmask for field position in required field list. 0 otherwise. +func (u *unmarshalInfo) setTag(tag int, field field, unmarshal unmarshaler, reqMask uint64) { + i := unmarshalFieldInfo{field: field, unmarshal: unmarshal, reqMask: reqMask} + n := u.typ.NumField() + if tag >= 0 && (tag < 16 || tag < 2*n) { // TODO: what are the right numbers here? + for len(u.dense) <= tag { + u.dense = append(u.dense, unmarshalFieldInfo{}) + } + u.dense[tag] = i + return + } + if u.sparse == nil { + u.sparse = map[uint64]unmarshalFieldInfo{} + } + u.sparse[uint64(tag)] = i +} + +// fieldUnmarshaler returns an unmarshaler for the given field. +func fieldUnmarshaler(f *reflect.StructField) unmarshaler { + if f.Type.Kind() == reflect.Map { + return makeUnmarshalMap(f) + } + return typeUnmarshaler(f.Type, f.Tag.Get("protobuf")) +} + +// typeUnmarshaler returns an unmarshaler for the given field type / field tag pair. +func typeUnmarshaler(t reflect.Type, tags string) unmarshaler { + tagArray := strings.Split(tags, ",") + encoding := tagArray[0] + name := "unknown" + for _, tag := range tagArray[3:] { + if strings.HasPrefix(tag, "name=") { + name = tag[5:] + } + } + + // Figure out packaging (pointer, slice, or both) + slice := false + pointer := false + if t.Kind() == reflect.Slice && t.Elem().Kind() != reflect.Uint8 { + slice = true + t = t.Elem() + } + if t.Kind() == reflect.Ptr { + pointer = true + t = t.Elem() + } + + // We'll never have both pointer and slice for basic types. 
+ if pointer && slice && t.Kind() != reflect.Struct { + panic("both pointer and slice for basic type in " + t.Name()) + } + + switch t.Kind() { + case reflect.Bool: + if pointer { + return unmarshalBoolPtr + } + if slice { + return unmarshalBoolSlice + } + return unmarshalBoolValue + case reflect.Int32: + switch encoding { + case "fixed32": + if pointer { + return unmarshalFixedS32Ptr + } + if slice { + return unmarshalFixedS32Slice + } + return unmarshalFixedS32Value + case "varint": + // this could be int32 or enum + if pointer { + return unmarshalInt32Ptr + } + if slice { + return unmarshalInt32Slice + } + return unmarshalInt32Value + case "zigzag32": + if pointer { + return unmarshalSint32Ptr + } + if slice { + return unmarshalSint32Slice + } + return unmarshalSint32Value + } + case reflect.Int64: + switch encoding { + case "fixed64": + if pointer { + return unmarshalFixedS64Ptr + } + if slice { + return unmarshalFixedS64Slice + } + return unmarshalFixedS64Value + case "varint": + if pointer { + return unmarshalInt64Ptr + } + if slice { + return unmarshalInt64Slice + } + return unmarshalInt64Value + case "zigzag64": + if pointer { + return unmarshalSint64Ptr + } + if slice { + return unmarshalSint64Slice + } + return unmarshalSint64Value + } + case reflect.Uint32: + switch encoding { + case "fixed32": + if pointer { + return unmarshalFixed32Ptr + } + if slice { + return unmarshalFixed32Slice + } + return unmarshalFixed32Value + case "varint": + if pointer { + return unmarshalUint32Ptr + } + if slice { + return unmarshalUint32Slice + } + return unmarshalUint32Value + } + case reflect.Uint64: + switch encoding { + case "fixed64": + if pointer { + return unmarshalFixed64Ptr + } + if slice { + return unmarshalFixed64Slice + } + return unmarshalFixed64Value + case "varint": + if pointer { + return unmarshalUint64Ptr + } + if slice { + return unmarshalUint64Slice + } + return unmarshalUint64Value + } + case reflect.Float32: + if pointer { + return unmarshalFloat32Ptr + } + if slice { + return unmarshalFloat32Slice + } + return unmarshalFloat32Value + case reflect.Float64: + if pointer { + return unmarshalFloat64Ptr + } + if slice { + return unmarshalFloat64Slice + } + return unmarshalFloat64Value + case reflect.Map: + panic("map type in typeUnmarshaler in " + t.Name()) + case reflect.Slice: + if pointer { + panic("bad pointer in slice case in " + t.Name()) + } + if slice { + return unmarshalBytesSlice + } + return unmarshalBytesValue + case reflect.String: + if pointer { + return unmarshalStringPtr + } + if slice { + return unmarshalStringSlice + } + return unmarshalStringValue + case reflect.Struct: + // message or group field + if !pointer { + panic(fmt.Sprintf("message/group field %s:%s without pointer", t, encoding)) + } + switch encoding { + case "bytes": + if slice { + return makeUnmarshalMessageSlicePtr(getUnmarshalInfo(t), name) + } + return makeUnmarshalMessagePtr(getUnmarshalInfo(t), name) + case "group": + if slice { + return makeUnmarshalGroupSlicePtr(getUnmarshalInfo(t), name) + } + return makeUnmarshalGroupPtr(getUnmarshalInfo(t), name) + } + } + panic(fmt.Sprintf("unmarshaler not found type:%s encoding:%s", t, encoding)) +} + +// Below are all the unmarshalers for individual fields of various types. 
+ +func unmarshalInt64Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x) + *f.toInt64() = v + return b, nil +} + +func unmarshalInt64Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x) + *f.toInt64Ptr() = &v + return b, nil +} + +func unmarshalInt64Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x) + s := f.toInt64Slice() + *s = append(*s, v) + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x) + s := f.toInt64Slice() + *s = append(*s, v) + return b, nil +} + +func unmarshalSint64Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x>>1) ^ int64(x)<<63>>63 + *f.toInt64() = v + return b, nil +} + +func unmarshalSint64Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x>>1) ^ int64(x)<<63>>63 + *f.toInt64Ptr() = &v + return b, nil +} + +func unmarshalSint64Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x>>1) ^ int64(x)<<63>>63 + s := f.toInt64Slice() + *s = append(*s, v) + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int64(x>>1) ^ int64(x)<<63>>63 + s := f.toInt64Slice() + *s = append(*s, v) + return b, nil +} + +func unmarshalUint64Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint64(x) + *f.toUint64() = v + return b, nil +} + +func unmarshalUint64Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint64(x) + *f.toUint64Ptr() = &v + return b, nil +} + +func unmarshalUint64Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 
{ + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint64(x) + s := f.toUint64Slice() + *s = append(*s, v) + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint64(x) + s := f.toUint64Slice() + *s = append(*s, v) + return b, nil +} + +func unmarshalInt32Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x) + *f.toInt32() = v + return b, nil +} + +func unmarshalInt32Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x) + f.setInt32Ptr(v) + return b, nil +} + +func unmarshalInt32Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x) + f.appendInt32Slice(v) + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x) + f.appendInt32Slice(v) + return b, nil +} + +func unmarshalSint32Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x>>1) ^ int32(x)<<31>>31 + *f.toInt32() = v + return b, nil +} + +func unmarshalSint32Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x>>1) ^ int32(x)<<31>>31 + f.setInt32Ptr(v) + return b, nil +} + +func unmarshalSint32Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x>>1) ^ int32(x)<<31>>31 + f.appendInt32Slice(v) + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := int32(x>>1) ^ int32(x)<<31>>31 + f.appendInt32Slice(v) + return b, nil +} + +func unmarshalUint32Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint32(x) + *f.toUint32() = v + return b, nil +} + +func unmarshalUint32Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint32(x) + *f.toUint32Ptr() = &v + return b, nil +} + +func unmarshalUint32Slice(b []byte, f pointer, w int) 
([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint32(x) + s := f.toUint32Slice() + *s = append(*s, v) + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + v := uint32(x) + s := f.toUint32Slice() + *s = append(*s, v) + return b, nil +} + +func unmarshalFixed64Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 + *f.toUint64() = v + return b[8:], nil +} + +func unmarshalFixed64Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 + *f.toUint64Ptr() = &v + return b[8:], nil +} + +func unmarshalFixed64Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 + s := f.toUint64Slice() + *s = append(*s, v) + b = b[8:] + } + return res, nil + } + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 + s := f.toUint64Slice() + *s = append(*s, v) + return b[8:], nil +} + +func unmarshalFixedS64Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := int64(b[0]) | int64(b[1])<<8 | int64(b[2])<<16 | int64(b[3])<<24 | int64(b[4])<<32 | int64(b[5])<<40 | int64(b[6])<<48 | int64(b[7])<<56 + *f.toInt64() = v + return b[8:], nil +} + +func unmarshalFixedS64Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := int64(b[0]) | int64(b[1])<<8 | int64(b[2])<<16 | int64(b[3])<<24 | int64(b[4])<<32 | int64(b[5])<<40 | int64(b[6])<<48 | int64(b[7])<<56 + *f.toInt64Ptr() = &v + return b[8:], nil +} + +func unmarshalFixedS64Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := int64(b[0]) | int64(b[1])<<8 | 
int64(b[2])<<16 | int64(b[3])<<24 | int64(b[4])<<32 | int64(b[5])<<40 | int64(b[6])<<48 | int64(b[7])<<56 + s := f.toInt64Slice() + *s = append(*s, v) + b = b[8:] + } + return res, nil + } + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := int64(b[0]) | int64(b[1])<<8 | int64(b[2])<<16 | int64(b[3])<<24 | int64(b[4])<<32 | int64(b[5])<<40 | int64(b[6])<<48 | int64(b[7])<<56 + s := f.toInt64Slice() + *s = append(*s, v) + return b[8:], nil +} + +func unmarshalFixed32Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24 + *f.toUint32() = v + return b[4:], nil +} + +func unmarshalFixed32Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24 + *f.toUint32Ptr() = &v + return b[4:], nil +} + +func unmarshalFixed32Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24 + s := f.toUint32Slice() + *s = append(*s, v) + b = b[4:] + } + return res, nil + } + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24 + s := f.toUint32Slice() + *s = append(*s, v) + return b[4:], nil +} + +func unmarshalFixedS32Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := int32(b[0]) | int32(b[1])<<8 | int32(b[2])<<16 | int32(b[3])<<24 + *f.toInt32() = v + return b[4:], nil +} + +func unmarshalFixedS32Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := int32(b[0]) | int32(b[1])<<8 | int32(b[2])<<16 | int32(b[3])<<24 + f.setInt32Ptr(v) + return b[4:], nil +} + +func unmarshalFixedS32Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := int32(b[0]) | int32(b[1])<<8 | int32(b[2])<<16 | int32(b[3])<<24 + f.appendInt32Slice(v) + b = b[4:] + } + return res, nil + } + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := int32(b[0]) | int32(b[1])<<8 | int32(b[2])<<16 | int32(b[3])<<24 + f.appendInt32Slice(v) + return b[4:], nil +} + +func unmarshalBoolValue(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + // Note: any length varint is allowed, even though any sane + // encoder will use one byte. 
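// Illustrative sketch (not part of the upstream file): because the decoder only
// tests x != 0, a bool still round-trips from a non-minimal varint encoding.
// Assuming the decodeVarint helper defined later in this file:
//
//	x, n := decodeVarint([]byte{0x01})       // canonical one-byte form: x == 1, n == 1
//	y, m := decodeVarint([]byte{0x81, 0x00}) // padded two-byte form:    y == 1, m == 2
//	_ = (x != 0) && (y != 0)                 // both decode to true
//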
+ // See https://github.com/golang/protobuf/issues/76 + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + // TODO: check if x>1? Tests seem to indicate no. + v := x != 0 + *f.toBool() = v + return b[n:], nil +} + +func unmarshalBoolPtr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + v := x != 0 + *f.toBoolPtr() = &v + return b[n:], nil +} + +func unmarshalBoolSlice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + x, n = decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + v := x != 0 + s := f.toBoolSlice() + *s = append(*s, v) + b = b[n:] + } + return res, nil + } + if w != WireVarint { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + v := x != 0 + s := f.toBoolSlice() + *s = append(*s, v) + return b[n:], nil +} + +func unmarshalFloat64Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float64frombits(uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56) + *f.toFloat64() = v + return b[8:], nil +} + +func unmarshalFloat64Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float64frombits(uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56) + *f.toFloat64Ptr() = &v + return b[8:], nil +} + +func unmarshalFloat64Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float64frombits(uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56) + s := f.toFloat64Slice() + *s = append(*s, v) + b = b[8:] + } + return res, nil + } + if w != WireFixed64 { + return b, errInternalBadWireType + } + if len(b) < 8 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float64frombits(uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56) + s := f.toFloat64Slice() + *s = append(*s, v) + return b[8:], nil +} + +func unmarshalFloat32Value(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float32frombits(uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24) + *f.toFloat32() = v + return b[4:], nil +} + +func unmarshalFloat32Ptr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + 
v := math.Float32frombits(uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24) + *f.toFloat32Ptr() = &v + return b[4:], nil +} + +func unmarshalFloat32Slice(b []byte, f pointer, w int) ([]byte, error) { + if w == WireBytes { // packed + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + res := b[x:] + b = b[:x] + for len(b) > 0 { + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float32frombits(uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24) + s := f.toFloat32Slice() + *s = append(*s, v) + b = b[4:] + } + return res, nil + } + if w != WireFixed32 { + return b, errInternalBadWireType + } + if len(b) < 4 { + return nil, io.ErrUnexpectedEOF + } + v := math.Float32frombits(uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24) + s := f.toFloat32Slice() + *s = append(*s, v) + return b[4:], nil +} + +func unmarshalStringValue(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + v := string(b[:x]) + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + *f.toString() = v + return b[x:], nil +} + +func unmarshalStringPtr(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + v := string(b[:x]) + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + *f.toStringPtr() = &v + return b[x:], nil +} + +func unmarshalStringSlice(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + v := string(b[:x]) + if !utf8.ValidString(v) { + return nil, errInvalidUTF8 + } + s := f.toStringSlice() + *s = append(*s, v) + return b[x:], nil +} + +var emptyBuf [0]byte + +func unmarshalBytesValue(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + // The use of append here is a trick which avoids the zeroing + // that would be required if we used a make/copy pair. + // We append to emptyBuf instead of nil because we want + // a non-nil result even when the length is 0. + v := append(emptyBuf[:], b[:x]...) + *f.toBytes() = v + return b[x:], nil +} + +func unmarshalBytesSlice(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + v := append(emptyBuf[:], b[:x]...) 
+ s := f.toBytesSlice() + *s = append(*s, v) + return b[x:], nil +} + +func makeUnmarshalMessagePtr(sub *unmarshalInfo, name string) unmarshaler { + return func(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + // First read the message field to see if something is there. + // The semantics of multiple submessages are weird. Instead of + // the last one winning (as it is for all other fields), multiple + // submessages are merged. + v := f.getPointer() + if v.isNil() { + v = valToPointer(reflect.New(sub.typ)) + f.setPointer(v) + } + err := sub.unmarshal(v, b[:x]) + if err != nil { + if r, ok := err.(*RequiredNotSetError); ok { + r.field = name + "." + r.field + } else { + return nil, err + } + } + return b[x:], err + } +} + +func makeUnmarshalMessageSlicePtr(sub *unmarshalInfo, name string) unmarshaler { + return func(b []byte, f pointer, w int) ([]byte, error) { + if w != WireBytes { + return b, errInternalBadWireType + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + v := valToPointer(reflect.New(sub.typ)) + err := sub.unmarshal(v, b[:x]) + if err != nil { + if r, ok := err.(*RequiredNotSetError); ok { + r.field = name + "." + r.field + } else { + return nil, err + } + } + f.appendPointer(v) + return b[x:], err + } +} + +func makeUnmarshalGroupPtr(sub *unmarshalInfo, name string) unmarshaler { + return func(b []byte, f pointer, w int) ([]byte, error) { + if w != WireStartGroup { + return b, errInternalBadWireType + } + x, y := findEndGroup(b) + if x < 0 { + return nil, io.ErrUnexpectedEOF + } + v := f.getPointer() + if v.isNil() { + v = valToPointer(reflect.New(sub.typ)) + f.setPointer(v) + } + err := sub.unmarshal(v, b[:x]) + if err != nil { + if r, ok := err.(*RequiredNotSetError); ok { + r.field = name + "." + r.field + } else { + return nil, err + } + } + return b[y:], err + } +} + +func makeUnmarshalGroupSlicePtr(sub *unmarshalInfo, name string) unmarshaler { + return func(b []byte, f pointer, w int) ([]byte, error) { + if w != WireStartGroup { + return b, errInternalBadWireType + } + x, y := findEndGroup(b) + if x < 0 { + return nil, io.ErrUnexpectedEOF + } + v := valToPointer(reflect.New(sub.typ)) + err := sub.unmarshal(v, b[:x]) + if err != nil { + if r, ok := err.(*RequiredNotSetError); ok { + r.field = name + "." + r.field + } else { + return nil, err + } + } + f.appendPointer(v) + return b[y:], err + } +} + +func makeUnmarshalMap(f *reflect.StructField) unmarshaler { + t := f.Type + kt := t.Key() + vt := t.Elem() + unmarshalKey := typeUnmarshaler(kt, f.Tag.Get("protobuf_key")) + unmarshalVal := typeUnmarshaler(vt, f.Tag.Get("protobuf_val")) + return func(b []byte, f pointer, w int) ([]byte, error) { + // The map entry is a submessage. Figure out how big it is. + if w != WireBytes { + return nil, fmt.Errorf("proto: bad wiretype for map field: got %d want %d", w, WireBytes) + } + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + b = b[n:] + if x > uint64(len(b)) { + return nil, io.ErrUnexpectedEOF + } + r := b[x:] // unused data to return + b = b[:x] // data for map entry + + // Note: we could use #keys * #values ~= 200 functions + // to do map decoding without reflection. Probably not worth it. + // Maps will be somewhat slow. 
Oh well. + + // Read key and value from data. + k := reflect.New(kt) + v := reflect.New(vt) + for len(b) > 0 { + x, n := decodeVarint(b) + if n == 0 { + return nil, io.ErrUnexpectedEOF + } + wire := int(x) & 7 + b = b[n:] + + var err error + switch x >> 3 { + case 1: + b, err = unmarshalKey(b, valToPointer(k), wire) + case 2: + b, err = unmarshalVal(b, valToPointer(v), wire) + default: + err = errInternalBadWireType // skip unknown tag + } + + if err == nil { + continue + } + if err != errInternalBadWireType { + return nil, err + } + + // Skip past unknown fields. + b, err = skipField(b, wire) + if err != nil { + return nil, err + } + } + + // Get map, allocate if needed. + m := f.asPointerTo(t).Elem() // an addressable map[K]T + if m.IsNil() { + m.Set(reflect.MakeMap(t)) + } + + // Insert into map. + m.SetMapIndex(k.Elem(), v.Elem()) + + return r, nil + } +} + +// makeUnmarshalOneof makes an unmarshaler for oneof fields. +// for: +// message Msg { +// oneof F { +// int64 X = 1; +// float64 Y = 2; +// } +// } +// typ is the type of the concrete entry for a oneof case (e.g. Msg_X). +// ityp is the interface type of the oneof field (e.g. isMsg_F). +// unmarshal is the unmarshaler for the base type of the oneof case (e.g. int64). +// Note that this function will be called once for each case in the oneof. +func makeUnmarshalOneof(typ, ityp reflect.Type, unmarshal unmarshaler) unmarshaler { + sf := typ.Field(0) + field0 := toField(&sf) + return func(b []byte, f pointer, w int) ([]byte, error) { + // Allocate holder for value. + v := reflect.New(typ) + + // Unmarshal data into holder. + // We unmarshal into the first field of the holder object. + var err error + b, err = unmarshal(b, valToPointer(v).offset(field0), w) + if err != nil { + return nil, err + } + + // Write pointer to holder into target field. + f.asPointerTo(ityp).Elem().Set(v) + + return b, nil + } +} + +// Error used by decode internally. +var errInternalBadWireType = errors.New("proto: internal error: bad wiretype") + +// skipField skips past a field of type wire and returns the remaining bytes. +func skipField(b []byte, wire int) ([]byte, error) { + switch wire { + case WireVarint: + _, k := decodeVarint(b) + if k == 0 { + return b, io.ErrUnexpectedEOF + } + b = b[k:] + case WireFixed32: + if len(b) < 4 { + return b, io.ErrUnexpectedEOF + } + b = b[4:] + case WireFixed64: + if len(b) < 8 { + return b, io.ErrUnexpectedEOF + } + b = b[8:] + case WireBytes: + m, k := decodeVarint(b) + if k == 0 || uint64(len(b)-k) < m { + return b, io.ErrUnexpectedEOF + } + b = b[uint64(k)+m:] + case WireStartGroup: + _, i := findEndGroup(b) + if i == -1 { + return b, io.ErrUnexpectedEOF + } + b = b[i:] + default: + return b, fmt.Errorf("proto: can't skip unknown wire type %d", wire) + } + return b, nil +} + +// findEndGroup finds the index of the next EndGroup tag. +// Groups may be nested, so the "next" EndGroup tag is the first +// unpaired EndGroup. +// findEndGroup returns the indexes of the start and end of the EndGroup tag. +// Returns (-1,-1) if it can't find one. 
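// Illustrative sketch (not part of the upstream file): each key on the wire is a varint
// tag whose low three bits are the wire type (x&7) and whose high bits are the field
// number (x>>3). A group's payload runs until the matching EndGroup tag, which is what
// findEndGroup below locates. Assuming the package's standard wire-type constants
// (WireVarint == 0, WireEndGroup == 4) and the helpers defined in this file:
//
//	// field 1 as a varint (42), then the EndGroup tag for field 1: (1<<3)|4 == 0x0c
//	body := []byte{0x08, 0x2a, 0x0c}
//	start, end := findEndGroup(body) // start == 2 (tag offset), end == 3 (just past it)
//	_ = body[:start]                 // the group's contents: field 1 == 42
//	_, _ = start, end
//
//	// encodeVarint / decodeVarint round-trip:
//	buf := encodeVarint(nil, 300) // []byte{0xac, 0x02}
//	v, n := decodeVarint(buf)     // v == 300, n == 2
//	_, _ = v, n
//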
+func findEndGroup(b []byte) (int, int) { + depth := 1 + i := 0 + for { + x, n := decodeVarint(b[i:]) + if n == 0 { + return -1, -1 + } + j := i + i += n + switch x & 7 { + case WireVarint: + _, k := decodeVarint(b[i:]) + if k == 0 { + return -1, -1 + } + i += k + case WireFixed32: + if len(b)-4 < i { + return -1, -1 + } + i += 4 + case WireFixed64: + if len(b)-8 < i { + return -1, -1 + } + i += 8 + case WireBytes: + m, k := decodeVarint(b[i:]) + if k == 0 { + return -1, -1 + } + i += k + if uint64(len(b)-i) < m { + return -1, -1 + } + i += int(m) + case WireStartGroup: + depth++ + case WireEndGroup: + depth-- + if depth == 0 { + return j, i + } + default: + return -1, -1 + } + } +} + +// encodeVarint appends a varint-encoded integer to b and returns the result. +func encodeVarint(b []byte, x uint64) []byte { + for x >= 1<<7 { + b = append(b, byte(x&0x7f|0x80)) + x >>= 7 + } + return append(b, byte(x)) +} + +// decodeVarint reads a varint-encoded integer from b. +// Returns the decoded integer and the number of bytes read. +// If there is an error, it returns 0,0. +func decodeVarint(b []byte) (uint64, int) { + var x, y uint64 + if len(b) <= 0 { + goto bad + } + x = uint64(b[0]) + if x < 0x80 { + return x, 1 + } + x -= 0x80 + + if len(b) <= 1 { + goto bad + } + y = uint64(b[1]) + x += y << 7 + if y < 0x80 { + return x, 2 + } + x -= 0x80 << 7 + + if len(b) <= 2 { + goto bad + } + y = uint64(b[2]) + x += y << 14 + if y < 0x80 { + return x, 3 + } + x -= 0x80 << 14 + + if len(b) <= 3 { + goto bad + } + y = uint64(b[3]) + x += y << 21 + if y < 0x80 { + return x, 4 + } + x -= 0x80 << 21 + + if len(b) <= 4 { + goto bad + } + y = uint64(b[4]) + x += y << 28 + if y < 0x80 { + return x, 5 + } + x -= 0x80 << 28 + + if len(b) <= 5 { + goto bad + } + y = uint64(b[5]) + x += y << 35 + if y < 0x80 { + return x, 6 + } + x -= 0x80 << 35 + + if len(b) <= 6 { + goto bad + } + y = uint64(b[6]) + x += y << 42 + if y < 0x80 { + return x, 7 + } + x -= 0x80 << 42 + + if len(b) <= 7 { + goto bad + } + y = uint64(b[7]) + x += y << 49 + if y < 0x80 { + return x, 8 + } + x -= 0x80 << 49 + + if len(b) <= 8 { + goto bad + } + y = uint64(b[8]) + x += y << 56 + if y < 0x80 { + return x, 9 + } + x -= 0x80 << 56 + + if len(b) <= 9 { + goto bad + } + y = uint64(b[9]) + x += y << 63 + if y < 2 { + return x, 10 + } + +bad: + return 0, 0 +} diff --git a/vendor/github.com/golang/protobuf/proto/text.go b/vendor/github.com/golang/protobuf/proto/text.go new file mode 100644 index 00000000..2205fdaa --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/text.go @@ -0,0 +1,843 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. +// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. 
+// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +// Functions for writing the text protocol buffer format. + +import ( + "bufio" + "bytes" + "encoding" + "errors" + "fmt" + "io" + "log" + "math" + "reflect" + "sort" + "strings" +) + +var ( + newline = []byte("\n") + spaces = []byte(" ") + endBraceNewline = []byte("}\n") + backslashN = []byte{'\\', 'n'} + backslashR = []byte{'\\', 'r'} + backslashT = []byte{'\\', 't'} + backslashDQ = []byte{'\\', '"'} + backslashBS = []byte{'\\', '\\'} + posInf = []byte("inf") + negInf = []byte("-inf") + nan = []byte("nan") +) + +type writer interface { + io.Writer + WriteByte(byte) error +} + +// textWriter is an io.Writer that tracks its indentation level. +type textWriter struct { + ind int + complete bool // if the current position is a complete line + compact bool // whether to write out as a one-liner + w writer +} + +func (w *textWriter) WriteString(s string) (n int, err error) { + if !strings.Contains(s, "\n") { + if !w.compact && w.complete { + w.writeIndent() + } + w.complete = false + return io.WriteString(w.w, s) + } + // WriteString is typically called without newlines, so this + // codepath and its copy are rare. We copy to avoid + // duplicating all of Write's logic here. 
+ return w.Write([]byte(s)) +} + +func (w *textWriter) Write(p []byte) (n int, err error) { + newlines := bytes.Count(p, newline) + if newlines == 0 { + if !w.compact && w.complete { + w.writeIndent() + } + n, err = w.w.Write(p) + w.complete = false + return n, err + } + + frags := bytes.SplitN(p, newline, newlines+1) + if w.compact { + for i, frag := range frags { + if i > 0 { + if err := w.w.WriteByte(' '); err != nil { + return n, err + } + n++ + } + nn, err := w.w.Write(frag) + n += nn + if err != nil { + return n, err + } + } + return n, nil + } + + for i, frag := range frags { + if w.complete { + w.writeIndent() + } + nn, err := w.w.Write(frag) + n += nn + if err != nil { + return n, err + } + if i+1 < len(frags) { + if err := w.w.WriteByte('\n'); err != nil { + return n, err + } + n++ + } + } + w.complete = len(frags[len(frags)-1]) == 0 + return n, nil +} + +func (w *textWriter) WriteByte(c byte) error { + if w.compact && c == '\n' { + c = ' ' + } + if !w.compact && w.complete { + w.writeIndent() + } + err := w.w.WriteByte(c) + w.complete = c == '\n' + return err +} + +func (w *textWriter) indent() { w.ind++ } + +func (w *textWriter) unindent() { + if w.ind == 0 { + log.Print("proto: textWriter unindented too far") + return + } + w.ind-- +} + +func writeName(w *textWriter, props *Properties) error { + if _, err := w.WriteString(props.OrigName); err != nil { + return err + } + if props.Wire != "group" { + return w.WriteByte(':') + } + return nil +} + +func requiresQuotes(u string) bool { + // When type URL contains any characters except [0-9A-Za-z./\-]*, it must be quoted. + for _, ch := range u { + switch { + case ch == '.' || ch == '/' || ch == '_': + continue + case '0' <= ch && ch <= '9': + continue + case 'A' <= ch && ch <= 'Z': + continue + case 'a' <= ch && ch <= 'z': + continue + default: + return true + } + } + return false +} + +// isAny reports whether sv is a google.protobuf.Any message +func isAny(sv reflect.Value) bool { + type wkt interface { + XXX_WellKnownType() string + } + t, ok := sv.Addr().Interface().(wkt) + return ok && t.XXX_WellKnownType() == "Any" +} + +// writeProto3Any writes an expanded google.protobuf.Any message. +// +// It returns (false, nil) if sv value can't be unmarshaled (e.g. because +// required messages are not linked in). +// +// It returns (true, error) when sv was written in expanded format or an error +// was encountered. 
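// Illustrative sketch (not part of the upstream file): with ExpandAny set, a
// google.protobuf.Any payload whose type is linked into the binary is printed inline
// rather than as raw type_url/value bytes, roughly in this shape (multi-line form;
// the compact form uses "]:<" and ">"):
//
//	[type.googleapis.com/example.Foo]: <
//	  name: "x"
//	>
//
// A minimal usage sketch, where anyMsg stands for some proto.Message carrying an Any:
//
//	tm := &proto.TextMarshaler{ExpandAny: true}
//	fmt.Println(tm.Text(anyMsg))
//
// If the type URL cannot be resolved via MessageType, writeProto3Any reports
// (false, nil) and the caller falls back to printing the raw fields.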
+func (tm *TextMarshaler) writeProto3Any(w *textWriter, sv reflect.Value) (bool, error) { + turl := sv.FieldByName("TypeUrl") + val := sv.FieldByName("Value") + if !turl.IsValid() || !val.IsValid() { + return true, errors.New("proto: invalid google.protobuf.Any message") + } + + b, ok := val.Interface().([]byte) + if !ok { + return true, errors.New("proto: invalid google.protobuf.Any message") + } + + parts := strings.Split(turl.String(), "/") + mt := MessageType(parts[len(parts)-1]) + if mt == nil { + return false, nil + } + m := reflect.New(mt.Elem()) + if err := Unmarshal(b, m.Interface().(Message)); err != nil { + return false, nil + } + w.Write([]byte("[")) + u := turl.String() + if requiresQuotes(u) { + writeString(w, u) + } else { + w.Write([]byte(u)) + } + if w.compact { + w.Write([]byte("]:<")) + } else { + w.Write([]byte("]: <\n")) + w.ind++ + } + if err := tm.writeStruct(w, m.Elem()); err != nil { + return true, err + } + if w.compact { + w.Write([]byte("> ")) + } else { + w.ind-- + w.Write([]byte(">\n")) + } + return true, nil +} + +func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error { + if tm.ExpandAny && isAny(sv) { + if canExpand, err := tm.writeProto3Any(w, sv); canExpand { + return err + } + } + st := sv.Type() + sprops := GetProperties(st) + for i := 0; i < sv.NumField(); i++ { + fv := sv.Field(i) + props := sprops.Prop[i] + name := st.Field(i).Name + + if name == "XXX_NoUnkeyedLiteral" { + continue + } + + if strings.HasPrefix(name, "XXX_") { + // There are two XXX_ fields: + // XXX_unrecognized []byte + // XXX_extensions map[int32]proto.Extension + // The first is handled here; + // the second is handled at the bottom of this function. + if name == "XXX_unrecognized" && !fv.IsNil() { + if err := writeUnknownStruct(w, fv.Interface().([]byte)); err != nil { + return err + } + } + continue + } + if fv.Kind() == reflect.Ptr && fv.IsNil() { + // Field not filled in. This could be an optional field or + // a required field that wasn't filled in. Either way, there + // isn't anything we can show for it. + continue + } + if fv.Kind() == reflect.Slice && fv.IsNil() { + // Repeated field that is empty, or a bytes field that is unused. + continue + } + + if props.Repeated && fv.Kind() == reflect.Slice { + // Repeated field. + for j := 0; j < fv.Len(); j++ { + if err := writeName(w, props); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte(' '); err != nil { + return err + } + } + v := fv.Index(j) + if v.Kind() == reflect.Ptr && v.IsNil() { + // A nil message in a repeated field is not valid, + // but we can handle that more gracefully than panicking. + if _, err := w.Write([]byte("\n")); err != nil { + return err + } + continue + } + if err := tm.writeAny(w, v, props); err != nil { + return err + } + if err := w.WriteByte('\n'); err != nil { + return err + } + } + continue + } + if fv.Kind() == reflect.Map { + // Map fields are rendered as a repeated struct with key/value fields. 
+ keys := fv.MapKeys() + sort.Sort(mapKeys(keys)) + for _, key := range keys { + val := fv.MapIndex(key) + if err := writeName(w, props); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte(' '); err != nil { + return err + } + } + // open struct + if err := w.WriteByte('<'); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte('\n'); err != nil { + return err + } + } + w.indent() + // key + if _, err := w.WriteString("key:"); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte(' '); err != nil { + return err + } + } + if err := tm.writeAny(w, key, props.mkeyprop); err != nil { + return err + } + if err := w.WriteByte('\n'); err != nil { + return err + } + // nil values aren't legal, but we can avoid panicking because of them. + if val.Kind() != reflect.Ptr || !val.IsNil() { + // value + if _, err := w.WriteString("value:"); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte(' '); err != nil { + return err + } + } + if err := tm.writeAny(w, val, props.mvalprop); err != nil { + return err + } + if err := w.WriteByte('\n'); err != nil { + return err + } + } + // close struct + w.unindent() + if err := w.WriteByte('>'); err != nil { + return err + } + if err := w.WriteByte('\n'); err != nil { + return err + } + } + continue + } + if props.proto3 && fv.Kind() == reflect.Slice && fv.Len() == 0 { + // empty bytes field + continue + } + if fv.Kind() != reflect.Ptr && fv.Kind() != reflect.Slice { + // proto3 non-repeated scalar field; skip if zero value + if isProto3Zero(fv) { + continue + } + } + + if fv.Kind() == reflect.Interface { + // Check if it is a oneof. + if st.Field(i).Tag.Get("protobuf_oneof") != "" { + // fv is nil, or holds a pointer to generated struct. + // That generated struct has exactly one field, + // which has a protobuf struct tag. + if fv.IsNil() { + continue + } + inner := fv.Elem().Elem() // interface -> *T -> T + tag := inner.Type().Field(0).Tag.Get("protobuf") + props = new(Properties) // Overwrite the outer props var, but not its pointee. + props.Parse(tag) + // Write the value in the oneof, not the oneof itself. + fv = inner.Field(0) + + // Special case to cope with malformed messages gracefully: + // If the value in the oneof is a nil pointer, don't panic + // in writeAny. + if fv.Kind() == reflect.Ptr && fv.IsNil() { + // Use errors.New so writeAny won't render quotes. + msg := errors.New("/* nil */") + fv = reflect.ValueOf(&msg).Elem() + } + } + } + + if err := writeName(w, props); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte(' '); err != nil { + return err + } + } + + // Enums have a String method, so writeAny will work fine. + if err := tm.writeAny(w, fv, props); err != nil { + return err + } + + if err := w.WriteByte('\n'); err != nil { + return err + } + } + + // Extensions (the XXX_extensions field). + pv := sv.Addr() + if _, err := extendable(pv.Interface()); err == nil { + if err := tm.writeExtensions(w, pv); err != nil { + return err + } + } + + return nil +} + +// writeAny writes an arbitrary field. +func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Properties) error { + v = reflect.Indirect(v) + + // Floats have special cases. 
+ if v.Kind() == reflect.Float32 || v.Kind() == reflect.Float64 { + x := v.Float() + var b []byte + switch { + case math.IsInf(x, 1): + b = posInf + case math.IsInf(x, -1): + b = negInf + case math.IsNaN(x): + b = nan + } + if b != nil { + _, err := w.Write(b) + return err + } + // Other values are handled below. + } + + // We don't attempt to serialise every possible value type; only those + // that can occur in protocol buffers. + switch v.Kind() { + case reflect.Slice: + // Should only be a []byte; repeated fields are handled in writeStruct. + if err := writeString(w, string(v.Bytes())); err != nil { + return err + } + case reflect.String: + if err := writeString(w, v.String()); err != nil { + return err + } + case reflect.Struct: + // Required/optional group/message. + var bra, ket byte = '<', '>' + if props != nil && props.Wire == "group" { + bra, ket = '{', '}' + } + if err := w.WriteByte(bra); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte('\n'); err != nil { + return err + } + } + w.indent() + if v.CanAddr() { + // Calling v.Interface on a struct causes the reflect package to + // copy the entire struct. This is racy with the new Marshaler + // since we atomically update the XXX_sizecache. + // + // Thus, we retrieve a pointer to the struct if possible to avoid + // a race since v.Interface on the pointer doesn't copy the struct. + // + // If v is not addressable, then we are not worried about a race + // since it implies that the binary Marshaler cannot possibly be + // mutating this value. + v = v.Addr() + } + if etm, ok := v.Interface().(encoding.TextMarshaler); ok { + text, err := etm.MarshalText() + if err != nil { + return err + } + if _, err = w.Write(text); err != nil { + return err + } + } else { + if v.Kind() == reflect.Ptr { + v = v.Elem() + } + if err := tm.writeStruct(w, v); err != nil { + return err + } + } + w.unindent() + if err := w.WriteByte(ket); err != nil { + return err + } + default: + _, err := fmt.Fprint(w, v.Interface()) + return err + } + return nil +} + +// equivalent to C's isprint. +func isprint(c byte) bool { + return c >= 0x20 && c < 0x7f +} + +// writeString writes a string in the protocol buffer text format. +// It is similar to strconv.Quote except we don't use Go escape sequences, +// we treat the string as a byte sequence, and we use octal escapes. +// These differences are to maintain interoperability with the other +// languages' implementations of the text format. +func writeString(w *textWriter, s string) error { + // use WriteByte here to get any needed indent + if err := w.WriteByte('"'); err != nil { + return err + } + // Loop over the bytes, not the runes. + for i := 0; i < len(s); i++ { + var err error + // Divergence from C++: we don't escape apostrophes. + // There's no need to escape them, and the C++ parser + // copes with a naked apostrophe. 
+ switch c := s[i]; c { + case '\n': + _, err = w.w.Write(backslashN) + case '\r': + _, err = w.w.Write(backslashR) + case '\t': + _, err = w.w.Write(backslashT) + case '"': + _, err = w.w.Write(backslashDQ) + case '\\': + _, err = w.w.Write(backslashBS) + default: + if isprint(c) { + err = w.w.WriteByte(c) + } else { + _, err = fmt.Fprintf(w.w, "\\%03o", c) + } + } + if err != nil { + return err + } + } + return w.WriteByte('"') +} + +func writeUnknownStruct(w *textWriter, data []byte) (err error) { + if !w.compact { + if _, err := fmt.Fprintf(w, "/* %d unknown bytes */\n", len(data)); err != nil { + return err + } + } + b := NewBuffer(data) + for b.index < len(b.buf) { + x, err := b.DecodeVarint() + if err != nil { + _, err := fmt.Fprintf(w, "/* %v */\n", err) + return err + } + wire, tag := x&7, x>>3 + if wire == WireEndGroup { + w.unindent() + if _, err := w.Write(endBraceNewline); err != nil { + return err + } + continue + } + if _, err := fmt.Fprint(w, tag); err != nil { + return err + } + if wire != WireStartGroup { + if err := w.WriteByte(':'); err != nil { + return err + } + } + if !w.compact || wire == WireStartGroup { + if err := w.WriteByte(' '); err != nil { + return err + } + } + switch wire { + case WireBytes: + buf, e := b.DecodeRawBytes(false) + if e == nil { + _, err = fmt.Fprintf(w, "%q", buf) + } else { + _, err = fmt.Fprintf(w, "/* %v */", e) + } + case WireFixed32: + x, err = b.DecodeFixed32() + err = writeUnknownInt(w, x, err) + case WireFixed64: + x, err = b.DecodeFixed64() + err = writeUnknownInt(w, x, err) + case WireStartGroup: + err = w.WriteByte('{') + w.indent() + case WireVarint: + x, err = b.DecodeVarint() + err = writeUnknownInt(w, x, err) + default: + _, err = fmt.Fprintf(w, "/* unknown wire type %d */", wire) + } + if err != nil { + return err + } + if err = w.WriteByte('\n'); err != nil { + return err + } + } + return nil +} + +func writeUnknownInt(w *textWriter, x uint64, err error) error { + if err == nil { + _, err = fmt.Fprint(w, x) + } else { + _, err = fmt.Fprintf(w, "/* %v */", err) + } + return err +} + +type int32Slice []int32 + +func (s int32Slice) Len() int { return len(s) } +func (s int32Slice) Less(i, j int) bool { return s[i] < s[j] } +func (s int32Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] } + +// writeExtensions writes all the extensions in pv. +// pv is assumed to be a pointer to a protocol message struct that is extendable. +func (tm *TextMarshaler) writeExtensions(w *textWriter, pv reflect.Value) error { + emap := extensionMaps[pv.Type().Elem()] + ep, _ := extendable(pv.Interface()) + + // Order the extensions by ID. + // This isn't strictly necessary, but it will give us + // canonical output, which will also make testing easier. + m, mu := ep.extensionsRead() + if m == nil { + return nil + } + mu.Lock() + ids := make([]int32, 0, len(m)) + for id := range m { + ids = append(ids, id) + } + sort.Sort(int32Slice(ids)) + mu.Unlock() + + for _, extNum := range ids { + ext := m[extNum] + var desc *ExtensionDesc + if emap != nil { + desc = emap[extNum] + } + if desc == nil { + // Unknown extension. + if err := writeUnknownStruct(w, ext.enc); err != nil { + return err + } + continue + } + + pb, err := GetExtension(ep, desc) + if err != nil { + return fmt.Errorf("failed getting extension: %v", err) + } + + // Repeated extensions will appear as a slice. 
+ if !desc.repeated() { + if err := tm.writeExtension(w, desc.Name, pb); err != nil { + return err + } + } else { + v := reflect.ValueOf(pb) + for i := 0; i < v.Len(); i++ { + if err := tm.writeExtension(w, desc.Name, v.Index(i).Interface()); err != nil { + return err + } + } + } + } + return nil +} + +func (tm *TextMarshaler) writeExtension(w *textWriter, name string, pb interface{}) error { + if _, err := fmt.Fprintf(w, "[%s]:", name); err != nil { + return err + } + if !w.compact { + if err := w.WriteByte(' '); err != nil { + return err + } + } + if err := tm.writeAny(w, reflect.ValueOf(pb), nil); err != nil { + return err + } + if err := w.WriteByte('\n'); err != nil { + return err + } + return nil +} + +func (w *textWriter) writeIndent() { + if !w.complete { + return + } + remain := w.ind * 2 + for remain > 0 { + n := remain + if n > len(spaces) { + n = len(spaces) + } + w.w.Write(spaces[:n]) + remain -= n + } + w.complete = false +} + +// TextMarshaler is a configurable text format marshaler. +type TextMarshaler struct { + Compact bool // use compact text format (one line). + ExpandAny bool // expand google.protobuf.Any messages of known types +} + +// Marshal writes a given protocol buffer in text format. +// The only errors returned are from w. +func (tm *TextMarshaler) Marshal(w io.Writer, pb Message) error { + val := reflect.ValueOf(pb) + if pb == nil || val.IsNil() { + w.Write([]byte("")) + return nil + } + var bw *bufio.Writer + ww, ok := w.(writer) + if !ok { + bw = bufio.NewWriter(w) + ww = bw + } + aw := &textWriter{ + w: ww, + complete: true, + compact: tm.Compact, + } + + if etm, ok := pb.(encoding.TextMarshaler); ok { + text, err := etm.MarshalText() + if err != nil { + return err + } + if _, err = aw.Write(text); err != nil { + return err + } + if bw != nil { + return bw.Flush() + } + return nil + } + // Dereference the received pointer so we don't have outer < and >. + v := reflect.Indirect(val) + if err := tm.writeStruct(aw, v); err != nil { + return err + } + if bw != nil { + return bw.Flush() + } + return nil +} + +// Text is the same as Marshal, but returns the string directly. +func (tm *TextMarshaler) Text(pb Message) string { + var buf bytes.Buffer + tm.Marshal(&buf, pb) + return buf.String() +} + +var ( + defaultTextMarshaler = TextMarshaler{} + compactTextMarshaler = TextMarshaler{Compact: true} +) + +// TODO: consider removing some of the Marshal functions below. + +// MarshalText writes a given protocol buffer in text format. +// The only errors returned are from w. +func MarshalText(w io.Writer, pb Message) error { return defaultTextMarshaler.Marshal(w, pb) } + +// MarshalTextString is the same as MarshalText, but returns the string directly. +func MarshalTextString(pb Message) string { return defaultTextMarshaler.Text(pb) } + +// CompactText writes a given protocol buffer in compact text format (one line). +func CompactText(w io.Writer, pb Message) error { return compactTextMarshaler.Marshal(w, pb) } + +// CompactTextString is the same as CompactText, but returns the string directly. +func CompactTextString(pb Message) string { return compactTextMarshaler.Text(pb) } diff --git a/vendor/github.com/golang/protobuf/proto/text_parser.go b/vendor/github.com/golang/protobuf/proto/text_parser.go new file mode 100644 index 00000000..0685bae3 --- /dev/null +++ b/vendor/github.com/golang/protobuf/proto/text_parser.go @@ -0,0 +1,880 @@ +// Go support for Protocol Buffers - Google's data interchange format +// +// Copyright 2010 The Go Authors. All rights reserved. 
+// https://github.com/golang/protobuf +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following disclaimer +// in the documentation and/or other materials provided with the +// distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived from +// this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package proto + +// Functions for parsing the Text protocol buffer format. +// TODO: message sets. + +import ( + "encoding" + "errors" + "fmt" + "reflect" + "strconv" + "strings" + "unicode/utf8" +) + +// Error string emitted when deserializing Any and fields are already set +const anyRepeatedlyUnpacked = "Any message unpacked multiple times, or %q already set" + +type ParseError struct { + Message string + Line int // 1-based line number + Offset int // 0-based byte offset from start of input +} + +func (p *ParseError) Error() string { + if p.Line == 1 { + // show offset only for first line + return fmt.Sprintf("line 1.%d: %v", p.Offset, p.Message) + } + return fmt.Sprintf("line %d: %v", p.Line, p.Message) +} + +type token struct { + value string + err *ParseError + line int // line number + offset int // byte number from start of input, not start of line + unquoted string // the unquoted version of value, if it was a quoted string +} + +func (t *token) String() string { + if t.err == nil { + return fmt.Sprintf("%q (line=%d, offset=%d)", t.value, t.line, t.offset) + } + return fmt.Sprintf("parse error: %v", t.err) +} + +type textParser struct { + s string // remaining input + done bool // whether the parsing is finished (success or error) + backed bool // whether back() was called + offset, line int + cur token +} + +func newTextParser(s string) *textParser { + p := new(textParser) + p.s = s + p.line = 1 + p.cur.line = 1 + return p +} + +func (p *textParser) errorf(format string, a ...interface{}) *ParseError { + pe := &ParseError{fmt.Sprintf(format, a...), p.cur.line, p.cur.offset} + p.cur.err = pe + p.done = true + return pe +} + +// Numbers and identifiers are matched by [-+._A-Za-z0-9] +func isIdentOrNumberChar(c byte) bool { + switch { + case 'A' <= c && c <= 'Z', 'a' <= c && c <= 'z': + return true + case '0' <= c && c <= '9': + return true + } + switch c { + case '-', '+', '.', '_': + return true + } + return false +} + +func isWhitespace(c byte) bool { + 
switch c { + case ' ', '\t', '\n', '\r': + return true + } + return false +} + +func isQuote(c byte) bool { + switch c { + case '"', '\'': + return true + } + return false +} + +func (p *textParser) skipWhitespace() { + i := 0 + for i < len(p.s) && (isWhitespace(p.s[i]) || p.s[i] == '#') { + if p.s[i] == '#' { + // comment; skip to end of line or input + for i < len(p.s) && p.s[i] != '\n' { + i++ + } + if i == len(p.s) { + break + } + } + if p.s[i] == '\n' { + p.line++ + } + i++ + } + p.offset += i + p.s = p.s[i:len(p.s)] + if len(p.s) == 0 { + p.done = true + } +} + +func (p *textParser) advance() { + // Skip whitespace + p.skipWhitespace() + if p.done { + return + } + + // Start of non-whitespace + p.cur.err = nil + p.cur.offset, p.cur.line = p.offset, p.line + p.cur.unquoted = "" + switch p.s[0] { + case '<', '>', '{', '}', ':', '[', ']', ';', ',', '/': + // Single symbol + p.cur.value, p.s = p.s[0:1], p.s[1:len(p.s)] + case '"', '\'': + // Quoted string + i := 1 + for i < len(p.s) && p.s[i] != p.s[0] && p.s[i] != '\n' { + if p.s[i] == '\\' && i+1 < len(p.s) { + // skip escaped char + i++ + } + i++ + } + if i >= len(p.s) || p.s[i] != p.s[0] { + p.errorf("unmatched quote") + return + } + unq, err := unquoteC(p.s[1:i], rune(p.s[0])) + if err != nil { + p.errorf("invalid quoted string %s: %v", p.s[0:i+1], err) + return + } + p.cur.value, p.s = p.s[0:i+1], p.s[i+1:len(p.s)] + p.cur.unquoted = unq + default: + i := 0 + for i < len(p.s) && isIdentOrNumberChar(p.s[i]) { + i++ + } + if i == 0 { + p.errorf("unexpected byte %#x", p.s[0]) + return + } + p.cur.value, p.s = p.s[0:i], p.s[i:len(p.s)] + } + p.offset += len(p.cur.value) +} + +var ( + errBadUTF8 = errors.New("proto: bad UTF-8") +) + +func unquoteC(s string, quote rune) (string, error) { + // This is based on C++'s tokenizer.cc. + // Despite its name, this is *not* parsing C syntax. + // For instance, "\0" is an invalid quoted string. + + // Avoid allocation in trivial cases. + simple := true + for _, r := range s { + if r == '\\' || r == quote { + simple = false + break + } + } + if simple { + return s, nil + } + + buf := make([]byte, 0, 3*len(s)/2) + for len(s) > 0 { + r, n := utf8.DecodeRuneInString(s) + if r == utf8.RuneError && n == 1 { + return "", errBadUTF8 + } + s = s[n:] + if r != '\\' { + if r < utf8.RuneSelf { + buf = append(buf, byte(r)) + } else { + buf = append(buf, string(r)...) + } + continue + } + + ch, tail, err := unescape(s) + if err != nil { + return "", err + } + buf = append(buf, ch...) 
+ s = tail + } + return string(buf), nil +} + +func unescape(s string) (ch string, tail string, err error) { + r, n := utf8.DecodeRuneInString(s) + if r == utf8.RuneError && n == 1 { + return "", "", errBadUTF8 + } + s = s[n:] + switch r { + case 'a': + return "\a", s, nil + case 'b': + return "\b", s, nil + case 'f': + return "\f", s, nil + case 'n': + return "\n", s, nil + case 'r': + return "\r", s, nil + case 't': + return "\t", s, nil + case 'v': + return "\v", s, nil + case '?': + return "?", s, nil // trigraph workaround + case '\'', '"', '\\': + return string(r), s, nil + case '0', '1', '2', '3', '4', '5', '6', '7': + if len(s) < 2 { + return "", "", fmt.Errorf(`\%c requires 2 following digits`, r) + } + ss := string(r) + s[:2] + s = s[2:] + i, err := strconv.ParseUint(ss, 8, 8) + if err != nil { + return "", "", fmt.Errorf(`\%s contains non-octal digits`, ss) + } + return string([]byte{byte(i)}), s, nil + case 'x', 'X', 'u', 'U': + var n int + switch r { + case 'x', 'X': + n = 2 + case 'u': + n = 4 + case 'U': + n = 8 + } + if len(s) < n { + return "", "", fmt.Errorf(`\%c requires %d following digits`, r, n) + } + ss := s[:n] + s = s[n:] + i, err := strconv.ParseUint(ss, 16, 64) + if err != nil { + return "", "", fmt.Errorf(`\%c%s contains non-hexadecimal digits`, r, ss) + } + if r == 'x' || r == 'X' { + return string([]byte{byte(i)}), s, nil + } + if i > utf8.MaxRune { + return "", "", fmt.Errorf(`\%c%s is not a valid Unicode code point`, r, ss) + } + return string(i), s, nil + } + return "", "", fmt.Errorf(`unknown escape \%c`, r) +} + +// Back off the parser by one token. Can only be done between calls to next(). +// It makes the next advance() a no-op. +func (p *textParser) back() { p.backed = true } + +// Advances the parser and returns the new current token. +func (p *textParser) next() *token { + if p.backed || p.done { + p.backed = false + return &p.cur + } + p.advance() + if p.done { + p.cur.value = "" + } else if len(p.cur.value) > 0 && isQuote(p.cur.value[0]) { + // Look for multiple quoted strings separated by whitespace, + // and concatenate them. + cat := p.cur + for { + p.skipWhitespace() + if p.done || !isQuote(p.s[0]) { + break + } + p.advance() + if p.cur.err != nil { + return &p.cur + } + cat.value += " " + p.cur.value + cat.unquoted += p.cur.unquoted + } + p.done = false // parser may have seen EOF, but we want to return cat + p.cur = cat + } + return &p.cur +} + +func (p *textParser) consumeToken(s string) error { + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value != s { + p.back() + return p.errorf("expected %q, found %q", s, tok.value) + } + return nil +} + +// Return a RequiredNotSetError indicating which required field was not set. +func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSetError { + st := sv.Type() + sprops := GetProperties(st) + for i := 0; i < st.NumField(); i++ { + if !isNil(sv.Field(i)) { + continue + } + + props := sprops.Prop[i] + if props.Required { + return &RequiredNotSetError{fmt.Sprintf("%v.%v", st, props.OrigName)} + } + } + return &RequiredNotSetError{fmt.Sprintf("%v.", st)} // should not happen +} + +// Returns the index in the struct for the named field, as well as the parsed tag properties. 
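// Illustrative sketch (not part of the upstream file): the text parser matches each
// field token against the original .proto field names (decoderOrigNames), so parsing
// is roughly the inverse of MarshalTextString. A minimal usage sketch, assuming a
// hypothetical generated type pb.Example with fields `name` and `id`:
//
//	msg := &pb.Example{}
//	err := proto.UnmarshalText(`name: "x" id: 42`, msg)
//	// err == nil, msg.GetName() == "x", msg.GetId() == 42
//
// Unknown field names are rejected with an "unknown field name" error, as enforced in
// readStruct below.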
+func structFieldByName(sprops *StructProperties, name string) (int, *Properties, bool) { + i, ok := sprops.decoderOrigNames[name] + if ok { + return i, sprops.Prop[i], true + } + return -1, nil, false +} + +// Consume a ':' from the input stream (if the next token is a colon), +// returning an error if a colon is needed but not present. +func (p *textParser) checkForColon(props *Properties, typ reflect.Type) *ParseError { + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value != ":" { + // Colon is optional when the field is a group or message. + needColon := true + switch props.Wire { + case "group": + needColon = false + case "bytes": + // A "bytes" field is either a message, a string, or a repeated field; + // those three become *T, *string and []T respectively, so we can check for + // this field being a pointer to a non-string. + if typ.Kind() == reflect.Ptr { + // *T or *string + if typ.Elem().Kind() == reflect.String { + break + } + } else if typ.Kind() == reflect.Slice { + // []T or []*T + if typ.Elem().Kind() != reflect.Ptr { + break + } + } else if typ.Kind() == reflect.String { + // The proto3 exception is for a string field, + // which requires a colon. + break + } + needColon = false + } + if needColon { + return p.errorf("expected ':', found %q", tok.value) + } + p.back() + } + return nil +} + +func (p *textParser) readStruct(sv reflect.Value, terminator string) error { + st := sv.Type() + sprops := GetProperties(st) + reqCount := sprops.reqCount + var reqFieldErr error + fieldSet := make(map[string]bool) + // A struct is a sequence of "name: value", terminated by one of + // '>' or '}', or the end of the input. A name may also be + // "[extension]" or "[type/url]". + // + // The whole struct can also be an expanded Any message, like: + // [type/url] < ... struct contents ... > + for { + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value == terminator { + break + } + if tok.value == "[" { + // Looks like an extension or an Any. + // + // TODO: Check whether we need to handle + // namespace rooted names (e.g. ".something.Foo"). + extName, err := p.consumeExtName() + if err != nil { + return err + } + + if s := strings.LastIndex(extName, "/"); s >= 0 { + // If it contains a slash, it's an Any type URL. + messageName := extName[s+1:] + mt := MessageType(messageName) + if mt == nil { + return p.errorf("unrecognized message %q in google.protobuf.Any", messageName) + } + tok = p.next() + if tok.err != nil { + return tok.err + } + // consume an optional colon + if tok.value == ":" { + tok = p.next() + if tok.err != nil { + return tok.err + } + } + var terminator string + switch tok.value { + case "<": + terminator = ">" + case "{": + terminator = "}" + default: + return p.errorf("expected '{' or '<', found %q", tok.value) + } + v := reflect.New(mt.Elem()) + if pe := p.readStruct(v.Elem(), terminator); pe != nil { + return pe + } + b, err := Marshal(v.Interface().(Message)) + if err != nil { + return p.errorf("failed to marshal message of type %q: %v", messageName, err) + } + if fieldSet["type_url"] { + return p.errorf(anyRepeatedlyUnpacked, "type_url") + } + if fieldSet["value"] { + return p.errorf(anyRepeatedlyUnpacked, "value") + } + sv.FieldByName("TypeUrl").SetString(extName) + sv.FieldByName("Value").SetBytes(b) + fieldSet["type_url"] = true + fieldSet["value"] = true + continue + } + + var desc *ExtensionDesc + // This could be faster, but it's functional. + // TODO: Do something smarter than a linear scan. 
+ for _, d := range RegisteredExtensions(reflect.New(st).Interface().(Message)) { + if d.Name == extName { + desc = d + break + } + } + if desc == nil { + return p.errorf("unrecognized extension %q", extName) + } + + props := &Properties{} + props.Parse(desc.Tag) + + typ := reflect.TypeOf(desc.ExtensionType) + if err := p.checkForColon(props, typ); err != nil { + return err + } + + rep := desc.repeated() + + // Read the extension structure, and set it in + // the value we're constructing. + var ext reflect.Value + if !rep { + ext = reflect.New(typ).Elem() + } else { + ext = reflect.New(typ.Elem()).Elem() + } + if err := p.readAny(ext, props); err != nil { + if _, ok := err.(*RequiredNotSetError); !ok { + return err + } + reqFieldErr = err + } + ep := sv.Addr().Interface().(Message) + if !rep { + SetExtension(ep, desc, ext.Interface()) + } else { + old, err := GetExtension(ep, desc) + var sl reflect.Value + if err == nil { + sl = reflect.ValueOf(old) // existing slice + } else { + sl = reflect.MakeSlice(typ, 0, 1) + } + sl = reflect.Append(sl, ext) + SetExtension(ep, desc, sl.Interface()) + } + if err := p.consumeOptionalSeparator(); err != nil { + return err + } + continue + } + + // This is a normal, non-extension field. + name := tok.value + var dst reflect.Value + fi, props, ok := structFieldByName(sprops, name) + if ok { + dst = sv.Field(fi) + } else if oop, ok := sprops.OneofTypes[name]; ok { + // It is a oneof. + props = oop.Prop + nv := reflect.New(oop.Type.Elem()) + dst = nv.Elem().Field(0) + field := sv.Field(oop.Field) + if !field.IsNil() { + return p.errorf("field '%s' would overwrite already parsed oneof '%s'", name, sv.Type().Field(oop.Field).Name) + } + field.Set(nv) + } + if !dst.IsValid() { + return p.errorf("unknown field name %q in %v", name, st) + } + + if dst.Kind() == reflect.Map { + // Consume any colon. + if err := p.checkForColon(props, dst.Type()); err != nil { + return err + } + + // Construct the map if it doesn't already exist. + if dst.IsNil() { + dst.Set(reflect.MakeMap(dst.Type())) + } + key := reflect.New(dst.Type().Key()).Elem() + val := reflect.New(dst.Type().Elem()).Elem() + + // The map entry should be this sequence of tokens: + // < key : KEY value : VALUE > + // However, implementations may omit key or value, and technically + // we should support them in any order. See b/28924776 for a time + // this went wrong. + + tok := p.next() + var terminator string + switch tok.value { + case "<": + terminator = ">" + case "{": + terminator = "}" + default: + return p.errorf("expected '{' or '<', found %q", tok.value) + } + for { + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value == terminator { + break + } + switch tok.value { + case "key": + if err := p.consumeToken(":"); err != nil { + return err + } + if err := p.readAny(key, props.mkeyprop); err != nil { + return err + } + if err := p.consumeOptionalSeparator(); err != nil { + return err + } + case "value": + if err := p.checkForColon(props.mvalprop, dst.Type().Elem()); err != nil { + return err + } + if err := p.readAny(val, props.mvalprop); err != nil { + return err + } + if err := p.consumeOptionalSeparator(); err != nil { + return err + } + default: + p.back() + return p.errorf(`expected "key", "value", or %q, found %q`, terminator, tok.value) + } + } + + dst.SetMapIndex(key, val) + continue + } + + // Check that it's not already set if it's not a repeated field. 
+ if !props.Repeated && fieldSet[name] { + return p.errorf("non-repeated field %q was repeated", name) + } + + if err := p.checkForColon(props, dst.Type()); err != nil { + return err + } + + // Parse into the field. + fieldSet[name] = true + if err := p.readAny(dst, props); err != nil { + if _, ok := err.(*RequiredNotSetError); !ok { + return err + } + reqFieldErr = err + } + if props.Required { + reqCount-- + } + + if err := p.consumeOptionalSeparator(); err != nil { + return err + } + + } + + if reqCount > 0 { + return p.missingRequiredFieldError(sv) + } + return reqFieldErr +} + +// consumeExtName consumes extension name or expanded Any type URL and the +// following ']'. It returns the name or URL consumed. +func (p *textParser) consumeExtName() (string, error) { + tok := p.next() + if tok.err != nil { + return "", tok.err + } + + // If extension name or type url is quoted, it's a single token. + if len(tok.value) > 2 && isQuote(tok.value[0]) && tok.value[len(tok.value)-1] == tok.value[0] { + name, err := unquoteC(tok.value[1:len(tok.value)-1], rune(tok.value[0])) + if err != nil { + return "", err + } + return name, p.consumeToken("]") + } + + // Consume everything up to "]" + var parts []string + for tok.value != "]" { + parts = append(parts, tok.value) + tok = p.next() + if tok.err != nil { + return "", p.errorf("unrecognized type_url or extension name: %s", tok.err) + } + if p.done && tok.value != "]" { + return "", p.errorf("unclosed type_url or extension name") + } + } + return strings.Join(parts, ""), nil +} + +// consumeOptionalSeparator consumes an optional semicolon or comma. +// It is used in readStruct to provide backward compatibility. +func (p *textParser) consumeOptionalSeparator() error { + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value != ";" && tok.value != "," { + p.back() + } + return nil +} + +func (p *textParser) readAny(v reflect.Value, props *Properties) error { + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value == "" { + return p.errorf("unexpected EOF") + } + + switch fv := v; fv.Kind() { + case reflect.Slice: + at := v.Type() + if at.Elem().Kind() == reflect.Uint8 { + // Special case for []byte + if tok.value[0] != '"' && tok.value[0] != '\'' { + // Deliberately written out here, as the error after + // this switch statement would write "invalid []byte: ...", + // which is not as user-friendly. + return p.errorf("invalid string: %v", tok.value) + } + bytes := []byte(tok.unquoted) + fv.Set(reflect.ValueOf(bytes)) + return nil + } + // Repeated field. + if tok.value == "[" { + // Repeated field with list notation, like [1,2,3]. + for { + fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem())) + err := p.readAny(fv.Index(fv.Len()-1), props) + if err != nil { + return err + } + tok := p.next() + if tok.err != nil { + return tok.err + } + if tok.value == "]" { + break + } + if tok.value != "," { + return p.errorf("Expected ']' or ',' found %q", tok.value) + } + } + return nil + } + // One value of the repeated field. + p.back() + fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem())) + return p.readAny(fv.Index(fv.Len()-1), props) + case reflect.Bool: + // true/1/t/True or false/f/0/False. 
+ switch tok.value { + case "true", "1", "t", "True": + fv.SetBool(true) + return nil + case "false", "0", "f", "False": + fv.SetBool(false) + return nil + } + case reflect.Float32, reflect.Float64: + v := tok.value + // Ignore 'f' for compatibility with output generated by C++, but don't + // remove 'f' when the value is "-inf" or "inf". + if strings.HasSuffix(v, "f") && tok.value != "-inf" && tok.value != "inf" { + v = v[:len(v)-1] + } + if f, err := strconv.ParseFloat(v, fv.Type().Bits()); err == nil { + fv.SetFloat(f) + return nil + } + case reflect.Int32: + if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil { + fv.SetInt(x) + return nil + } + + if len(props.Enum) == 0 { + break + } + m, ok := enumValueMaps[props.Enum] + if !ok { + break + } + x, ok := m[tok.value] + if !ok { + break + } + fv.SetInt(int64(x)) + return nil + case reflect.Int64: + if x, err := strconv.ParseInt(tok.value, 0, 64); err == nil { + fv.SetInt(x) + return nil + } + + case reflect.Ptr: + // A basic field (indirected through pointer), or a repeated message/group + p.back() + fv.Set(reflect.New(fv.Type().Elem())) + return p.readAny(fv.Elem(), props) + case reflect.String: + if tok.value[0] == '"' || tok.value[0] == '\'' { + fv.SetString(tok.unquoted) + return nil + } + case reflect.Struct: + var terminator string + switch tok.value { + case "{": + terminator = "}" + case "<": + terminator = ">" + default: + return p.errorf("expected '{' or '<', found %q", tok.value) + } + // TODO: Handle nested messages which implement encoding.TextUnmarshaler. + return p.readStruct(fv, terminator) + case reflect.Uint32: + if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil { + fv.SetUint(uint64(x)) + return nil + } + case reflect.Uint64: + if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil { + fv.SetUint(x) + return nil + } + } + return p.errorf("invalid %v: %v", v.Type(), tok.value) +} + +// UnmarshalText reads a protocol buffer in Text format. UnmarshalText resets pb +// before starting to unmarshal, so any existing data in pb is always removed. +// If a required field is not set and no other error occurs, +// UnmarshalText returns *RequiredNotSetError. +func UnmarshalText(s string, pb Message) error { + if um, ok := pb.(encoding.TextUnmarshaler); ok { + return um.UnmarshalText([]byte(s)) + } + pb.Reset() + v := reflect.ValueOf(pb) + return newTextParser(s).readStruct(v.Elem(), "") +} diff --git a/vendor/github.com/gorilla/context/LICENSE b/vendor/github.com/gorilla/context/LICENSE new file mode 100644 index 00000000..0e5fb872 --- /dev/null +++ b/vendor/github.com/gorilla/context/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2012 Rodrigo Moraes. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/gorilla/context/context.go b/vendor/github.com/gorilla/context/context.go new file mode 100644 index 00000000..81cb128b --- /dev/null +++ b/vendor/github.com/gorilla/context/context.go @@ -0,0 +1,143 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package context + +import ( + "net/http" + "sync" + "time" +) + +var ( + mutex sync.RWMutex + data = make(map[*http.Request]map[interface{}]interface{}) + datat = make(map[*http.Request]int64) +) + +// Set stores a value for a given key in a given request. +func Set(r *http.Request, key, val interface{}) { + mutex.Lock() + if data[r] == nil { + data[r] = make(map[interface{}]interface{}) + datat[r] = time.Now().Unix() + } + data[r][key] = val + mutex.Unlock() +} + +// Get returns a value stored for a given key in a given request. +func Get(r *http.Request, key interface{}) interface{} { + mutex.RLock() + if ctx := data[r]; ctx != nil { + value := ctx[key] + mutex.RUnlock() + return value + } + mutex.RUnlock() + return nil +} + +// GetOk returns stored value and presence state like multi-value return of map access. +func GetOk(r *http.Request, key interface{}) (interface{}, bool) { + mutex.RLock() + if _, ok := data[r]; ok { + value, ok := data[r][key] + mutex.RUnlock() + return value, ok + } + mutex.RUnlock() + return nil, false +} + +// GetAll returns all stored values for the request as a map. Nil is returned for invalid requests. +func GetAll(r *http.Request) map[interface{}]interface{} { + mutex.RLock() + if context, ok := data[r]; ok { + result := make(map[interface{}]interface{}, len(context)) + for k, v := range context { + result[k] = v + } + mutex.RUnlock() + return result + } + mutex.RUnlock() + return nil +} + +// GetAllOk returns all stored values for the request as a map and a boolean value that indicates if +// the request was registered. +func GetAllOk(r *http.Request) (map[interface{}]interface{}, bool) { + mutex.RLock() + context, ok := data[r] + result := make(map[interface{}]interface{}, len(context)) + for k, v := range context { + result[k] = v + } + mutex.RUnlock() + return result, ok +} + +// Delete removes a value stored for a given key in a given request. +func Delete(r *http.Request, key interface{}) { + mutex.Lock() + if data[r] != nil { + delete(data[r], key) + } + mutex.Unlock() +} + +// Clear removes all values stored for a given request. +// +// This is usually called by a handler wrapper to clean up request +// variables at the end of a request lifetime. See ClearHandler(). +func Clear(r *http.Request) { + mutex.Lock() + clear(r) + mutex.Unlock() +} + +// clear is Clear without the lock. 
+func clear(r *http.Request) {
+	delete(data, r)
+	delete(datat, r)
+}
+
+// Purge removes request data stored for longer than maxAge, in seconds.
+// It returns the amount of requests removed.
+//
+// If maxAge <= 0, all request data is removed.
+//
+// This is only used for sanity check: in case context cleaning was not
+// properly set some request data can be kept forever, consuming an increasing
+// amount of memory. In case this is detected, Purge() must be called
+// periodically until the problem is fixed.
+func Purge(maxAge int) int {
+	mutex.Lock()
+	count := 0
+	if maxAge <= 0 {
+		count = len(data)
+		data = make(map[*http.Request]map[interface{}]interface{})
+		datat = make(map[*http.Request]int64)
+	} else {
+		min := time.Now().Unix() - int64(maxAge)
+		for r := range data {
+			if datat[r] < min {
+				clear(r)
+				count++
+			}
+		}
+	}
+	mutex.Unlock()
+	return count
+}
+
+// ClearHandler wraps an http.Handler and clears request values at the end
+// of a request lifetime.
+func ClearHandler(h http.Handler) http.Handler {
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		defer Clear(r)
+		h.ServeHTTP(w, r)
+	})
+}
diff --git a/vendor/github.com/gorilla/context/doc.go b/vendor/github.com/gorilla/context/doc.go
new file mode 100644
index 00000000..448d1bfc
--- /dev/null
+++ b/vendor/github.com/gorilla/context/doc.go
@@ -0,0 +1,88 @@
+// Copyright 2012 The Gorilla Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+/*
+Package context stores values shared during a request lifetime.
+
+Note: gorilla/context, having been born well before `context.Context` existed,
+does not play well with the shallow copying of the request that
+[`http.Request.WithContext`](https://golang.org/pkg/net/http/#Request.WithContext)
+(added to net/http Go 1.7 onwards) performs. You should either use *just*
+gorilla/context, or moving forward, the new `http.Request.Context()`.
+
+For example, a router can set variables extracted from the URL and later
+application handlers can access those values, or it can be used to store
+session values to be saved at the end of a request. There are several
+other common uses.
+
+The idea was posted by Brad Fitzpatrick to the go-nuts mailing list:
+
+	http://groups.google.com/group/golang-nuts/msg/e2d679d303aa5d53
+
+Here's the basic usage: first define the keys that you will need. The key
+type is interface{} so a key can be of any type that supports equality.
+Here we define a key using a custom int type to avoid name collisions:
+
+	package foo
+
+	import (
+		"github.com/gorilla/context"
+	)
+
+	type key int
+
+	const MyKey key = 0
+
+Then set a variable. Variables are bound to an http.Request object, so you
+need a request instance to set a value:
+
+	context.Set(r, MyKey, "bar")
+
+The application can later access the variable using the same key you provided:
+
+	func MyHandler(w http.ResponseWriter, r *http.Request) {
+		// val is "bar".
+		val := context.Get(r, foo.MyKey)
+
+		// returns ("bar", true)
+		val, ok := context.GetOk(r, foo.MyKey)
+		// ...
+	}
+
+And that's all about the basic usage. We discuss some other ideas below.
+
+Any type can be stored in the context. To enforce a given type, make the key
+private and wrap Get() and Set() to accept and return values of a specific
+type:
+
+	type key int
+
+	const mykey key = 0
+
+	// GetMyKey returns a value for this package from the request values.
+ func GetMyKey(r *http.Request) SomeType { + if rv := context.Get(r, mykey); rv != nil { + return rv.(SomeType) + } + return nil + } + + // SetMyKey sets a value for this package in the request values. + func SetMyKey(r *http.Request, val SomeType) { + context.Set(r, mykey, val) + } + +Variables must be cleared at the end of a request, to remove all values +that were stored. This can be done in an http.Handler, after a request was +served. Just call Clear() passing the request: + + context.Clear(r) + +...or use ClearHandler(), which conveniently wraps an http.Handler to clear +variables at the end of a request lifetime. + +The Routers from the packages gorilla/mux and gorilla/pat call Clear() +so if you are using either of them you don't need to clear the context manually. +*/ +package context diff --git a/vendor/github.com/gorilla/mux/LICENSE b/vendor/github.com/gorilla/mux/LICENSE new file mode 100644 index 00000000..0e5fb872 --- /dev/null +++ b/vendor/github.com/gorilla/mux/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2012 Rodrigo Moraes. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/vendor/github.com/gorilla/mux/context_gorilla.go b/vendor/github.com/gorilla/mux/context_gorilla.go new file mode 100644 index 00000000..d7adaa8f --- /dev/null +++ b/vendor/github.com/gorilla/mux/context_gorilla.go @@ -0,0 +1,26 @@ +// +build !go1.7 + +package mux + +import ( + "net/http" + + "github.com/gorilla/context" +) + +func contextGet(r *http.Request, key interface{}) interface{} { + return context.Get(r, key) +} + +func contextSet(r *http.Request, key, val interface{}) *http.Request { + if val == nil { + return r + } + + context.Set(r, key, val) + return r +} + +func contextClear(r *http.Request) { + context.Clear(r) +} diff --git a/vendor/github.com/gorilla/mux/context_native.go b/vendor/github.com/gorilla/mux/context_native.go new file mode 100644 index 00000000..209cbea7 --- /dev/null +++ b/vendor/github.com/gorilla/mux/context_native.go @@ -0,0 +1,24 @@ +// +build go1.7 + +package mux + +import ( + "context" + "net/http" +) + +func contextGet(r *http.Request, key interface{}) interface{} { + return r.Context().Value(key) +} + +func contextSet(r *http.Request, key, val interface{}) *http.Request { + if val == nil { + return r + } + + return r.WithContext(context.WithValue(r.Context(), key, val)) +} + +func contextClear(r *http.Request) { + return +} diff --git a/vendor/github.com/gorilla/mux/doc.go b/vendor/github.com/gorilla/mux/doc.go new file mode 100644 index 00000000..38957dee --- /dev/null +++ b/vendor/github.com/gorilla/mux/doc.go @@ -0,0 +1,306 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* +Package mux implements a request router and dispatcher. + +The name mux stands for "HTTP request multiplexer". Like the standard +http.ServeMux, mux.Router matches incoming requests against a list of +registered routes and calls a handler for the route that matches the URL +or other conditions. The main features are: + + * Requests can be matched based on URL host, path, path prefix, schemes, + header and query values, HTTP methods or using custom matchers. + * URL hosts, paths and query values can have variables with an optional + regular expression. + * Registered URLs can be built, or "reversed", which helps maintaining + references to resources. + * Routes can be used as subrouters: nested routes are only tested if the + parent route matches. This is useful to define groups of routes that + share common conditions like a host, a path prefix or other repeated + attributes. As a bonus, this optimizes request matching. + * It implements the http.Handler interface so it is compatible with the + standard http.ServeMux. + +Let's start registering a couple of URL paths and handlers: + + func main() { + r := mux.NewRouter() + r.HandleFunc("/", HomeHandler) + r.HandleFunc("/products", ProductsHandler) + r.HandleFunc("/articles", ArticlesHandler) + http.Handle("/", r) + } + +Here we register three routes mapping URL paths to handlers. This is +equivalent to how http.HandleFunc() works: if an incoming request URL matches +one of the paths, the corresponding handler is called passing +(http.ResponseWriter, *http.Request) as parameters. + +Paths can have variables. They are defined using the format {name} or +{name:pattern}. If a regular expression pattern is not defined, the matched +variable will be anything until the next slash. 
For example:
+
+	r := mux.NewRouter()
+	r.HandleFunc("/products/{key}", ProductHandler)
+	r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler)
+	r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
+
+Groups can be used inside patterns, as long as they are non-capturing (?:re). For example:
+
+	r.HandleFunc("/articles/{category}/{sort:(?:asc|desc|new)}", ArticlesCategoryHandler)
+
+The names are used to create a map of route variables which can be retrieved
+calling mux.Vars():
+
+	vars := mux.Vars(request)
+	category := vars["category"]
+
+Note that if any capturing groups are present, mux will panic() during parsing. To prevent
+this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to
+"/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably
+when capturing groups were present.
+
+And this is all you need to know about the basic usage. More advanced options
+are explained below.
+
+Routes can also be restricted to a domain or subdomain. Just define a host
+pattern to be matched. They can also have variables:
+
+	r := mux.NewRouter()
+	// Only matches if domain is "www.example.com".
+	r.Host("www.example.com")
+	// Matches a dynamic subdomain.
+	r.Host("{subdomain:[a-z]+}.domain.com")
+
+There are several other matchers that can be added. To match path prefixes:
+
+	r.PathPrefix("/products/")
+
+...or HTTP methods:
+
+	r.Methods("GET", "POST")
+
+...or URL schemes:
+
+	r.Schemes("https")
+
+...or header values:
+
+	r.Headers("X-Requested-With", "XMLHttpRequest")
+
+...or query values:
+
+	r.Queries("key", "value")
+
+...or to use a custom matcher function:
+
+	r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool {
+		return r.ProtoMajor == 0
+	})
+
+...and finally, it is possible to combine several matchers in a single route:
+
+	r.HandleFunc("/products", ProductsHandler).
+		Host("www.example.com").
+		Methods("GET").
+		Schemes("http")
+
+Setting the same matching conditions again and again can be boring, so we have
+a way to group several routes that share the same requirements.
+We call it "subrouting".
+
+For example, let's say we have several URLs that should only match when the
+host is "www.example.com". Create a route for that host and get a "subrouter"
+from it:
+
+	r := mux.NewRouter()
+	s := r.Host("www.example.com").Subrouter()
+
+Then register routes in the subrouter:
+
+	s.HandleFunc("/products/", ProductsHandler)
+	s.HandleFunc("/products/{key}", ProductHandler)
+	s.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
+
+The three URL paths we registered above will only be tested if the domain is
+"www.example.com", because the subrouter is tested first. This is not
+only convenient, but also optimizes request matching. You can create
+subrouters combining any attribute matchers accepted by a route.
+
+Subrouters can be used to create domain or path "namespaces": you define
+subrouters in a central place and then parts of the app can register their
+paths relative to a given subrouter.
+
+There's one more thing about subroutes.
When a subrouter has a path prefix, +the inner routes use it as base for their paths: + + r := mux.NewRouter() + s := r.PathPrefix("/products").Subrouter() + // "/products/" + s.HandleFunc("/", ProductsHandler) + // "/products/{key}/" + s.HandleFunc("/{key}/", ProductHandler) + // "/products/{key}/details" + s.HandleFunc("/{key}/details", ProductDetailsHandler) + +Note that the path provided to PathPrefix() represents a "wildcard": calling +PathPrefix("/static/").Handler(...) means that the handler will be passed any +request that matches "/static/*". This makes it easy to serve static files with mux: + + func main() { + var dir string + + flag.StringVar(&dir, "dir", ".", "the directory to serve files from. Defaults to the current dir") + flag.Parse() + r := mux.NewRouter() + + // This will serve files under http://localhost:8000/static/ + r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.Dir(dir)))) + + srv := &http.Server{ + Handler: r, + Addr: "127.0.0.1:8000", + // Good practice: enforce timeouts for servers you create! + WriteTimeout: 15 * time.Second, + ReadTimeout: 15 * time.Second, + } + + log.Fatal(srv.ListenAndServe()) + } + +Now let's see how to build registered URLs. + +Routes can be named. All routes that define a name can have their URLs built, +or "reversed". We define a name calling Name() on a route. For example: + + r := mux.NewRouter() + r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). + Name("article") + +To build a URL, get the route and call the URL() method, passing a sequence of +key/value pairs for the route variables. For the previous route, we would do: + + url, err := r.Get("article").URL("category", "technology", "id", "42") + +...and the result will be a url.URL with the following path: + + "/articles/technology/42" + +This also works for host and query value variables: + + r := mux.NewRouter() + r.Host("{subdomain}.domain.com"). + Path("/articles/{category}/{id:[0-9]+}"). + Queries("filter", "{filter}"). + HandlerFunc(ArticleHandler). + Name("article") + + // url.String() will be "http://news.domain.com/articles/technology/42?filter=gorilla" + url, err := r.Get("article").URL("subdomain", "news", + "category", "technology", + "id", "42", + "filter", "gorilla") + +All variables defined in the route are required, and their values must +conform to the corresponding patterns. These requirements guarantee that a +generated URL will always match a registered route -- the only exception is +for explicitly defined "build-only" routes which never match. + +Regex support also exists for matching Headers within a route. For example, we could do: + + r.HeadersRegexp("Content-Type", "application/(text|json)") + +...and the route will match both requests with a Content-Type of `application/json` as well as +`application/text` + +There's also a way to build only the URL host or path for a route: +use the methods URLHost() or URLPath() instead. For the previous route, +we would do: + + // "http://news.domain.com/" + host, err := r.Get("article").URLHost("subdomain", "news") + + // "/articles/technology/42" + path, err := r.Get("article").URLPath("category", "technology", "id", "42") + +And if you use subrouters, host and path defined separately can be built +as well: + + r := mux.NewRouter() + s := r.Host("{subdomain}.domain.com").Subrouter() + s.Path("/articles/{category}/{id:[0-9]+}"). + HandlerFunc(ArticleHandler). 
+ Name("article") + + // "http://news.domain.com/articles/technology/42" + url, err := r.Get("article").URL("subdomain", "news", + "category", "technology", + "id", "42") + +Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking. + + type MiddlewareFunc func(http.Handler) http.Handler + +Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). + +A very basic middleware which logs the URI of the request being handled could be written as: + + func simpleMw(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + // Do stuff here + log.Println(r.RequestURI) + // Call the next handler, which can be another middleware in the chain, or the final handler. + next.ServeHTTP(w, r) + }) + } + +Middlewares can be added to a router using `Router.Use()`: + + r := mux.NewRouter() + r.HandleFunc("/", handler) + r.Use(simpleMw) + +A more complex authentication middleware, which maps session token to users, could be written as: + + // Define our struct + type authenticationMiddleware struct { + tokenUsers map[string]string + } + + // Initialize it somewhere + func (amw *authenticationMiddleware) Populate() { + amw.tokenUsers["00000000"] = "user0" + amw.tokenUsers["aaaaaaaa"] = "userA" + amw.tokenUsers["05f717e5"] = "randomUser" + amw.tokenUsers["deadbeef"] = "user0" + } + + // Middleware function, which will be called for each request + func (amw *authenticationMiddleware) Middleware(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + token := r.Header.Get("X-Session-Token") + + if user, found := amw.tokenUsers[token]; found { + // We found the token in our map + log.Printf("Authenticated user %s\n", user) + next.ServeHTTP(w, r) + } else { + http.Error(w, "Forbidden", http.StatusForbidden) + } + }) + } + + r := mux.NewRouter() + r.HandleFunc("/", handler) + + amw := authenticationMiddleware{} + amw.Populate() + + r.Use(amw.Middleware) + +Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to. + +*/ +package mux diff --git a/vendor/github.com/gorilla/mux/middleware.go b/vendor/github.com/gorilla/mux/middleware.go new file mode 100644 index 00000000..ceb812ce --- /dev/null +++ b/vendor/github.com/gorilla/mux/middleware.go @@ -0,0 +1,72 @@ +package mux + +import ( + "net/http" + "strings" +) + +// MiddlewareFunc is a function which receives an http.Handler and returns another http.Handler. +// Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed +// to it, and then calls the handler passed as parameter to the MiddlewareFunc. +type MiddlewareFunc func(http.Handler) http.Handler + +// middleware interface is anything which implements a MiddlewareFunc named Middleware. 
+type middleware interface { + Middleware(handler http.Handler) http.Handler +} + +// Middleware allows MiddlewareFunc to implement the middleware interface. +func (mw MiddlewareFunc) Middleware(handler http.Handler) http.Handler { + return mw(handler) +} + +// Use appends a MiddlewareFunc to the chain. Middleware can be used to intercept or otherwise modify requests and/or responses, and are executed in the order that they are applied to the Router. +func (r *Router) Use(mwf ...MiddlewareFunc) { + for _, fn := range mwf { + r.middlewares = append(r.middlewares, fn) + } +} + +// useInterface appends a middleware to the chain. Middleware can be used to intercept or otherwise modify requests and/or responses, and are executed in the order that they are applied to the Router. +func (r *Router) useInterface(mw middleware) { + r.middlewares = append(r.middlewares, mw) +} + +// CORSMethodMiddleware sets the Access-Control-Allow-Methods response header +// on a request, by matching routes based only on paths. It also handles +// OPTIONS requests, by settings Access-Control-Allow-Methods, and then +// returning without calling the next http handler. +func CORSMethodMiddleware(r *Router) MiddlewareFunc { + return func(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { + var allMethods []string + + err := r.Walk(func(route *Route, _ *Router, _ []*Route) error { + for _, m := range route.matchers { + if _, ok := m.(*routeRegexp); ok { + if m.Match(req, &RouteMatch{}) { + methods, err := route.GetMethods() + if err != nil { + return err + } + + allMethods = append(allMethods, methods...) + } + break + } + } + return nil + }) + + if err == nil { + w.Header().Set("Access-Control-Allow-Methods", strings.Join(append(allMethods, "OPTIONS"), ",")) + + if req.Method == "OPTIONS" { + return + } + } + + next.ServeHTTP(w, req) + }) + } +} diff --git a/vendor/github.com/gorilla/mux/mux.go b/vendor/github.com/gorilla/mux/mux.go new file mode 100644 index 00000000..4bbafa51 --- /dev/null +++ b/vendor/github.com/gorilla/mux/mux.go @@ -0,0 +1,588 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package mux + +import ( + "errors" + "fmt" + "net/http" + "path" + "regexp" +) + +var ( + // ErrMethodMismatch is returned when the method in the request does not match + // the method defined against the route. + ErrMethodMismatch = errors.New("method is not allowed") + // ErrNotFound is returned when no route match is found. + ErrNotFound = errors.New("no matching route was found") +) + +// NewRouter returns a new router instance. +func NewRouter() *Router { + return &Router{namedRoutes: make(map[string]*Route), KeepContext: false} +} + +// Router registers routes to be matched and dispatches a handler. +// +// It implements the http.Handler interface, so it can be registered to serve +// requests: +// +// var router = mux.NewRouter() +// +// func main() { +// http.Handle("/", router) +// } +// +// Or, for Google App Engine, register it in a init() function: +// +// func init() { +// http.Handle("/", router) +// } +// +// This will send all incoming requests to the router. +type Router struct { + // Configurable Handler to be used when no route matches. + NotFoundHandler http.Handler + + // Configurable Handler to be used when the request method does not match the route. 
+ MethodNotAllowedHandler http.Handler + + // Parent route, if this is a subrouter. + parent parentRoute + // Routes to be matched, in order. + routes []*Route + // Routes by name for URL building. + namedRoutes map[string]*Route + // See Router.StrictSlash(). This defines the flag for new routes. + strictSlash bool + // See Router.SkipClean(). This defines the flag for new routes. + skipClean bool + // If true, do not clear the request context after handling the request. + // This has no effect when go1.7+ is used, since the context is stored + // on the request itself. + KeepContext bool + // see Router.UseEncodedPath(). This defines a flag for all routes. + useEncodedPath bool + // Slice of middlewares to be called after a match is found + middlewares []middleware +} + +// Match attempts to match the given request against the router's registered routes. +// +// If the request matches a route of this router or one of its subrouters the Route, +// Handler, and Vars fields of the the match argument are filled and this function +// returns true. +// +// If the request does not match any of this router's or its subrouters' routes +// then this function returns false. If available, a reason for the match failure +// will be filled in the match argument's MatchErr field. If the match failure type +// (eg: not found) has a registered handler, the handler is assigned to the Handler +// field of the match argument. +func (r *Router) Match(req *http.Request, match *RouteMatch) bool { + for _, route := range r.routes { + if route.Match(req, match) { + // Build middleware chain if no error was found + if match.MatchErr == nil { + for i := len(r.middlewares) - 1; i >= 0; i-- { + match.Handler = r.middlewares[i].Middleware(match.Handler) + } + } + return true + } + } + + if match.MatchErr == ErrMethodMismatch { + if r.MethodNotAllowedHandler != nil { + match.Handler = r.MethodNotAllowedHandler + return true + } + + return false + } + + // Closest match for a router (includes sub-routers) + if r.NotFoundHandler != nil { + match.Handler = r.NotFoundHandler + match.MatchErr = ErrNotFound + return true + } + + match.MatchErr = ErrNotFound + return false +} + +// ServeHTTP dispatches the handler registered in the matched route. +// +// When there is a match, the route variables can be retrieved calling +// mux.Vars(request). +func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) { + if !r.skipClean { + path := req.URL.Path + if r.useEncodedPath { + path = req.URL.EscapedPath() + } + // Clean path to canonical form and redirect. + if p := cleanPath(path); p != path { + + // Added 3 lines (Philip Schlump) - It was dropping the query string and #whatever from query. + // This matches with fix in go 1.2 r.c. 4 for same problem. Go Issue: + // http://code.google.com/p/go/issues/detail?id=5252 + url := *req.URL + url.Path = p + p = url.String() + + w.Header().Set("Location", p) + w.WriteHeader(http.StatusMovedPermanently) + return + } + } + var match RouteMatch + var handler http.Handler + if r.Match(req, &match) { + handler = match.Handler + req = setVars(req, match.Vars) + req = setCurrentRoute(req, match.Route) + } + + if handler == nil && match.MatchErr == ErrMethodMismatch { + handler = methodNotAllowedHandler() + } + + if handler == nil { + handler = http.NotFoundHandler() + } + + if !r.KeepContext { + defer contextClear(req) + } + + handler.ServeHTTP(w, req) +} + +// Get returns a route registered with the given name. 
+func (r *Router) Get(name string) *Route { + return r.getNamedRoutes()[name] +} + +// GetRoute returns a route registered with the given name. This method +// was renamed to Get() and remains here for backwards compatibility. +func (r *Router) GetRoute(name string) *Route { + return r.getNamedRoutes()[name] +} + +// StrictSlash defines the trailing slash behavior for new routes. The initial +// value is false. +// +// When true, if the route path is "/path/", accessing "/path" will perform a redirect +// to the former and vice versa. In other words, your application will always +// see the path as specified in the route. +// +// When false, if the route path is "/path", accessing "/path/" will not match +// this route and vice versa. +// +// The re-direct is a HTTP 301 (Moved Permanently). Note that when this is set for +// routes with a non-idempotent method (e.g. POST, PUT), the subsequent re-directed +// request will be made as a GET by most clients. Use middleware or client settings +// to modify this behaviour as needed. +// +// Special case: when a route sets a path prefix using the PathPrefix() method, +// strict slash is ignored for that route because the redirect behavior can't +// be determined from a prefix alone. However, any subrouters created from that +// route inherit the original StrictSlash setting. +func (r *Router) StrictSlash(value bool) *Router { + r.strictSlash = value + return r +} + +// SkipClean defines the path cleaning behaviour for new routes. The initial +// value is false. Users should be careful about which routes are not cleaned +// +// When true, if the route path is "/path//to", it will remain with the double +// slash. This is helpful if you have a route like: /fetch/http://xkcd.com/534/ +// +// When false, the path will be cleaned, so /fetch/http://xkcd.com/534/ will +// become /fetch/http/xkcd.com/534 +func (r *Router) SkipClean(value bool) *Router { + r.skipClean = value + return r +} + +// UseEncodedPath tells the router to match the encoded original path +// to the routes. +// For eg. "/path/foo%2Fbar/to" will match the path "/path/{var}/to". +// +// If not called, the router will match the unencoded path to the routes. +// For eg. "/path/foo%2Fbar/to" will match the path "/path/foo/bar/to" +func (r *Router) UseEncodedPath() *Router { + r.useEncodedPath = true + return r +} + +// ---------------------------------------------------------------------------- +// parentRoute +// ---------------------------------------------------------------------------- + +func (r *Router) getBuildScheme() string { + if r.parent != nil { + return r.parent.getBuildScheme() + } + return "" +} + +// getNamedRoutes returns the map where named routes are registered. +func (r *Router) getNamedRoutes() map[string]*Route { + if r.namedRoutes == nil { + if r.parent != nil { + r.namedRoutes = r.parent.getNamedRoutes() + } else { + r.namedRoutes = make(map[string]*Route) + } + } + return r.namedRoutes +} + +// getRegexpGroup returns regexp definitions from the parent route, if any. +func (r *Router) getRegexpGroup() *routeRegexpGroup { + if r.parent != nil { + return r.parent.getRegexpGroup() + } + return nil +} + +func (r *Router) buildVars(m map[string]string) map[string]string { + if r.parent != nil { + m = r.parent.buildVars(m) + } + return m +} + +// ---------------------------------------------------------------------------- +// Route factories +// ---------------------------------------------------------------------------- + +// NewRoute registers an empty route. 
+func (r *Router) NewRoute() *Route { + route := &Route{parent: r, strictSlash: r.strictSlash, skipClean: r.skipClean, useEncodedPath: r.useEncodedPath} + r.routes = append(r.routes, route) + return route +} + +// Handle registers a new route with a matcher for the URL path. +// See Route.Path() and Route.Handler(). +func (r *Router) Handle(path string, handler http.Handler) *Route { + return r.NewRoute().Path(path).Handler(handler) +} + +// HandleFunc registers a new route with a matcher for the URL path. +// See Route.Path() and Route.HandlerFunc(). +func (r *Router) HandleFunc(path string, f func(http.ResponseWriter, + *http.Request)) *Route { + return r.NewRoute().Path(path).HandlerFunc(f) +} + +// Headers registers a new route with a matcher for request header values. +// See Route.Headers(). +func (r *Router) Headers(pairs ...string) *Route { + return r.NewRoute().Headers(pairs...) +} + +// Host registers a new route with a matcher for the URL host. +// See Route.Host(). +func (r *Router) Host(tpl string) *Route { + return r.NewRoute().Host(tpl) +} + +// MatcherFunc registers a new route with a custom matcher function. +// See Route.MatcherFunc(). +func (r *Router) MatcherFunc(f MatcherFunc) *Route { + return r.NewRoute().MatcherFunc(f) +} + +// Methods registers a new route with a matcher for HTTP methods. +// See Route.Methods(). +func (r *Router) Methods(methods ...string) *Route { + return r.NewRoute().Methods(methods...) +} + +// Path registers a new route with a matcher for the URL path. +// See Route.Path(). +func (r *Router) Path(tpl string) *Route { + return r.NewRoute().Path(tpl) +} + +// PathPrefix registers a new route with a matcher for the URL path prefix. +// See Route.PathPrefix(). +func (r *Router) PathPrefix(tpl string) *Route { + return r.NewRoute().PathPrefix(tpl) +} + +// Queries registers a new route with a matcher for URL query values. +// See Route.Queries(). +func (r *Router) Queries(pairs ...string) *Route { + return r.NewRoute().Queries(pairs...) +} + +// Schemes registers a new route with a matcher for URL schemes. +// See Route.Schemes(). +func (r *Router) Schemes(schemes ...string) *Route { + return r.NewRoute().Schemes(schemes...) +} + +// BuildVarsFunc registers a new route with a custom function for modifying +// route variables before building a URL. +func (r *Router) BuildVarsFunc(f BuildVarsFunc) *Route { + return r.NewRoute().BuildVarsFunc(f) +} + +// Walk walks the router and all its sub-routers, calling walkFn for each route +// in the tree. The routes are walked in the order they were added. Sub-routers +// are explored depth-first. +func (r *Router) Walk(walkFn WalkFunc) error { + return r.walk(walkFn, []*Route{}) +} + +// SkipRouter is used as a return value from WalkFuncs to indicate that the +// router that walk is about to descend down to should be skipped. +var SkipRouter = errors.New("skip this router") + +// WalkFunc is the type of the function called for each route visited by Walk. +// At every invocation, it is given the current route, and the current router, +// and a list of ancestor routes that lead to the current route. 
+type WalkFunc func(route *Route, router *Router, ancestors []*Route) error + +func (r *Router) walk(walkFn WalkFunc, ancestors []*Route) error { + for _, t := range r.routes { + err := walkFn(t, r, ancestors) + if err == SkipRouter { + continue + } + if err != nil { + return err + } + for _, sr := range t.matchers { + if h, ok := sr.(*Router); ok { + ancestors = append(ancestors, t) + err := h.walk(walkFn, ancestors) + if err != nil { + return err + } + ancestors = ancestors[:len(ancestors)-1] + } + } + if h, ok := t.handler.(*Router); ok { + ancestors = append(ancestors, t) + err := h.walk(walkFn, ancestors) + if err != nil { + return err + } + ancestors = ancestors[:len(ancestors)-1] + } + } + return nil +} + +// ---------------------------------------------------------------------------- +// Context +// ---------------------------------------------------------------------------- + +// RouteMatch stores information about a matched route. +type RouteMatch struct { + Route *Route + Handler http.Handler + Vars map[string]string + + // MatchErr is set to appropriate matching error + // It is set to ErrMethodMismatch if there is a mismatch in + // the request method and route method + MatchErr error +} + +type contextKey int + +const ( + varsKey contextKey = iota + routeKey +) + +// Vars returns the route variables for the current request, if any. +func Vars(r *http.Request) map[string]string { + if rv := contextGet(r, varsKey); rv != nil { + return rv.(map[string]string) + } + return nil +} + +// CurrentRoute returns the matched route for the current request, if any. +// This only works when called inside the handler of the matched route +// because the matched route is stored in the request context which is cleared +// after the handler returns, unless the KeepContext option is set on the +// Router. +func CurrentRoute(r *http.Request) *Route { + if rv := contextGet(r, routeKey); rv != nil { + return rv.(*Route) + } + return nil +} + +func setVars(r *http.Request, val interface{}) *http.Request { + return contextSet(r, varsKey, val) +} + +func setCurrentRoute(r *http.Request, val interface{}) *http.Request { + return contextSet(r, routeKey, val) +} + +// ---------------------------------------------------------------------------- +// Helpers +// ---------------------------------------------------------------------------- + +// cleanPath returns the canonical path for p, eliminating . and .. elements. +// Borrowed from the net/http package. +func cleanPath(p string) string { + if p == "" { + return "/" + } + if p[0] != '/' { + p = "/" + p + } + np := path.Clean(p) + // path.Clean removes trailing slash except for root; + // put the trailing slash back if necessary. + if p[len(p)-1] == '/' && np != "/" { + np += "/" + } + + return np +} + +// uniqueVars returns an error if two slices contain duplicated strings. +func uniqueVars(s1, s2 []string) error { + for _, v1 := range s1 { + for _, v2 := range s2 { + if v1 == v2 { + return fmt.Errorf("mux: duplicated route variable %q", v2) + } + } + } + return nil +} + +// checkPairs returns the count of strings passed in, and an error if +// the count is not an even number. +func checkPairs(pairs ...string) (int, error) { + length := len(pairs) + if length%2 != 0 { + return length, fmt.Errorf( + "mux: number of parameters must be multiple of 2, got %v", pairs) + } + return length, nil +} + +// mapFromPairsToString converts variadic string parameters to a +// string to string map. 
+func mapFromPairsToString(pairs ...string) (map[string]string, error) { + length, err := checkPairs(pairs...) + if err != nil { + return nil, err + } + m := make(map[string]string, length/2) + for i := 0; i < length; i += 2 { + m[pairs[i]] = pairs[i+1] + } + return m, nil +} + +// mapFromPairsToRegex converts variadic string parameters to a +// string to regex map. +func mapFromPairsToRegex(pairs ...string) (map[string]*regexp.Regexp, error) { + length, err := checkPairs(pairs...) + if err != nil { + return nil, err + } + m := make(map[string]*regexp.Regexp, length/2) + for i := 0; i < length; i += 2 { + regex, err := regexp.Compile(pairs[i+1]) + if err != nil { + return nil, err + } + m[pairs[i]] = regex + } + return m, nil +} + +// matchInArray returns true if the given string value is in the array. +func matchInArray(arr []string, value string) bool { + for _, v := range arr { + if v == value { + return true + } + } + return false +} + +// matchMapWithString returns true if the given key/value pairs exist in a given map. +func matchMapWithString(toCheck map[string]string, toMatch map[string][]string, canonicalKey bool) bool { + for k, v := range toCheck { + // Check if key exists. + if canonicalKey { + k = http.CanonicalHeaderKey(k) + } + if values := toMatch[k]; values == nil { + return false + } else if v != "" { + // If value was defined as an empty string we only check that the + // key exists. Otherwise we also check for equality. + valueExists := false + for _, value := range values { + if v == value { + valueExists = true + break + } + } + if !valueExists { + return false + } + } + } + return true +} + +// matchMapWithRegex returns true if the given key/value pairs exist in a given map compiled against +// the given regex +func matchMapWithRegex(toCheck map[string]*regexp.Regexp, toMatch map[string][]string, canonicalKey bool) bool { + for k, v := range toCheck { + // Check if key exists. + if canonicalKey { + k = http.CanonicalHeaderKey(k) + } + if values := toMatch[k]; values == nil { + return false + } else if v != nil { + // If value was defined as an empty string we only check that the + // key exists. Otherwise we also check for equality. + valueExists := false + for _, value := range values { + if v.MatchString(value) { + valueExists = true + break + } + } + if !valueExists { + return false + } + } + } + return true +} + +// methodNotAllowed replies to the request with an HTTP status code 405. +func methodNotAllowed(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusMethodNotAllowed) +} + +// methodNotAllowedHandler returns a simple request handler +// that replies to each request with a status code 405. +func methodNotAllowedHandler() http.Handler { return http.HandlerFunc(methodNotAllowed) } diff --git a/vendor/github.com/gorilla/mux/regexp.go b/vendor/github.com/gorilla/mux/regexp.go new file mode 100644 index 00000000..2b57e562 --- /dev/null +++ b/vendor/github.com/gorilla/mux/regexp.go @@ -0,0 +1,332 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package mux + +import ( + "bytes" + "fmt" + "net/http" + "net/url" + "regexp" + "strconv" + "strings" +) + +type routeRegexpOptions struct { + strictSlash bool + useEncodedPath bool +} + +type regexpType int + +const ( + regexpTypePath regexpType = 0 + regexpTypeHost regexpType = 1 + regexpTypePrefix regexpType = 2 + regexpTypeQuery regexpType = 3 +) + +// newRouteRegexp parses a route template and returns a routeRegexp, +// used to match a host, a path or a query string. +// +// It will extract named variables, assemble a regexp to be matched, create +// a "reverse" template to build URLs and compile regexps to validate variable +// values used in URL building. +// +// Previously we accepted only Python-like identifiers for variable +// names ([a-zA-Z_][a-zA-Z0-9_]*), but currently the only restriction is that +// name and pattern can't be empty, and names can't contain a colon. +func newRouteRegexp(tpl string, typ regexpType, options routeRegexpOptions) (*routeRegexp, error) { + // Check if it is well-formed. + idxs, errBraces := braceIndices(tpl) + if errBraces != nil { + return nil, errBraces + } + // Backup the original. + template := tpl + // Now let's parse it. + defaultPattern := "[^/]+" + if typ == regexpTypeQuery { + defaultPattern = ".*" + } else if typ == regexpTypeHost { + defaultPattern = "[^.]+" + } + // Only match strict slash if not matching + if typ != regexpTypePath { + options.strictSlash = false + } + // Set a flag for strictSlash. + endSlash := false + if options.strictSlash && strings.HasSuffix(tpl, "/") { + tpl = tpl[:len(tpl)-1] + endSlash = true + } + varsN := make([]string, len(idxs)/2) + varsR := make([]*regexp.Regexp, len(idxs)/2) + pattern := bytes.NewBufferString("") + pattern.WriteByte('^') + reverse := bytes.NewBufferString("") + var end int + var err error + for i := 0; i < len(idxs); i += 2 { + // Set all values we are interested in. + raw := tpl[end:idxs[i]] + end = idxs[i+1] + parts := strings.SplitN(tpl[idxs[i]+1:end-1], ":", 2) + name := parts[0] + patt := defaultPattern + if len(parts) == 2 { + patt = parts[1] + } + // Name or pattern can't be empty. + if name == "" || patt == "" { + return nil, fmt.Errorf("mux: missing name or pattern in %q", + tpl[idxs[i]:end]) + } + // Build the regexp pattern. + fmt.Fprintf(pattern, "%s(?P<%s>%s)", regexp.QuoteMeta(raw), varGroupName(i/2), patt) + + // Build the reverse template. + fmt.Fprintf(reverse, "%s%%s", raw) + + // Append variable name and compiled pattern. + varsN[i/2] = name + varsR[i/2], err = regexp.Compile(fmt.Sprintf("^%s$", patt)) + if err != nil { + return nil, err + } + } + // Add the remaining. + raw := tpl[end:] + pattern.WriteString(regexp.QuoteMeta(raw)) + if options.strictSlash { + pattern.WriteString("[/]?") + } + if typ == regexpTypeQuery { + // Add the default pattern if the query value is empty + if queryVal := strings.SplitN(template, "=", 2)[1]; queryVal == "" { + pattern.WriteString(defaultPattern) + } + } + if typ != regexpTypePrefix { + pattern.WriteByte('$') + } + reverse.WriteString(raw) + if endSlash { + reverse.WriteByte('/') + } + // Compile full regexp. + reg, errCompile := regexp.Compile(pattern.String()) + if errCompile != nil { + return nil, errCompile + } + + // Check for capturing groups which used to work in older versions + if reg.NumSubexp() != len(idxs)/2 { + panic(fmt.Sprintf("route %s contains capture groups in its regexp. ", template) + + "Only non-capturing groups are accepted: e.g. (?:pattern) instead of (pattern)") + } + + // Done! 
+ return &routeRegexp{ + template: template, + regexpType: typ, + options: options, + regexp: reg, + reverse: reverse.String(), + varsN: varsN, + varsR: varsR, + }, nil +} + +// routeRegexp stores a regexp to match a host or path and information to +// collect and validate route variables. +type routeRegexp struct { + // The unmodified template. + template string + // The type of match + regexpType regexpType + // Options for matching + options routeRegexpOptions + // Expanded regexp. + regexp *regexp.Regexp + // Reverse template. + reverse string + // Variable names. + varsN []string + // Variable regexps (validators). + varsR []*regexp.Regexp +} + +// Match matches the regexp against the URL host or path. +func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool { + if r.regexpType != regexpTypeHost { + if r.regexpType == regexpTypeQuery { + return r.matchQueryString(req) + } + path := req.URL.Path + if r.options.useEncodedPath { + path = req.URL.EscapedPath() + } + return r.regexp.MatchString(path) + } + + return r.regexp.MatchString(getHost(req)) +} + +// url builds a URL part using the given values. +func (r *routeRegexp) url(values map[string]string) (string, error) { + urlValues := make([]interface{}, len(r.varsN)) + for k, v := range r.varsN { + value, ok := values[v] + if !ok { + return "", fmt.Errorf("mux: missing route variable %q", v) + } + if r.regexpType == regexpTypeQuery { + value = url.QueryEscape(value) + } + urlValues[k] = value + } + rv := fmt.Sprintf(r.reverse, urlValues...) + if !r.regexp.MatchString(rv) { + // The URL is checked against the full regexp, instead of checking + // individual variables. This is faster but to provide a good error + // message, we check individual regexps if the URL doesn't match. + for k, v := range r.varsN { + if !r.varsR[k].MatchString(values[v]) { + return "", fmt.Errorf( + "mux: variable %q doesn't match, expected %q", values[v], + r.varsR[k].String()) + } + } + } + return rv, nil +} + +// getURLQuery returns a single query parameter from a request URL. +// For a URL with foo=bar&baz=ding, we return only the relevant key +// value pair for the routeRegexp. +func (r *routeRegexp) getURLQuery(req *http.Request) string { + if r.regexpType != regexpTypeQuery { + return "" + } + templateKey := strings.SplitN(r.template, "=", 2)[0] + for key, vals := range req.URL.Query() { + if key == templateKey && len(vals) > 0 { + return key + "=" + vals[0] + } + } + return "" +} + +func (r *routeRegexp) matchQueryString(req *http.Request) bool { + return r.regexp.MatchString(r.getURLQuery(req)) +} + +// braceIndices returns the first level curly brace indices from a string. +// It returns an error in case of unbalanced braces. +func braceIndices(s string) ([]int, error) { + var level, idx int + var idxs []int + for i := 0; i < len(s); i++ { + switch s[i] { + case '{': + if level++; level == 1 { + idx = i + } + case '}': + if level--; level == 0 { + idxs = append(idxs, idx, i+1) + } else if level < 0 { + return nil, fmt.Errorf("mux: unbalanced braces in %q", s) + } + } + } + if level != 0 { + return nil, fmt.Errorf("mux: unbalanced braces in %q", s) + } + return idxs, nil +} + +// varGroupName builds a capturing group name for the indexed variable. 
+func varGroupName(idx int) string { + return "v" + strconv.Itoa(idx) +} + +// ---------------------------------------------------------------------------- +// routeRegexpGroup +// ---------------------------------------------------------------------------- + +// routeRegexpGroup groups the route matchers that carry variables. +type routeRegexpGroup struct { + host *routeRegexp + path *routeRegexp + queries []*routeRegexp +} + +// setMatch extracts the variables from the URL once a route matches. +func (v *routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route) { + // Store host variables. + if v.host != nil { + host := getHost(req) + matches := v.host.regexp.FindStringSubmatchIndex(host) + if len(matches) > 0 { + extractVars(host, matches, v.host.varsN, m.Vars) + } + } + path := req.URL.Path + if r.useEncodedPath { + path = req.URL.EscapedPath() + } + // Store path variables. + if v.path != nil { + matches := v.path.regexp.FindStringSubmatchIndex(path) + if len(matches) > 0 { + extractVars(path, matches, v.path.varsN, m.Vars) + // Check if we should redirect. + if v.path.options.strictSlash { + p1 := strings.HasSuffix(path, "/") + p2 := strings.HasSuffix(v.path.template, "/") + if p1 != p2 { + u, _ := url.Parse(req.URL.String()) + if p1 { + u.Path = u.Path[:len(u.Path)-1] + } else { + u.Path += "/" + } + m.Handler = http.RedirectHandler(u.String(), 301) + } + } + } + } + // Store query string variables. + for _, q := range v.queries { + queryURL := q.getURLQuery(req) + matches := q.regexp.FindStringSubmatchIndex(queryURL) + if len(matches) > 0 { + extractVars(queryURL, matches, q.varsN, m.Vars) + } + } +} + +// getHost tries its best to return the request host. +func getHost(r *http.Request) string { + if r.URL.IsAbs() { + return r.URL.Host + } + host := r.Host + // Slice off any port information. + if i := strings.Index(host, ":"); i != -1 { + host = host[:i] + } + return host + +} + +func extractVars(input string, matches []int, names []string, output map[string]string) { + for i, name := range names { + output[name] = input[matches[2*i+2]:matches[2*i+3]] + } +} diff --git a/vendor/github.com/gorilla/mux/route.go b/vendor/github.com/gorilla/mux/route.go new file mode 100644 index 00000000..a591d735 --- /dev/null +++ b/vendor/github.com/gorilla/mux/route.go @@ -0,0 +1,763 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package mux + +import ( + "errors" + "fmt" + "net/http" + "net/url" + "regexp" + "strings" +) + +// Route stores information to match a request and build URLs. +type Route struct { + // Parent where the route was registered (a Router). + parent parentRoute + // Request handler for the route. + handler http.Handler + // List of matchers. + matchers []matcher + // Manager for the variables from host and path. + regexp *routeRegexpGroup + // If true, when the path pattern is "/path/", accessing "/path" will + // redirect to the former and vice versa. + strictSlash bool + // If true, when the path pattern is "/path//to", accessing "/path//to" + // will not redirect + skipClean bool + // If true, "/path/foo%2Fbar/to" will match the path "/path/{var}/to" + useEncodedPath bool + // The scheme used when building URLs. + buildScheme string + // If true, this route never matches: it is only used to build URLs. + buildOnly bool + // The name used to build URLs. + name string + // Error resulted from building a route. 
+ err error + + buildVarsFunc BuildVarsFunc +} + +// SkipClean reports whether path cleaning is enabled for this route via +// Router.SkipClean. +func (r *Route) SkipClean() bool { + return r.skipClean +} + +// Match matches the route against the request. +func (r *Route) Match(req *http.Request, match *RouteMatch) bool { + if r.buildOnly || r.err != nil { + return false + } + + var matchErr error + + // Match everything. + for _, m := range r.matchers { + if matched := m.Match(req, match); !matched { + if _, ok := m.(methodMatcher); ok { + matchErr = ErrMethodMismatch + continue + } + matchErr = nil + return false + } + } + + if matchErr != nil { + match.MatchErr = matchErr + return false + } + + if match.MatchErr == ErrMethodMismatch { + // We found a route which matches request method, clear MatchErr + match.MatchErr = nil + // Then override the mis-matched handler + match.Handler = r.handler + } + + // Yay, we have a match. Let's collect some info about it. + if match.Route == nil { + match.Route = r + } + if match.Handler == nil { + match.Handler = r.handler + } + if match.Vars == nil { + match.Vars = make(map[string]string) + } + + // Set variables. + if r.regexp != nil { + r.regexp.setMatch(req, match, r) + } + return true +} + +// ---------------------------------------------------------------------------- +// Route attributes +// ---------------------------------------------------------------------------- + +// GetError returns an error resulted from building the route, if any. +func (r *Route) GetError() error { + return r.err +} + +// BuildOnly sets the route to never match: it is only used to build URLs. +func (r *Route) BuildOnly() *Route { + r.buildOnly = true + return r +} + +// Handler -------------------------------------------------------------------- + +// Handler sets a handler for the route. +func (r *Route) Handler(handler http.Handler) *Route { + if r.err == nil { + r.handler = handler + } + return r +} + +// HandlerFunc sets a handler function for the route. +func (r *Route) HandlerFunc(f func(http.ResponseWriter, *http.Request)) *Route { + return r.Handler(http.HandlerFunc(f)) +} + +// GetHandler returns the handler for the route, if any. +func (r *Route) GetHandler() http.Handler { + return r.handler +} + +// Name ----------------------------------------------------------------------- + +// Name sets the name for the route, used to build URLs. +// If the name was registered already it will be overwritten. +func (r *Route) Name(name string) *Route { + if r.name != "" { + r.err = fmt.Errorf("mux: route already has name %q, can't set %q", + r.name, name) + } + if r.err == nil { + r.name = name + r.getNamedRoutes()[name] = r + } + return r +} + +// GetName returns the name for the route, if any. +func (r *Route) GetName() string { + return r.name +} + +// ---------------------------------------------------------------------------- +// Matchers +// ---------------------------------------------------------------------------- + +// matcher types try to match a request. +type matcher interface { + Match(*http.Request, *RouteMatch) bool +} + +// addMatcher adds a matcher to the route. +func (r *Route) addMatcher(m matcher) *Route { + if r.err == nil { + r.matchers = append(r.matchers, m) + } + return r +} + +// addRegexpMatcher adds a host or path matcher and builder to a route. 
+func (r *Route) addRegexpMatcher(tpl string, typ regexpType) error { + if r.err != nil { + return r.err + } + r.regexp = r.getRegexpGroup() + if typ == regexpTypePath || typ == regexpTypePrefix { + if len(tpl) > 0 && tpl[0] != '/' { + return fmt.Errorf("mux: path must start with a slash, got %q", tpl) + } + if r.regexp.path != nil { + tpl = strings.TrimRight(r.regexp.path.template, "/") + tpl + } + } + rr, err := newRouteRegexp(tpl, typ, routeRegexpOptions{ + strictSlash: r.strictSlash, + useEncodedPath: r.useEncodedPath, + }) + if err != nil { + return err + } + for _, q := range r.regexp.queries { + if err = uniqueVars(rr.varsN, q.varsN); err != nil { + return err + } + } + if typ == regexpTypeHost { + if r.regexp.path != nil { + if err = uniqueVars(rr.varsN, r.regexp.path.varsN); err != nil { + return err + } + } + r.regexp.host = rr + } else { + if r.regexp.host != nil { + if err = uniqueVars(rr.varsN, r.regexp.host.varsN); err != nil { + return err + } + } + if typ == regexpTypeQuery { + r.regexp.queries = append(r.regexp.queries, rr) + } else { + r.regexp.path = rr + } + } + r.addMatcher(rr) + return nil +} + +// Headers -------------------------------------------------------------------- + +// headerMatcher matches the request against header values. +type headerMatcher map[string]string + +func (m headerMatcher) Match(r *http.Request, match *RouteMatch) bool { + return matchMapWithString(m, r.Header, true) +} + +// Headers adds a matcher for request header values. +// It accepts a sequence of key/value pairs to be matched. For example: +// +// r := mux.NewRouter() +// r.Headers("Content-Type", "application/json", +// "X-Requested-With", "XMLHttpRequest") +// +// The above route will only match if both request header values match. +// If the value is an empty string, it will match any value if the key is set. +func (r *Route) Headers(pairs ...string) *Route { + if r.err == nil { + var headers map[string]string + headers, r.err = mapFromPairsToString(pairs...) + return r.addMatcher(headerMatcher(headers)) + } + return r +} + +// headerRegexMatcher matches the request against the route given a regex for the header +type headerRegexMatcher map[string]*regexp.Regexp + +func (m headerRegexMatcher) Match(r *http.Request, match *RouteMatch) bool { + return matchMapWithRegex(m, r.Header, true) +} + +// HeadersRegexp accepts a sequence of key/value pairs, where the value has regex +// support. For example: +// +// r := mux.NewRouter() +// r.HeadersRegexp("Content-Type", "application/(text|json)", +// "X-Requested-With", "XMLHttpRequest") +// +// The above route will only match if both the request header matches both regular expressions. +// If the value is an empty string, it will match any value if the key is set. +// Use the start and end of string anchors (^ and $) to match an exact value. +func (r *Route) HeadersRegexp(pairs ...string) *Route { + if r.err == nil { + var headers map[string]*regexp.Regexp + headers, r.err = mapFromPairsToRegex(pairs...) + return r.addMatcher(headerRegexMatcher(headers)) + } + return r +} + +// Host ----------------------------------------------------------------------- + +// Host adds a matcher for the URL host. +// It accepts a template with zero or more URL variables enclosed by {}. +// Variables can define an optional regexp pattern to be matched: +// +// - {name} matches anything until the next dot. +// +// - {name:pattern} matches the given regexp pattern. 
+// +// For example: +// +// r := mux.NewRouter() +// r.Host("www.example.com") +// r.Host("{subdomain}.domain.com") +// r.Host("{subdomain:[a-z]+}.domain.com") +// +// Variable names must be unique in a given route. They can be retrieved +// calling mux.Vars(request). +func (r *Route) Host(tpl string) *Route { + r.err = r.addRegexpMatcher(tpl, regexpTypeHost) + return r +} + +// MatcherFunc ---------------------------------------------------------------- + +// MatcherFunc is the function signature used by custom matchers. +type MatcherFunc func(*http.Request, *RouteMatch) bool + +// Match returns the match for a given request. +func (m MatcherFunc) Match(r *http.Request, match *RouteMatch) bool { + return m(r, match) +} + +// MatcherFunc adds a custom function to be used as request matcher. +func (r *Route) MatcherFunc(f MatcherFunc) *Route { + return r.addMatcher(f) +} + +// Methods -------------------------------------------------------------------- + +// methodMatcher matches the request against HTTP methods. +type methodMatcher []string + +func (m methodMatcher) Match(r *http.Request, match *RouteMatch) bool { + return matchInArray(m, r.Method) +} + +// Methods adds a matcher for HTTP methods. +// It accepts a sequence of one or more methods to be matched, e.g.: +// "GET", "POST", "PUT". +func (r *Route) Methods(methods ...string) *Route { + for k, v := range methods { + methods[k] = strings.ToUpper(v) + } + return r.addMatcher(methodMatcher(methods)) +} + +// Path ----------------------------------------------------------------------- + +// Path adds a matcher for the URL path. +// It accepts a template with zero or more URL variables enclosed by {}. The +// template must start with a "/". +// Variables can define an optional regexp pattern to be matched: +// +// - {name} matches anything until the next slash. +// +// - {name:pattern} matches the given regexp pattern. +// +// For example: +// +// r := mux.NewRouter() +// r.Path("/products/").Handler(ProductsHandler) +// r.Path("/products/{key}").Handler(ProductsHandler) +// r.Path("/articles/{category}/{id:[0-9]+}"). +// Handler(ArticleHandler) +// +// Variable names must be unique in a given route. They can be retrieved +// calling mux.Vars(request). +func (r *Route) Path(tpl string) *Route { + r.err = r.addRegexpMatcher(tpl, regexpTypePath) + return r +} + +// PathPrefix ----------------------------------------------------------------- + +// PathPrefix adds a matcher for the URL path prefix. This matches if the given +// template is a prefix of the full URL path. See Route.Path() for details on +// the tpl argument. +// +// Note that it does not treat slashes specially ("/foobar/" will be matched by +// the prefix "/foo") so you may want to use a trailing slash here. +// +// Also note that the setting of Router.StrictSlash() has no effect on routes +// with a PathPrefix matcher. +func (r *Route) PathPrefix(tpl string) *Route { + r.err = r.addRegexpMatcher(tpl, regexpTypePrefix) + return r +} + +// Query ---------------------------------------------------------------------- + +// Queries adds a matcher for URL query values. +// It accepts a sequence of key/value pairs. Values may define variables. +// For example: +// +// r := mux.NewRouter() +// r.Queries("foo", "bar", "id", "{id:[0-9]+}") +// +// The above route will only match if the URL contains the defined queries +// values, e.g.: ?foo=bar&id=42. +// +// It the value is an empty string, it will match any value if the key is set. 
+// +// Variables can define an optional regexp pattern to be matched: +// +// - {name} matches anything until the next slash. +// +// - {name:pattern} matches the given regexp pattern. +func (r *Route) Queries(pairs ...string) *Route { + length := len(pairs) + if length%2 != 0 { + r.err = fmt.Errorf( + "mux: number of parameters must be multiple of 2, got %v", pairs) + return nil + } + for i := 0; i < length; i += 2 { + if r.err = r.addRegexpMatcher(pairs[i]+"="+pairs[i+1], regexpTypeQuery); r.err != nil { + return r + } + } + + return r +} + +// Schemes -------------------------------------------------------------------- + +// schemeMatcher matches the request against URL schemes. +type schemeMatcher []string + +func (m schemeMatcher) Match(r *http.Request, match *RouteMatch) bool { + return matchInArray(m, r.URL.Scheme) +} + +// Schemes adds a matcher for URL schemes. +// It accepts a sequence of schemes to be matched, e.g.: "http", "https". +func (r *Route) Schemes(schemes ...string) *Route { + for k, v := range schemes { + schemes[k] = strings.ToLower(v) + } + if r.buildScheme == "" && len(schemes) > 0 { + r.buildScheme = schemes[0] + } + return r.addMatcher(schemeMatcher(schemes)) +} + +// BuildVarsFunc -------------------------------------------------------------- + +// BuildVarsFunc is the function signature used by custom build variable +// functions (which can modify route variables before a route's URL is built). +type BuildVarsFunc func(map[string]string) map[string]string + +// BuildVarsFunc adds a custom function to be used to modify build variables +// before a route's URL is built. +func (r *Route) BuildVarsFunc(f BuildVarsFunc) *Route { + r.buildVarsFunc = f + return r +} + +// Subrouter ------------------------------------------------------------------ + +// Subrouter creates a subrouter for the route. +// +// It will test the inner routes only if the parent route matched. For example: +// +// r := mux.NewRouter() +// s := r.Host("www.example.com").Subrouter() +// s.HandleFunc("/products/", ProductsHandler) +// s.HandleFunc("/products/{key}", ProductHandler) +// s.HandleFunc("/articles/{category}/{id:[0-9]+}"), ArticleHandler) +// +// Here, the routes registered in the subrouter won't be tested if the host +// doesn't match. +func (r *Route) Subrouter() *Router { + router := &Router{parent: r, strictSlash: r.strictSlash} + r.addMatcher(router) + return router +} + +// ---------------------------------------------------------------------------- +// URL building +// ---------------------------------------------------------------------------- + +// URL builds a URL for the route. +// +// It accepts a sequence of key/value pairs for the route variables. For +// example, given this route: +// +// r := mux.NewRouter() +// r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). +// Name("article") +// +// ...a URL for it can be built using: +// +// url, err := r.Get("article").URL("category", "technology", "id", "42") +// +// ...which will return an url.URL with the following path: +// +// "/articles/technology/42" +// +// This also works for host variables: +// +// r := mux.NewRouter() +// r.Host("{subdomain}.domain.com"). +// HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). 
+// Name("article") +// +// // url.String() will be "http://news.domain.com/articles/technology/42" +// url, err := r.Get("article").URL("subdomain", "news", +// "category", "technology", +// "id", "42") +// +// All variables defined in the route are required, and their values must +// conform to the corresponding patterns. +func (r *Route) URL(pairs ...string) (*url.URL, error) { + if r.err != nil { + return nil, r.err + } + if r.regexp == nil { + return nil, errors.New("mux: route doesn't have a host or path") + } + values, err := r.prepareVars(pairs...) + if err != nil { + return nil, err + } + var scheme, host, path string + queries := make([]string, 0, len(r.regexp.queries)) + if r.regexp.host != nil { + if host, err = r.regexp.host.url(values); err != nil { + return nil, err + } + scheme = "http" + if s := r.getBuildScheme(); s != "" { + scheme = s + } + } + if r.regexp.path != nil { + if path, err = r.regexp.path.url(values); err != nil { + return nil, err + } + } + for _, q := range r.regexp.queries { + var query string + if query, err = q.url(values); err != nil { + return nil, err + } + queries = append(queries, query) + } + return &url.URL{ + Scheme: scheme, + Host: host, + Path: path, + RawQuery: strings.Join(queries, "&"), + }, nil +} + +// URLHost builds the host part of the URL for a route. See Route.URL(). +// +// The route must have a host defined. +func (r *Route) URLHost(pairs ...string) (*url.URL, error) { + if r.err != nil { + return nil, r.err + } + if r.regexp == nil || r.regexp.host == nil { + return nil, errors.New("mux: route doesn't have a host") + } + values, err := r.prepareVars(pairs...) + if err != nil { + return nil, err + } + host, err := r.regexp.host.url(values) + if err != nil { + return nil, err + } + u := &url.URL{ + Scheme: "http", + Host: host, + } + if s := r.getBuildScheme(); s != "" { + u.Scheme = s + } + return u, nil +} + +// URLPath builds the path part of the URL for a route. See Route.URL(). +// +// The route must have a path defined. +func (r *Route) URLPath(pairs ...string) (*url.URL, error) { + if r.err != nil { + return nil, r.err + } + if r.regexp == nil || r.regexp.path == nil { + return nil, errors.New("mux: route doesn't have a path") + } + values, err := r.prepareVars(pairs...) + if err != nil { + return nil, err + } + path, err := r.regexp.path.url(values) + if err != nil { + return nil, err + } + return &url.URL{ + Path: path, + }, nil +} + +// GetPathTemplate returns the template used to build the +// route match. +// This is useful for building simple REST API documentation and for instrumentation +// against third-party services. +// An error will be returned if the route does not define a path. +func (r *Route) GetPathTemplate() (string, error) { + if r.err != nil { + return "", r.err + } + if r.regexp == nil || r.regexp.path == nil { + return "", errors.New("mux: route doesn't have a path") + } + return r.regexp.path.template, nil +} + +// GetPathRegexp returns the expanded regular expression used to match route path. +// This is useful for building simple REST API documentation and for instrumentation +// against third-party services. +// An error will be returned if the route does not define a path. 
+func (r *Route) GetPathRegexp() (string, error) { + if r.err != nil { + return "", r.err + } + if r.regexp == nil || r.regexp.path == nil { + return "", errors.New("mux: route does not have a path") + } + return r.regexp.path.regexp.String(), nil +} + +// GetQueriesRegexp returns the expanded regular expressions used to match the +// route queries. +// This is useful for building simple REST API documentation and for instrumentation +// against third-party services. +// An error will be returned if the route does not have queries. +func (r *Route) GetQueriesRegexp() ([]string, error) { + if r.err != nil { + return nil, r.err + } + if r.regexp == nil || r.regexp.queries == nil { + return nil, errors.New("mux: route doesn't have queries") + } + var queries []string + for _, query := range r.regexp.queries { + queries = append(queries, query.regexp.String()) + } + return queries, nil +} + +// GetQueriesTemplates returns the templates used to build the +// query matching. +// This is useful for building simple REST API documentation and for instrumentation +// against third-party services. +// An error will be returned if the route does not define queries. +func (r *Route) GetQueriesTemplates() ([]string, error) { + if r.err != nil { + return nil, r.err + } + if r.regexp == nil || r.regexp.queries == nil { + return nil, errors.New("mux: route doesn't have queries") + } + var queries []string + for _, query := range r.regexp.queries { + queries = append(queries, query.template) + } + return queries, nil +} + +// GetMethods returns the methods the route matches against +// This is useful for building simple REST API documentation and for instrumentation +// against third-party services. +// An error will be returned if route does not have methods. +func (r *Route) GetMethods() ([]string, error) { + if r.err != nil { + return nil, r.err + } + for _, m := range r.matchers { + if methods, ok := m.(methodMatcher); ok { + return []string(methods), nil + } + } + return nil, errors.New("mux: route doesn't have methods") +} + +// GetHostTemplate returns the template used to build the +// route match. +// This is useful for building simple REST API documentation and for instrumentation +// against third-party services. +// An error will be returned if the route does not define a host. +func (r *Route) GetHostTemplate() (string, error) { + if r.err != nil { + return "", r.err + } + if r.regexp == nil || r.regexp.host == nil { + return "", errors.New("mux: route doesn't have a host") + } + return r.regexp.host.template, nil +} + +// prepareVars converts the route variable pairs into a map. If the route has a +// BuildVarsFunc, it is invoked. +func (r *Route) prepareVars(pairs ...string) (map[string]string, error) { + m, err := mapFromPairsToString(pairs...) + if err != nil { + return nil, err + } + return r.buildVars(m), nil +} + +func (r *Route) buildVars(m map[string]string) map[string]string { + if r.parent != nil { + m = r.parent.buildVars(m) + } + if r.buildVarsFunc != nil { + m = r.buildVarsFunc(m) + } + return m +} + +// ---------------------------------------------------------------------------- +// parentRoute +// ---------------------------------------------------------------------------- + +// parentRoute allows routes to know about parent host and path definitions. 
+type parentRoute interface { + getBuildScheme() string + getNamedRoutes() map[string]*Route + getRegexpGroup() *routeRegexpGroup + buildVars(map[string]string) map[string]string +} + +func (r *Route) getBuildScheme() string { + if r.buildScheme != "" { + return r.buildScheme + } + if r.parent != nil { + return r.parent.getBuildScheme() + } + return "" +} + +// getNamedRoutes returns the map where named routes are registered. +func (r *Route) getNamedRoutes() map[string]*Route { + if r.parent == nil { + // During tests router is not always set. + r.parent = NewRouter() + } + return r.parent.getNamedRoutes() +} + +// getRegexpGroup returns regexp definitions from this route. +func (r *Route) getRegexpGroup() *routeRegexpGroup { + if r.regexp == nil { + if r.parent == nil { + // During tests router is not always set. + r.parent = NewRouter() + } + regexp := r.parent.getRegexpGroup() + if regexp == nil { + r.regexp = new(routeRegexpGroup) + } else { + // Copy. + r.regexp = &routeRegexpGroup{ + host: regexp.host, + path: regexp.path, + queries: regexp.queries, + } + } + } + return r.regexp +} diff --git a/vendor/github.com/gorilla/mux/test_helpers.go b/vendor/github.com/gorilla/mux/test_helpers.go new file mode 100644 index 00000000..32ecffde --- /dev/null +++ b/vendor/github.com/gorilla/mux/test_helpers.go @@ -0,0 +1,19 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package mux + +import "net/http" + +// SetURLVars sets the URL variables for the given request, to be accessed via +// mux.Vars for testing route behaviour. Arguments are not modified, a shallow +// copy is returned. +// +// This API should only be used for testing purposes; it provides a way to +// inject variables into the request context. Alternatively, URL variables +// can be set by making a route that captures the required variables, +// starting a server and sending the request to that server. +func SetURLVars(r *http.Request, val map[string]string) *http.Request { + return setVars(r, val) +} diff --git a/vendor/github.com/jaymccon/osb-broker-lib/LICENSE b/vendor/github.com/jaymccon/osb-broker-lib/LICENSE new file mode 100644 index 00000000..261eeb9e --- /dev/null +++ b/vendor/github.com/jaymccon/osb-broker-lib/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
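As context for the vendored gorilla/mux files above, a minimal, illustrative sketch of how this router API is typically consumed; the route, handler, and listen address here are hypothetical and not part of this PR:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func articleHandler(w http.ResponseWriter, r *http.Request) {
	// mux.Vars returns the variables extracted by routeRegexpGroup.setMatch.
	vars := mux.Vars(r)
	fmt.Fprintf(w, "category=%s id=%s\n", vars["category"], vars["id"])
}

func main() {
	r := mux.NewRouter()
	// "{id:[0-9]+}" uses the template syntax parsed by newRouteRegexp in regexp.go.
	r.HandleFunc("/articles/{category}/{id:[0-9]+}", articleHandler).Methods("GET")
	log.Fatal(http.ListenAndServe(":8080", r))
}

In unit tests, SetURLVars from the vendored test_helpers.go can inject the same variables into a request without registering a route or starting a server.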
diff --git a/vendor/github.com/jaymccon/osb-broker-lib/pkg/server/server.go b/vendor/github.com/jaymccon/osb-broker-lib/pkg/server/server.go new file mode 100644 index 00000000..3b065315 --- /dev/null +++ b/vendor/github.com/jaymccon/osb-broker-lib/pkg/server/server.go @@ -0,0 +1,152 @@ +package server + +import ( + "context" + "crypto/tls" + "encoding/base64" + "net/http" + "time" + + auth "github.com/abbot/go-http-auth" + "github.com/golang/glog" + "github.com/gorilla/mux" + prom "github.com/prometheus/client_golang/prometheus" + "github.com/prometheus/client_golang/prometheus/promhttp" + "golang.org/x/crypto/bcrypt" + + "github.com/pmorie/osb-broker-lib/pkg/rest" +) + +type BasicAuth struct { + User string + Pass string +} + +func (b *BasicAuth) Secret(user, realm string) string { + if user == b.User { + hashedPassword, err := bcrypt.GenerateFromPassword([]byte(b.Pass), bcrypt.DefaultCost) + if err == nil { + return string(hashedPassword) + } + } + return "" +} + +// Server is the server for the OSB REST API and the metrics API. A Server glues +// the HTTP operations to their implementations. +type Server struct { + // Router is a mux.Router that registers the handlers for the HTTP + // operations: + // + // - OSB API + // - metrics API + Router *mux.Router +} + +// New creates a new Router and registers all the necessary endpoints and handlers. +func New(api *rest.APISurface, reg prom.Gatherer, enableBasicAuth bool, secret func(user, realm string) string) *Server { + router := mux.NewRouter() + + if api.EnableCORS { + router.Methods("OPTIONS").HandlerFunc(api.OptionsHandler) + } + + registerAPIHandlers(router, api, enableBasicAuth, secret) + router.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{})) + + return &Server{ + Router: router, + } +} + +// NewHTTPHandler creates a new Router and registers API handlers +func NewHTTPHandler(api *rest.APISurface, enableBasicAuth bool, secret func(user, realm string) string) http.Handler { + router := mux.NewRouter() + registerAPIHandlers(router, api, enableBasicAuth, secret) + return router +} + +func getHandleFunc(handler func(w http.ResponseWriter, r *http.Request), enableBasicAuth bool, secret func(user, realm string) string) func(w http.ResponseWriter, r *http.Request) { + if enableBasicAuth { + authenticator := auth.NewBasicAuthenticator("aws-service-broker", secret) + return auth.JustCheck(authenticator, handler) + } else { + return handler + } +} + +// registerAPIHandlers registers the APISurface endpoints and handlers. 
+func registerAPIHandlers(router *mux.Router, api *rest.APISurface, enableBasicAuth bool, secret func(user, realm string) string) { + router.HandleFunc("/v2/catalog", getHandleFunc(api.GetCatalogHandler, enableBasicAuth, secret)).Methods("GET") + router.HandleFunc("/v2/service_instances/{instance_id}/last_operation", getHandleFunc(api.LastOperationHandler, enableBasicAuth, secret)).Methods("GET") + router.HandleFunc("/v2/service_instances/{instance_id}", getHandleFunc(api.ProvisionHandler, enableBasicAuth, secret)).Methods("PUT") + router.HandleFunc("/v2/service_instances/{instance_id}", getHandleFunc(api.DeprovisionHandler, enableBasicAuth, secret)).Methods("DELETE") + router.HandleFunc("/v2/service_instances/{instance_id}", getHandleFunc(api.UpdateHandler, enableBasicAuth, secret)).Methods("PATCH") + router.HandleFunc("/v2/service_instances/{instance_id}/service_bindings/{binding_id}", getHandleFunc(api.BindHandler, enableBasicAuth, secret)).Methods("PUT") + router.HandleFunc("/v2/service_instances/{instance_id}/service_bindings/{binding_id}", getHandleFunc(api.GetBindingHandler, enableBasicAuth, secret)).Methods("GET") + router.HandleFunc("/v2/service_instances/{instance_id}/service_bindings/{binding_id}/last_operation", getHandleFunc(api.BindingLastOperationHandler, enableBasicAuth, secret)).Methods("GET") + router.HandleFunc("/v2/service_instances/{instance_id}/service_bindings/{binding_id}", getHandleFunc(api.UnbindHandler, enableBasicAuth, secret)).Methods("DELETE") + router.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { + w.Write([]byte("OK")) + }) +} + +// Run creates the HTTP handler and begins to listen on the specified address. +func (s *Server) Run(ctx context.Context, addr string) error { + listenAndServe := func(srv *http.Server) error { + return srv.ListenAndServe() + } + return s.run(ctx, addr, listenAndServe) +} + +// RunTLS creates the HTTPS handler based on the certifications that were passed +// and begins to listen on the specified address. +func (s *Server) RunTLS(ctx context.Context, addr string, cert string, key string) error { + var decodedCert, decodedKey []byte + var tlsCert tls.Certificate + var err error + decodedCert, err = base64.StdEncoding.DecodeString(cert) + if err != nil { + return err + } + decodedKey, err = base64.StdEncoding.DecodeString(key) + if err != nil { + return err + } + tlsCert, err = tls.X509KeyPair(decodedCert, decodedKey) + if err != nil { + return err + } + listenAndServe := func(srv *http.Server) error { + srv.TLSConfig = new(tls.Config) + srv.TLSConfig.Certificates = []tls.Certificate{tlsCert} + return srv.ListenAndServeTLS("", "") + } + return s.run(ctx, addr, listenAndServe) +} + +// RunTLSWithTLSFiles creates the HTTPS handler based on the certification +// files that were passed and begins to listen on the specified address. 
+func (s *Server) RunTLSWithTLSFiles(ctx context.Context, addr string, certFilePath string, keyFilePath string) error { + listenAndServe := func(srv *http.Server) error { + return srv.ListenAndServeTLS(certFilePath, keyFilePath) + } + return s.run(ctx, addr, listenAndServe) +} + +func (s *Server) run(ctx context.Context, addr string, listenAndServe func(srv *http.Server) error) error { + glog.Infof("Starting server on %s\n", addr) + srv := &http.Server{ + Addr: addr, + Handler: s.Router, + } + go func() { + <-ctx.Done() + c, cancel := context.WithTimeout(context.Background(), 3*time.Second) + defer cancel() + if srv.Shutdown(c) != nil { + srv.Close() + } + }() + return listenAndServe(srv) +} diff --git a/vendor/github.com/jmespath/go-jmespath/LICENSE b/vendor/github.com/jmespath/go-jmespath/LICENSE new file mode 100644 index 00000000..b03310a9 --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/LICENSE @@ -0,0 +1,13 @@ +Copyright 2015 James Saryerwinnie + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/vendor/github.com/jmespath/go-jmespath/api.go b/vendor/github.com/jmespath/go-jmespath/api.go new file mode 100644 index 00000000..9cfa988b --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/api.go @@ -0,0 +1,49 @@ +package jmespath + +import "strconv" + +// JmesPath is the epresentation of a compiled JMES path query. A JmesPath is +// safe for concurrent use by multiple goroutines. +type JMESPath struct { + ast ASTNode + intr *treeInterpreter +} + +// Compile parses a JMESPath expression and returns, if successful, a JMESPath +// object that can be used to match against data. +func Compile(expression string) (*JMESPath, error) { + parser := NewParser() + ast, err := parser.Parse(expression) + if err != nil { + return nil, err + } + jmespath := &JMESPath{ast: ast, intr: newInterpreter()} + return jmespath, nil +} + +// MustCompile is like Compile but panics if the expression cannot be parsed. +// It simplifies safe initialization of global variables holding compiled +// JMESPaths. +func MustCompile(expression string) *JMESPath { + jmespath, err := Compile(expression) + if err != nil { + panic(`jmespath: Compile(` + strconv.Quote(expression) + `): ` + err.Error()) + } + return jmespath +} + +// Search evaluates a JMESPath expression against input data and returns the result. +func (jp *JMESPath) Search(data interface{}) (interface{}, error) { + return jp.intr.Execute(jp.ast, data) +} + +// Search evaluates a JMESPath expression against input data and returns the result. 
+func Search(expression string, data interface{}) (interface{}, error) { + intr := newInterpreter() + parser := NewParser() + ast, err := parser.Parse(expression) + if err != nil { + return nil, err + } + return intr.Execute(ast, data) +} diff --git a/vendor/github.com/jmespath/go-jmespath/astnodetype_string.go b/vendor/github.com/jmespath/go-jmespath/astnodetype_string.go new file mode 100644 index 00000000..1cd2d239 --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/astnodetype_string.go @@ -0,0 +1,16 @@ +// generated by stringer -type astNodeType; DO NOT EDIT + +package jmespath + +import "fmt" + +const _astNodeType_name = "ASTEmptyASTComparatorASTCurrentNodeASTExpRefASTFunctionExpressionASTFieldASTFilterProjectionASTFlattenASTIdentityASTIndexASTIndexExpressionASTKeyValPairASTLiteralASTMultiSelectHashASTMultiSelectListASTOrExpressionASTAndExpressionASTNotExpressionASTPipeASTProjectionASTSubexpressionASTSliceASTValueProjection" + +var _astNodeType_index = [...]uint16{0, 8, 21, 35, 44, 65, 73, 92, 102, 113, 121, 139, 152, 162, 180, 198, 213, 229, 245, 252, 265, 281, 289, 307} + +func (i astNodeType) String() string { + if i < 0 || i >= astNodeType(len(_astNodeType_index)-1) { + return fmt.Sprintf("astNodeType(%d)", i) + } + return _astNodeType_name[_astNodeType_index[i]:_astNodeType_index[i+1]] +} diff --git a/vendor/github.com/jmespath/go-jmespath/functions.go b/vendor/github.com/jmespath/go-jmespath/functions.go new file mode 100644 index 00000000..9b7cd89b --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/functions.go @@ -0,0 +1,842 @@ +package jmespath + +import ( + "encoding/json" + "errors" + "fmt" + "math" + "reflect" + "sort" + "strconv" + "strings" + "unicode/utf8" +) + +type jpFunction func(arguments []interface{}) (interface{}, error) + +type jpType string + +const ( + jpUnknown jpType = "unknown" + jpNumber jpType = "number" + jpString jpType = "string" + jpArray jpType = "array" + jpObject jpType = "object" + jpArrayNumber jpType = "array[number]" + jpArrayString jpType = "array[string]" + jpExpref jpType = "expref" + jpAny jpType = "any" +) + +type functionEntry struct { + name string + arguments []argSpec + handler jpFunction + hasExpRef bool +} + +type argSpec struct { + types []jpType + variadic bool +} + +type byExprString struct { + intr *treeInterpreter + node ASTNode + items []interface{} + hasError bool +} + +func (a *byExprString) Len() int { + return len(a.items) +} +func (a *byExprString) Swap(i, j int) { + a.items[i], a.items[j] = a.items[j], a.items[i] +} +func (a *byExprString) Less(i, j int) bool { + first, err := a.intr.Execute(a.node, a.items[i]) + if err != nil { + a.hasError = true + // Return a dummy value. + return true + } + ith, ok := first.(string) + if !ok { + a.hasError = true + return true + } + second, err := a.intr.Execute(a.node, a.items[j]) + if err != nil { + a.hasError = true + // Return a dummy value. + return true + } + jth, ok := second.(string) + if !ok { + a.hasError = true + return true + } + return ith < jth +} + +type byExprFloat struct { + intr *treeInterpreter + node ASTNode + items []interface{} + hasError bool +} + +func (a *byExprFloat) Len() int { + return len(a.items) +} +func (a *byExprFloat) Swap(i, j int) { + a.items[i], a.items[j] = a.items[j], a.items[i] +} +func (a *byExprFloat) Less(i, j int) bool { + first, err := a.intr.Execute(a.node, a.items[i]) + if err != nil { + a.hasError = true + // Return a dummy value. 
+ return true + } + ith, ok := first.(float64) + if !ok { + a.hasError = true + return true + } + second, err := a.intr.Execute(a.node, a.items[j]) + if err != nil { + a.hasError = true + // Return a dummy value. + return true + } + jth, ok := second.(float64) + if !ok { + a.hasError = true + return true + } + return ith < jth +} + +type functionCaller struct { + functionTable map[string]functionEntry +} + +func newFunctionCaller() *functionCaller { + caller := &functionCaller{} + caller.functionTable = map[string]functionEntry{ + "length": { + name: "length", + arguments: []argSpec{ + {types: []jpType{jpString, jpArray, jpObject}}, + }, + handler: jpfLength, + }, + "starts_with": { + name: "starts_with", + arguments: []argSpec{ + {types: []jpType{jpString}}, + {types: []jpType{jpString}}, + }, + handler: jpfStartsWith, + }, + "abs": { + name: "abs", + arguments: []argSpec{ + {types: []jpType{jpNumber}}, + }, + handler: jpfAbs, + }, + "avg": { + name: "avg", + arguments: []argSpec{ + {types: []jpType{jpArrayNumber}}, + }, + handler: jpfAvg, + }, + "ceil": { + name: "ceil", + arguments: []argSpec{ + {types: []jpType{jpNumber}}, + }, + handler: jpfCeil, + }, + "contains": { + name: "contains", + arguments: []argSpec{ + {types: []jpType{jpArray, jpString}}, + {types: []jpType{jpAny}}, + }, + handler: jpfContains, + }, + "ends_with": { + name: "ends_with", + arguments: []argSpec{ + {types: []jpType{jpString}}, + {types: []jpType{jpString}}, + }, + handler: jpfEndsWith, + }, + "floor": { + name: "floor", + arguments: []argSpec{ + {types: []jpType{jpNumber}}, + }, + handler: jpfFloor, + }, + "map": { + name: "amp", + arguments: []argSpec{ + {types: []jpType{jpExpref}}, + {types: []jpType{jpArray}}, + }, + handler: jpfMap, + hasExpRef: true, + }, + "max": { + name: "max", + arguments: []argSpec{ + {types: []jpType{jpArrayNumber, jpArrayString}}, + }, + handler: jpfMax, + }, + "merge": { + name: "merge", + arguments: []argSpec{ + {types: []jpType{jpObject}, variadic: true}, + }, + handler: jpfMerge, + }, + "max_by": { + name: "max_by", + arguments: []argSpec{ + {types: []jpType{jpArray}}, + {types: []jpType{jpExpref}}, + }, + handler: jpfMaxBy, + hasExpRef: true, + }, + "sum": { + name: "sum", + arguments: []argSpec{ + {types: []jpType{jpArrayNumber}}, + }, + handler: jpfSum, + }, + "min": { + name: "min", + arguments: []argSpec{ + {types: []jpType{jpArrayNumber, jpArrayString}}, + }, + handler: jpfMin, + }, + "min_by": { + name: "min_by", + arguments: []argSpec{ + {types: []jpType{jpArray}}, + {types: []jpType{jpExpref}}, + }, + handler: jpfMinBy, + hasExpRef: true, + }, + "type": { + name: "type", + arguments: []argSpec{ + {types: []jpType{jpAny}}, + }, + handler: jpfType, + }, + "keys": { + name: "keys", + arguments: []argSpec{ + {types: []jpType{jpObject}}, + }, + handler: jpfKeys, + }, + "values": { + name: "values", + arguments: []argSpec{ + {types: []jpType{jpObject}}, + }, + handler: jpfValues, + }, + "sort": { + name: "sort", + arguments: []argSpec{ + {types: []jpType{jpArrayString, jpArrayNumber}}, + }, + handler: jpfSort, + }, + "sort_by": { + name: "sort_by", + arguments: []argSpec{ + {types: []jpType{jpArray}}, + {types: []jpType{jpExpref}}, + }, + handler: jpfSortBy, + hasExpRef: true, + }, + "join": { + name: "join", + arguments: []argSpec{ + {types: []jpType{jpString}}, + {types: []jpType{jpArrayString}}, + }, + handler: jpfJoin, + }, + "reverse": { + name: "reverse", + arguments: []argSpec{ + {types: []jpType{jpArray, jpString}}, + }, + handler: jpfReverse, + }, + "to_array": { + 
name: "to_array", + arguments: []argSpec{ + {types: []jpType{jpAny}}, + }, + handler: jpfToArray, + }, + "to_string": { + name: "to_string", + arguments: []argSpec{ + {types: []jpType{jpAny}}, + }, + handler: jpfToString, + }, + "to_number": { + name: "to_number", + arguments: []argSpec{ + {types: []jpType{jpAny}}, + }, + handler: jpfToNumber, + }, + "not_null": { + name: "not_null", + arguments: []argSpec{ + {types: []jpType{jpAny}, variadic: true}, + }, + handler: jpfNotNull, + }, + } + return caller +} + +func (e *functionEntry) resolveArgs(arguments []interface{}) ([]interface{}, error) { + if len(e.arguments) == 0 { + return arguments, nil + } + if !e.arguments[len(e.arguments)-1].variadic { + if len(e.arguments) != len(arguments) { + return nil, errors.New("incorrect number of args") + } + for i, spec := range e.arguments { + userArg := arguments[i] + err := spec.typeCheck(userArg) + if err != nil { + return nil, err + } + } + return arguments, nil + } + if len(arguments) < len(e.arguments) { + return nil, errors.New("Invalid arity.") + } + return arguments, nil +} + +func (a *argSpec) typeCheck(arg interface{}) error { + for _, t := range a.types { + switch t { + case jpNumber: + if _, ok := arg.(float64); ok { + return nil + } + case jpString: + if _, ok := arg.(string); ok { + return nil + } + case jpArray: + if isSliceType(arg) { + return nil + } + case jpObject: + if _, ok := arg.(map[string]interface{}); ok { + return nil + } + case jpArrayNumber: + if _, ok := toArrayNum(arg); ok { + return nil + } + case jpArrayString: + if _, ok := toArrayStr(arg); ok { + return nil + } + case jpAny: + return nil + case jpExpref: + if _, ok := arg.(expRef); ok { + return nil + } + } + } + return fmt.Errorf("Invalid type for: %v, expected: %#v", arg, a.types) +} + +func (f *functionCaller) CallFunction(name string, arguments []interface{}, intr *treeInterpreter) (interface{}, error) { + entry, ok := f.functionTable[name] + if !ok { + return nil, errors.New("unknown function: " + name) + } + resolvedArgs, err := entry.resolveArgs(arguments) + if err != nil { + return nil, err + } + if entry.hasExpRef { + var extra []interface{} + extra = append(extra, intr) + resolvedArgs = append(extra, resolvedArgs...) + } + return entry.handler(resolvedArgs) +} + +func jpfAbs(arguments []interface{}) (interface{}, error) { + num := arguments[0].(float64) + return math.Abs(num), nil +} + +func jpfLength(arguments []interface{}) (interface{}, error) { + arg := arguments[0] + if c, ok := arg.(string); ok { + return float64(utf8.RuneCountInString(c)), nil + } else if isSliceType(arg) { + v := reflect.ValueOf(arg) + return float64(v.Len()), nil + } else if c, ok := arg.(map[string]interface{}); ok { + return float64(len(c)), nil + } + return nil, errors.New("could not compute length()") +} + +func jpfStartsWith(arguments []interface{}) (interface{}, error) { + search := arguments[0].(string) + prefix := arguments[1].(string) + return strings.HasPrefix(search, prefix), nil +} + +func jpfAvg(arguments []interface{}) (interface{}, error) { + // We've already type checked the value so we can safely use + // type assertions. 
+ args := arguments[0].([]interface{}) + length := float64(len(args)) + numerator := 0.0 + for _, n := range args { + numerator += n.(float64) + } + return numerator / length, nil +} +func jpfCeil(arguments []interface{}) (interface{}, error) { + val := arguments[0].(float64) + return math.Ceil(val), nil +} +func jpfContains(arguments []interface{}) (interface{}, error) { + search := arguments[0] + el := arguments[1] + if searchStr, ok := search.(string); ok { + if elStr, ok := el.(string); ok { + return strings.Index(searchStr, elStr) != -1, nil + } + return false, nil + } + // Otherwise this is a generic contains for []interface{} + general := search.([]interface{}) + for _, item := range general { + if item == el { + return true, nil + } + } + return false, nil +} +func jpfEndsWith(arguments []interface{}) (interface{}, error) { + search := arguments[0].(string) + suffix := arguments[1].(string) + return strings.HasSuffix(search, suffix), nil +} +func jpfFloor(arguments []interface{}) (interface{}, error) { + val := arguments[0].(float64) + return math.Floor(val), nil +} +func jpfMap(arguments []interface{}) (interface{}, error) { + intr := arguments[0].(*treeInterpreter) + exp := arguments[1].(expRef) + node := exp.ref + arr := arguments[2].([]interface{}) + mapped := make([]interface{}, 0, len(arr)) + for _, value := range arr { + current, err := intr.Execute(node, value) + if err != nil { + return nil, err + } + mapped = append(mapped, current) + } + return mapped, nil +} +func jpfMax(arguments []interface{}) (interface{}, error) { + if items, ok := toArrayNum(arguments[0]); ok { + if len(items) == 0 { + return nil, nil + } + if len(items) == 1 { + return items[0], nil + } + best := items[0] + for _, item := range items[1:] { + if item > best { + best = item + } + } + return best, nil + } + // Otherwise we're dealing with a max() of strings. 
+ items, _ := toArrayStr(arguments[0]) + if len(items) == 0 { + return nil, nil + } + if len(items) == 1 { + return items[0], nil + } + best := items[0] + for _, item := range items[1:] { + if item > best { + best = item + } + } + return best, nil +} +func jpfMerge(arguments []interface{}) (interface{}, error) { + final := make(map[string]interface{}) + for _, m := range arguments { + mapped := m.(map[string]interface{}) + for key, value := range mapped { + final[key] = value + } + } + return final, nil +} +func jpfMaxBy(arguments []interface{}) (interface{}, error) { + intr := arguments[0].(*treeInterpreter) + arr := arguments[1].([]interface{}) + exp := arguments[2].(expRef) + node := exp.ref + if len(arr) == 0 { + return nil, nil + } else if len(arr) == 1 { + return arr[0], nil + } + start, err := intr.Execute(node, arr[0]) + if err != nil { + return nil, err + } + switch t := start.(type) { + case float64: + bestVal := t + bestItem := arr[0] + for _, item := range arr[1:] { + result, err := intr.Execute(node, item) + if err != nil { + return nil, err + } + current, ok := result.(float64) + if !ok { + return nil, errors.New("invalid type, must be number") + } + if current > bestVal { + bestVal = current + bestItem = item + } + } + return bestItem, nil + case string: + bestVal := t + bestItem := arr[0] + for _, item := range arr[1:] { + result, err := intr.Execute(node, item) + if err != nil { + return nil, err + } + current, ok := result.(string) + if !ok { + return nil, errors.New("invalid type, must be string") + } + if current > bestVal { + bestVal = current + bestItem = item + } + } + return bestItem, nil + default: + return nil, errors.New("invalid type, must be number of string") + } +} +func jpfSum(arguments []interface{}) (interface{}, error) { + items, _ := toArrayNum(arguments[0]) + sum := 0.0 + for _, item := range items { + sum += item + } + return sum, nil +} + +func jpfMin(arguments []interface{}) (interface{}, error) { + if items, ok := toArrayNum(arguments[0]); ok { + if len(items) == 0 { + return nil, nil + } + if len(items) == 1 { + return items[0], nil + } + best := items[0] + for _, item := range items[1:] { + if item < best { + best = item + } + } + return best, nil + } + items, _ := toArrayStr(arguments[0]) + if len(items) == 0 { + return nil, nil + } + if len(items) == 1 { + return items[0], nil + } + best := items[0] + for _, item := range items[1:] { + if item < best { + best = item + } + } + return best, nil +} + +func jpfMinBy(arguments []interface{}) (interface{}, error) { + intr := arguments[0].(*treeInterpreter) + arr := arguments[1].([]interface{}) + exp := arguments[2].(expRef) + node := exp.ref + if len(arr) == 0 { + return nil, nil + } else if len(arr) == 1 { + return arr[0], nil + } + start, err := intr.Execute(node, arr[0]) + if err != nil { + return nil, err + } + if t, ok := start.(float64); ok { + bestVal := t + bestItem := arr[0] + for _, item := range arr[1:] { + result, err := intr.Execute(node, item) + if err != nil { + return nil, err + } + current, ok := result.(float64) + if !ok { + return nil, errors.New("invalid type, must be number") + } + if current < bestVal { + bestVal = current + bestItem = item + } + } + return bestItem, nil + } else if t, ok := start.(string); ok { + bestVal := t + bestItem := arr[0] + for _, item := range arr[1:] { + result, err := intr.Execute(node, item) + if err != nil { + return nil, err + } + current, ok := result.(string) + if !ok { + return nil, errors.New("invalid type, must be string") + } + if current < 
bestVal { + bestVal = current + bestItem = item + } + } + return bestItem, nil + } else { + return nil, errors.New("invalid type, must be number of string") + } +} +func jpfType(arguments []interface{}) (interface{}, error) { + arg := arguments[0] + if _, ok := arg.(float64); ok { + return "number", nil + } + if _, ok := arg.(string); ok { + return "string", nil + } + if _, ok := arg.([]interface{}); ok { + return "array", nil + } + if _, ok := arg.(map[string]interface{}); ok { + return "object", nil + } + if arg == nil { + return "null", nil + } + if arg == true || arg == false { + return "boolean", nil + } + return nil, errors.New("unknown type") +} +func jpfKeys(arguments []interface{}) (interface{}, error) { + arg := arguments[0].(map[string]interface{}) + collected := make([]interface{}, 0, len(arg)) + for key := range arg { + collected = append(collected, key) + } + return collected, nil +} +func jpfValues(arguments []interface{}) (interface{}, error) { + arg := arguments[0].(map[string]interface{}) + collected := make([]interface{}, 0, len(arg)) + for _, value := range arg { + collected = append(collected, value) + } + return collected, nil +} +func jpfSort(arguments []interface{}) (interface{}, error) { + if items, ok := toArrayNum(arguments[0]); ok { + d := sort.Float64Slice(items) + sort.Stable(d) + final := make([]interface{}, len(d)) + for i, val := range d { + final[i] = val + } + return final, nil + } + // Otherwise we're dealing with sort()'ing strings. + items, _ := toArrayStr(arguments[0]) + d := sort.StringSlice(items) + sort.Stable(d) + final := make([]interface{}, len(d)) + for i, val := range d { + final[i] = val + } + return final, nil +} +func jpfSortBy(arguments []interface{}) (interface{}, error) { + intr := arguments[0].(*treeInterpreter) + arr := arguments[1].([]interface{}) + exp := arguments[2].(expRef) + node := exp.ref + if len(arr) == 0 { + return arr, nil + } else if len(arr) == 1 { + return arr, nil + } + start, err := intr.Execute(node, arr[0]) + if err != nil { + return nil, err + } + if _, ok := start.(float64); ok { + sortable := &byExprFloat{intr, node, arr, false} + sort.Stable(sortable) + if sortable.hasError { + return nil, errors.New("error in sort_by comparison") + } + return arr, nil + } else if _, ok := start.(string); ok { + sortable := &byExprString{intr, node, arr, false} + sort.Stable(sortable) + if sortable.hasError { + return nil, errors.New("error in sort_by comparison") + } + return arr, nil + } else { + return nil, errors.New("invalid type, must be number of string") + } +} +func jpfJoin(arguments []interface{}) (interface{}, error) { + sep := arguments[0].(string) + // We can't just do arguments[1].([]string), we have to + // manually convert each item to a string. 
+ arrayStr := []string{} + for _, item := range arguments[1].([]interface{}) { + arrayStr = append(arrayStr, item.(string)) + } + return strings.Join(arrayStr, sep), nil +} +func jpfReverse(arguments []interface{}) (interface{}, error) { + if s, ok := arguments[0].(string); ok { + r := []rune(s) + for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 { + r[i], r[j] = r[j], r[i] + } + return string(r), nil + } + items := arguments[0].([]interface{}) + length := len(items) + reversed := make([]interface{}, length) + for i, item := range items { + reversed[length-(i+1)] = item + } + return reversed, nil +} +func jpfToArray(arguments []interface{}) (interface{}, error) { + if _, ok := arguments[0].([]interface{}); ok { + return arguments[0], nil + } + return arguments[:1:1], nil +} +func jpfToString(arguments []interface{}) (interface{}, error) { + if v, ok := arguments[0].(string); ok { + return v, nil + } + result, err := json.Marshal(arguments[0]) + if err != nil { + return nil, err + } + return string(result), nil +} +func jpfToNumber(arguments []interface{}) (interface{}, error) { + arg := arguments[0] + if v, ok := arg.(float64); ok { + return v, nil + } + if v, ok := arg.(string); ok { + conv, err := strconv.ParseFloat(v, 64) + if err != nil { + return nil, nil + } + return conv, nil + } + if _, ok := arg.([]interface{}); ok { + return nil, nil + } + if _, ok := arg.(map[string]interface{}); ok { + return nil, nil + } + if arg == nil { + return nil, nil + } + if arg == true || arg == false { + return nil, nil + } + return nil, errors.New("unknown type") +} +func jpfNotNull(arguments []interface{}) (interface{}, error) { + for _, arg := range arguments { + if arg != nil { + return arg, nil + } + } + return nil, nil +} diff --git a/vendor/github.com/jmespath/go-jmespath/interpreter.go b/vendor/github.com/jmespath/go-jmespath/interpreter.go new file mode 100644 index 00000000..13c74604 --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/interpreter.go @@ -0,0 +1,418 @@ +package jmespath + +import ( + "errors" + "reflect" + "unicode" + "unicode/utf8" +) + +/* This is a tree based interpreter. It walks the AST and directly + interprets the AST to search through a JSON document. +*/ + +type treeInterpreter struct { + fCall *functionCaller +} + +func newInterpreter() *treeInterpreter { + interpreter := treeInterpreter{} + interpreter.fCall = newFunctionCaller() + return &interpreter +} + +type expRef struct { + ref ASTNode +} + +// Execute takes an ASTNode and input data and interprets the AST directly. +// It will produce the result of applying the JMESPath expression associated +// with the ASTNode to the input data "value". 
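For orientation only (this is an editorial sketch, not part of the vendored upstream source): the pieces in this file combine as a parse-then-execute pipeline. A minimal within-package sketch, assuming already-decoded JSON input and using the unexported newInterpreter/Execute entry points shown here together with the exported parser defined later in this patch; the sample expression and data are hypothetical.

    // Hypothetical within-package usage; error handling abbreviated.
    parser := NewParser()
    ast, err := parser.Parse("foo.bar")
    if err != nil {
        // syntax error in the expression
    }
    data := map[string]interface{}{
        "foo": map[string]interface{}{"bar": "baz"},
    }
    intr := newInterpreter()
    result, _ := intr.Execute(ast, data) // result == "baz"
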
+func (intr *treeInterpreter) Execute(node ASTNode, value interface{}) (interface{}, error) { + switch node.nodeType { + case ASTComparator: + left, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, err + } + right, err := intr.Execute(node.children[1], value) + if err != nil { + return nil, err + } + switch node.value { + case tEQ: + return objsEqual(left, right), nil + case tNE: + return !objsEqual(left, right), nil + } + leftNum, ok := left.(float64) + if !ok { + return nil, nil + } + rightNum, ok := right.(float64) + if !ok { + return nil, nil + } + switch node.value { + case tGT: + return leftNum > rightNum, nil + case tGTE: + return leftNum >= rightNum, nil + case tLT: + return leftNum < rightNum, nil + case tLTE: + return leftNum <= rightNum, nil + } + case ASTExpRef: + return expRef{ref: node.children[0]}, nil + case ASTFunctionExpression: + resolvedArgs := []interface{}{} + for _, arg := range node.children { + current, err := intr.Execute(arg, value) + if err != nil { + return nil, err + } + resolvedArgs = append(resolvedArgs, current) + } + return intr.fCall.CallFunction(node.value.(string), resolvedArgs, intr) + case ASTField: + if m, ok := value.(map[string]interface{}); ok { + key := node.value.(string) + return m[key], nil + } + return intr.fieldFromStruct(node.value.(string), value) + case ASTFilterProjection: + left, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, nil + } + sliceType, ok := left.([]interface{}) + if !ok { + if isSliceType(left) { + return intr.filterProjectionWithReflection(node, left) + } + return nil, nil + } + compareNode := node.children[2] + collected := []interface{}{} + for _, element := range sliceType { + result, err := intr.Execute(compareNode, element) + if err != nil { + return nil, err + } + if !isFalse(result) { + current, err := intr.Execute(node.children[1], element) + if err != nil { + return nil, err + } + if current != nil { + collected = append(collected, current) + } + } + } + return collected, nil + case ASTFlatten: + left, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, nil + } + sliceType, ok := left.([]interface{}) + if !ok { + // If we can't type convert to []interface{}, there's + // a chance this could still work via reflection if we're + // dealing with user provided types. + if isSliceType(left) { + return intr.flattenWithReflection(left) + } + return nil, nil + } + flattened := []interface{}{} + for _, element := range sliceType { + if elementSlice, ok := element.([]interface{}); ok { + flattened = append(flattened, elementSlice...) + } else if isSliceType(element) { + reflectFlat := []interface{}{} + v := reflect.ValueOf(element) + for i := 0; i < v.Len(); i++ { + reflectFlat = append(reflectFlat, v.Index(i).Interface()) + } + flattened = append(flattened, reflectFlat...) + } else { + flattened = append(flattened, element) + } + } + return flattened, nil + case ASTIdentity, ASTCurrentNode: + return value, nil + case ASTIndex: + if sliceType, ok := value.([]interface{}); ok { + index := node.value.(int) + if index < 0 { + index += len(sliceType) + } + if index < len(sliceType) && index >= 0 { + return sliceType[index], nil + } + return nil, nil + } + // Otherwise try via reflection. 
+ rv := reflect.ValueOf(value) + if rv.Kind() == reflect.Slice { + index := node.value.(int) + if index < 0 { + index += rv.Len() + } + if index < rv.Len() && index >= 0 { + v := rv.Index(index) + return v.Interface(), nil + } + } + return nil, nil + case ASTKeyValPair: + return intr.Execute(node.children[0], value) + case ASTLiteral: + return node.value, nil + case ASTMultiSelectHash: + if value == nil { + return nil, nil + } + collected := make(map[string]interface{}) + for _, child := range node.children { + current, err := intr.Execute(child, value) + if err != nil { + return nil, err + } + key := child.value.(string) + collected[key] = current + } + return collected, nil + case ASTMultiSelectList: + if value == nil { + return nil, nil + } + collected := []interface{}{} + for _, child := range node.children { + current, err := intr.Execute(child, value) + if err != nil { + return nil, err + } + collected = append(collected, current) + } + return collected, nil + case ASTOrExpression: + matched, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, err + } + if isFalse(matched) { + matched, err = intr.Execute(node.children[1], value) + if err != nil { + return nil, err + } + } + return matched, nil + case ASTAndExpression: + matched, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, err + } + if isFalse(matched) { + return matched, nil + } + return intr.Execute(node.children[1], value) + case ASTNotExpression: + matched, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, err + } + if isFalse(matched) { + return true, nil + } + return false, nil + case ASTPipe: + result := value + var err error + for _, child := range node.children { + result, err = intr.Execute(child, result) + if err != nil { + return nil, err + } + } + return result, nil + case ASTProjection: + left, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, err + } + sliceType, ok := left.([]interface{}) + if !ok { + if isSliceType(left) { + return intr.projectWithReflection(node, left) + } + return nil, nil + } + collected := []interface{}{} + var current interface{} + for _, element := range sliceType { + current, err = intr.Execute(node.children[1], element) + if err != nil { + return nil, err + } + if current != nil { + collected = append(collected, current) + } + } + return collected, nil + case ASTSubexpression, ASTIndexExpression: + left, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, err + } + return intr.Execute(node.children[1], left) + case ASTSlice: + sliceType, ok := value.([]interface{}) + if !ok { + if isSliceType(value) { + return intr.sliceWithReflection(node, value) + } + return nil, nil + } + parts := node.value.([]*int) + sliceParams := make([]sliceParam, 3) + for i, part := range parts { + if part != nil { + sliceParams[i].Specified = true + sliceParams[i].N = *part + } + } + return slice(sliceType, sliceParams) + case ASTValueProjection: + left, err := intr.Execute(node.children[0], value) + if err != nil { + return nil, nil + } + mapType, ok := left.(map[string]interface{}) + if !ok { + return nil, nil + } + values := make([]interface{}, len(mapType)) + for _, value := range mapType { + values = append(values, value) + } + collected := []interface{}{} + for _, element := range values { + current, err := intr.Execute(node.children[1], element) + if err != nil { + return nil, err + } + if current != nil { + collected = append(collected, current) + } + } + return collected, 
nil + } + return nil, errors.New("Unknown AST node: " + node.nodeType.String()) +} + +func (intr *treeInterpreter) fieldFromStruct(key string, value interface{}) (interface{}, error) { + rv := reflect.ValueOf(value) + first, n := utf8.DecodeRuneInString(key) + fieldName := string(unicode.ToUpper(first)) + key[n:] + if rv.Kind() == reflect.Struct { + v := rv.FieldByName(fieldName) + if !v.IsValid() { + return nil, nil + } + return v.Interface(), nil + } else if rv.Kind() == reflect.Ptr { + // Handle multiple levels of indirection? + if rv.IsNil() { + return nil, nil + } + rv = rv.Elem() + v := rv.FieldByName(fieldName) + if !v.IsValid() { + return nil, nil + } + return v.Interface(), nil + } + return nil, nil +} + +func (intr *treeInterpreter) flattenWithReflection(value interface{}) (interface{}, error) { + v := reflect.ValueOf(value) + flattened := []interface{}{} + for i := 0; i < v.Len(); i++ { + element := v.Index(i).Interface() + if reflect.TypeOf(element).Kind() == reflect.Slice { + // Then insert the contents of the element + // slice into the flattened slice, + // i.e flattened = append(flattened, mySlice...) + elementV := reflect.ValueOf(element) + for j := 0; j < elementV.Len(); j++ { + flattened = append( + flattened, elementV.Index(j).Interface()) + } + } else { + flattened = append(flattened, element) + } + } + return flattened, nil +} + +func (intr *treeInterpreter) sliceWithReflection(node ASTNode, value interface{}) (interface{}, error) { + v := reflect.ValueOf(value) + parts := node.value.([]*int) + sliceParams := make([]sliceParam, 3) + for i, part := range parts { + if part != nil { + sliceParams[i].Specified = true + sliceParams[i].N = *part + } + } + final := []interface{}{} + for i := 0; i < v.Len(); i++ { + element := v.Index(i).Interface() + final = append(final, element) + } + return slice(final, sliceParams) +} + +func (intr *treeInterpreter) filterProjectionWithReflection(node ASTNode, value interface{}) (interface{}, error) { + compareNode := node.children[2] + collected := []interface{}{} + v := reflect.ValueOf(value) + for i := 0; i < v.Len(); i++ { + element := v.Index(i).Interface() + result, err := intr.Execute(compareNode, element) + if err != nil { + return nil, err + } + if !isFalse(result) { + current, err := intr.Execute(node.children[1], element) + if err != nil { + return nil, err + } + if current != nil { + collected = append(collected, current) + } + } + } + return collected, nil +} + +func (intr *treeInterpreter) projectWithReflection(node ASTNode, value interface{}) (interface{}, error) { + collected := []interface{}{} + v := reflect.ValueOf(value) + for i := 0; i < v.Len(); i++ { + element := v.Index(i).Interface() + result, err := intr.Execute(node.children[1], element) + if err != nil { + return nil, err + } + if result != nil { + collected = append(collected, result) + } + } + return collected, nil +} diff --git a/vendor/github.com/jmespath/go-jmespath/lexer.go b/vendor/github.com/jmespath/go-jmespath/lexer.go new file mode 100644 index 00000000..817900c8 --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/lexer.go @@ -0,0 +1,420 @@ +package jmespath + +import ( + "bytes" + "encoding/json" + "fmt" + "strconv" + "strings" + "unicode/utf8" +) + +type token struct { + tokenType tokType + value string + position int + length int +} + +type tokType int + +const eof = -1 + +// Lexer contains information about the expression being tokenized. +type Lexer struct { + expression string // The expression provided by the user. 
+ currentPos int // The current position in the string. + lastWidth int // The width of the current rune. This + buf bytes.Buffer // Internal buffer used for building up values. +} + +// SyntaxError is the main error used whenever a lexing or parsing error occurs. +type SyntaxError struct { + msg string // Error message displayed to user + Expression string // Expression that generated a SyntaxError + Offset int // The location in the string where the error occurred +} + +func (e SyntaxError) Error() string { + // In the future, it would be good to underline the specific + // location where the error occurred. + return "SyntaxError: " + e.msg +} + +// HighlightLocation will show where the syntax error occurred. +// It will place a "^" character on a line below the expression +// at the point where the syntax error occurred. +func (e SyntaxError) HighlightLocation() string { + return e.Expression + "\n" + strings.Repeat(" ", e.Offset) + "^" +} + +//go:generate stringer -type=tokType +const ( + tUnknown tokType = iota + tStar + tDot + tFilter + tFlatten + tLparen + tRparen + tLbracket + tRbracket + tLbrace + tRbrace + tOr + tPipe + tNumber + tUnquotedIdentifier + tQuotedIdentifier + tComma + tColon + tLT + tLTE + tGT + tGTE + tEQ + tNE + tJSONLiteral + tStringLiteral + tCurrent + tExpref + tAnd + tNot + tEOF +) + +var basicTokens = map[rune]tokType{ + '.': tDot, + '*': tStar, + ',': tComma, + ':': tColon, + '{': tLbrace, + '}': tRbrace, + ']': tRbracket, // tLbracket not included because it could be "[]" + '(': tLparen, + ')': tRparen, + '@': tCurrent, +} + +// Bit mask for [a-zA-Z_] shifted down 64 bits to fit in a single uint64. +// When using this bitmask just be sure to shift the rune down 64 bits +// before checking against identifierStartBits. +const identifierStartBits uint64 = 576460745995190270 + +// Bit mask for [a-zA-Z0-9], 128 bits -> 2 uint64s. +var identifierTrailingBits = [2]uint64{287948901175001088, 576460745995190270} + +var whiteSpace = map[rune]bool{ + ' ': true, '\t': true, '\n': true, '\r': true, +} + +func (t token) String() string { + return fmt.Sprintf("Token{%+v, %s, %d, %d}", + t.tokenType, t.value, t.position, t.length) +} + +// NewLexer creates a new JMESPath lexer. +func NewLexer() *Lexer { + lexer := Lexer{} + return &lexer +} + +func (lexer *Lexer) next() rune { + if lexer.currentPos >= len(lexer.expression) { + lexer.lastWidth = 0 + return eof + } + r, w := utf8.DecodeRuneInString(lexer.expression[lexer.currentPos:]) + lexer.lastWidth = w + lexer.currentPos += w + return r +} + +func (lexer *Lexer) back() { + lexer.currentPos -= lexer.lastWidth +} + +func (lexer *Lexer) peek() rune { + t := lexer.next() + lexer.back() + return t +} + +// tokenize takes an expression and returns corresponding tokens. +func (lexer *Lexer) tokenize(expression string) ([]token, error) { + var tokens []token + lexer.expression = expression + lexer.currentPos = 0 + lexer.lastWidth = 0 +loop: + for { + r := lexer.next() + if identifierStartBits&(1<<(uint64(r)-64)) > 0 { + t := lexer.consumeUnquotedIdentifier() + tokens = append(tokens, t) + } else if val, ok := basicTokens[r]; ok { + // Basic single char token. 
+ t := token{ + tokenType: val, + value: string(r), + position: lexer.currentPos - lexer.lastWidth, + length: 1, + } + tokens = append(tokens, t) + } else if r == '-' || (r >= '0' && r <= '9') { + t := lexer.consumeNumber() + tokens = append(tokens, t) + } else if r == '[' { + t := lexer.consumeLBracket() + tokens = append(tokens, t) + } else if r == '"' { + t, err := lexer.consumeQuotedIdentifier() + if err != nil { + return tokens, err + } + tokens = append(tokens, t) + } else if r == '\'' { + t, err := lexer.consumeRawStringLiteral() + if err != nil { + return tokens, err + } + tokens = append(tokens, t) + } else if r == '`' { + t, err := lexer.consumeLiteral() + if err != nil { + return tokens, err + } + tokens = append(tokens, t) + } else if r == '|' { + t := lexer.matchOrElse(r, '|', tOr, tPipe) + tokens = append(tokens, t) + } else if r == '<' { + t := lexer.matchOrElse(r, '=', tLTE, tLT) + tokens = append(tokens, t) + } else if r == '>' { + t := lexer.matchOrElse(r, '=', tGTE, tGT) + tokens = append(tokens, t) + } else if r == '!' { + t := lexer.matchOrElse(r, '=', tNE, tNot) + tokens = append(tokens, t) + } else if r == '=' { + t := lexer.matchOrElse(r, '=', tEQ, tUnknown) + tokens = append(tokens, t) + } else if r == '&' { + t := lexer.matchOrElse(r, '&', tAnd, tExpref) + tokens = append(tokens, t) + } else if r == eof { + break loop + } else if _, ok := whiteSpace[r]; ok { + // Ignore whitespace + } else { + return tokens, lexer.syntaxError(fmt.Sprintf("Unknown char: %s", strconv.QuoteRuneToASCII(r))) + } + } + tokens = append(tokens, token{tEOF, "", len(lexer.expression), 0}) + return tokens, nil +} + +// Consume characters until the ending rune "r" is reached. +// If the end of the expression is reached before seeing the +// terminating rune "r", then an error is returned. +// If no error occurs then the matching substring is returned. +// The returned string will not include the ending rune. +func (lexer *Lexer) consumeUntil(end rune) (string, error) { + start := lexer.currentPos + current := lexer.next() + for current != end && current != eof { + if current == '\\' && lexer.peek() != eof { + lexer.next() + } + current = lexer.next() + } + if lexer.lastWidth == 0 { + // Then we hit an EOF so we never reached the closing + // delimiter. + return "", SyntaxError{ + msg: "Unclosed delimiter: " + string(end), + Expression: lexer.expression, + Offset: len(lexer.expression), + } + } + return lexer.expression[start : lexer.currentPos-lexer.lastWidth], nil +} + +func (lexer *Lexer) consumeLiteral() (token, error) { + start := lexer.currentPos + value, err := lexer.consumeUntil('`') + if err != nil { + return token{}, err + } + value = strings.Replace(value, "\\`", "`", -1) + return token{ + tokenType: tJSONLiteral, + value: value, + position: start, + length: len(value), + }, nil +} + +func (lexer *Lexer) consumeRawStringLiteral() (token, error) { + start := lexer.currentPos + currentIndex := start + current := lexer.next() + for current != '\'' && lexer.peek() != eof { + if current == '\\' && lexer.peek() == '\'' { + chunk := lexer.expression[currentIndex : lexer.currentPos-1] + lexer.buf.WriteString(chunk) + lexer.buf.WriteString("'") + lexer.next() + currentIndex = lexer.currentPos + } + current = lexer.next() + } + if lexer.lastWidth == 0 { + // Then we hit an EOF so we never reached the closing + // delimiter. 
+ return token{}, SyntaxError{ + msg: "Unclosed delimiter: '", + Expression: lexer.expression, + Offset: len(lexer.expression), + } + } + if currentIndex < lexer.currentPos { + lexer.buf.WriteString(lexer.expression[currentIndex : lexer.currentPos-1]) + } + value := lexer.buf.String() + // Reset the buffer so it can reused again. + lexer.buf.Reset() + return token{ + tokenType: tStringLiteral, + value: value, + position: start, + length: len(value), + }, nil +} + +func (lexer *Lexer) syntaxError(msg string) SyntaxError { + return SyntaxError{ + msg: msg, + Expression: lexer.expression, + Offset: lexer.currentPos - 1, + } +} + +// Checks for a two char token, otherwise matches a single character +// token. This is used whenever a two char token overlaps a single +// char token, e.g. "||" -> tPipe, "|" -> tOr. +func (lexer *Lexer) matchOrElse(first rune, second rune, matchedType tokType, singleCharType tokType) token { + start := lexer.currentPos - lexer.lastWidth + nextRune := lexer.next() + var t token + if nextRune == second { + t = token{ + tokenType: matchedType, + value: string(first) + string(second), + position: start, + length: 2, + } + } else { + lexer.back() + t = token{ + tokenType: singleCharType, + value: string(first), + position: start, + length: 1, + } + } + return t +} + +func (lexer *Lexer) consumeLBracket() token { + // There's three options here: + // 1. A filter expression "[?" + // 2. A flatten operator "[]" + // 3. A bare rbracket "[" + start := lexer.currentPos - lexer.lastWidth + nextRune := lexer.next() + var t token + if nextRune == '?' { + t = token{ + tokenType: tFilter, + value: "[?", + position: start, + length: 2, + } + } else if nextRune == ']' { + t = token{ + tokenType: tFlatten, + value: "[]", + position: start, + length: 2, + } + } else { + t = token{ + tokenType: tLbracket, + value: "[", + position: start, + length: 1, + } + lexer.back() + } + return t +} + +func (lexer *Lexer) consumeQuotedIdentifier() (token, error) { + start := lexer.currentPos + value, err := lexer.consumeUntil('"') + if err != nil { + return token{}, err + } + var decoded string + asJSON := []byte("\"" + value + "\"") + if err := json.Unmarshal([]byte(asJSON), &decoded); err != nil { + return token{}, err + } + return token{ + tokenType: tQuotedIdentifier, + value: decoded, + position: start - 1, + length: len(decoded), + }, nil +} + +func (lexer *Lexer) consumeUnquotedIdentifier() token { + // Consume runes until we reach the end of an unquoted + // identifier. + start := lexer.currentPos - lexer.lastWidth + for { + r := lexer.next() + if r < 0 || r > 128 || identifierTrailingBits[uint64(r)/64]&(1<<(uint64(r)%64)) == 0 { + lexer.back() + break + } + } + value := lexer.expression[start:lexer.currentPos] + return token{ + tokenType: tUnquotedIdentifier, + value: value, + position: start, + length: lexer.currentPos - start, + } +} + +func (lexer *Lexer) consumeNumber() token { + // Consume runes until we reach something that's not a number. 
+ start := lexer.currentPos - lexer.lastWidth + for { + r := lexer.next() + if r < '0' || r > '9' { + lexer.back() + break + } + } + value := lexer.expression[start:lexer.currentPos] + return token{ + tokenType: tNumber, + value: value, + position: start, + length: lexer.currentPos - start, + } +} diff --git a/vendor/github.com/jmespath/go-jmespath/parser.go b/vendor/github.com/jmespath/go-jmespath/parser.go new file mode 100644 index 00000000..1240a175 --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/parser.go @@ -0,0 +1,603 @@ +package jmespath + +import ( + "encoding/json" + "fmt" + "strconv" + "strings" +) + +type astNodeType int + +//go:generate stringer -type astNodeType +const ( + ASTEmpty astNodeType = iota + ASTComparator + ASTCurrentNode + ASTExpRef + ASTFunctionExpression + ASTField + ASTFilterProjection + ASTFlatten + ASTIdentity + ASTIndex + ASTIndexExpression + ASTKeyValPair + ASTLiteral + ASTMultiSelectHash + ASTMultiSelectList + ASTOrExpression + ASTAndExpression + ASTNotExpression + ASTPipe + ASTProjection + ASTSubexpression + ASTSlice + ASTValueProjection +) + +// ASTNode represents the abstract syntax tree of a JMESPath expression. +type ASTNode struct { + nodeType astNodeType + value interface{} + children []ASTNode +} + +func (node ASTNode) String() string { + return node.PrettyPrint(0) +} + +// PrettyPrint will pretty print the parsed AST. +// The AST is an implementation detail and this pretty print +// function is provided as a convenience method to help with +// debugging. You should not rely on its output as the internal +// structure of the AST may change at any time. +func (node ASTNode) PrettyPrint(indent int) string { + spaces := strings.Repeat(" ", indent) + output := fmt.Sprintf("%s%s {\n", spaces, node.nodeType) + nextIndent := indent + 2 + if node.value != nil { + if converted, ok := node.value.(fmt.Stringer); ok { + // Account for things like comparator nodes + // that are enums with a String() method. + output += fmt.Sprintf("%svalue: %s\n", strings.Repeat(" ", nextIndent), converted.String()) + } else { + output += fmt.Sprintf("%svalue: %#v\n", strings.Repeat(" ", nextIndent), node.value) + } + } + lastIndex := len(node.children) + if lastIndex > 0 { + output += fmt.Sprintf("%schildren: {\n", strings.Repeat(" ", nextIndent)) + childIndent := nextIndent + 2 + for _, elem := range node.children { + output += elem.PrettyPrint(childIndent) + } + } + output += fmt.Sprintf("%s}\n", spaces) + return output +} + +var bindingPowers = map[tokType]int{ + tEOF: 0, + tUnquotedIdentifier: 0, + tQuotedIdentifier: 0, + tRbracket: 0, + tRparen: 0, + tComma: 0, + tRbrace: 0, + tNumber: 0, + tCurrent: 0, + tExpref: 0, + tColon: 0, + tPipe: 1, + tOr: 2, + tAnd: 3, + tEQ: 5, + tLT: 5, + tLTE: 5, + tGT: 5, + tGTE: 5, + tNE: 5, + tFlatten: 9, + tStar: 20, + tFilter: 21, + tDot: 40, + tNot: 45, + tLbrace: 50, + tLbracket: 55, + tLparen: 60, +} + +// Parser holds state about the current expression being parsed. +type Parser struct { + expression string + tokens []token + index int +} + +// NewParser creates a new JMESPath parser. +func NewParser() *Parser { + p := Parser{} + return &p +} + +// Parse will compile a JMESPath expression. 
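As a hedged usage sketch (not upstream code), the exported parser can be driven on its own; SyntaxError and HighlightLocation come from the lexer above, PrettyPrint from this file, and fmt is the standard library. The sample expression is arbitrary.

    p := NewParser()
    ast, err := p.Parse("locations[?state == 'WA'].name | sort(@)")
    if err != nil {
        if syntaxErr, ok := err.(SyntaxError); ok {
            // Prints the expression with a caret under the offending position.
            fmt.Println(syntaxErr.HighlightLocation())
        }
    } else {
        // Debug dump of the AST; its exact shape is an implementation detail.
        fmt.Println(ast.PrettyPrint(0))
    }
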
+func (p *Parser) Parse(expression string) (ASTNode, error) { + lexer := NewLexer() + p.expression = expression + p.index = 0 + tokens, err := lexer.tokenize(expression) + if err != nil { + return ASTNode{}, err + } + p.tokens = tokens + parsed, err := p.parseExpression(0) + if err != nil { + return ASTNode{}, err + } + if p.current() != tEOF { + return ASTNode{}, p.syntaxError(fmt.Sprintf( + "Unexpected token at the end of the expresssion: %s", p.current())) + } + return parsed, nil +} + +func (p *Parser) parseExpression(bindingPower int) (ASTNode, error) { + var err error + leftToken := p.lookaheadToken(0) + p.advance() + leftNode, err := p.nud(leftToken) + if err != nil { + return ASTNode{}, err + } + currentToken := p.current() + for bindingPower < bindingPowers[currentToken] { + p.advance() + leftNode, err = p.led(currentToken, leftNode) + if err != nil { + return ASTNode{}, err + } + currentToken = p.current() + } + return leftNode, nil +} + +func (p *Parser) parseIndexExpression() (ASTNode, error) { + if p.lookahead(0) == tColon || p.lookahead(1) == tColon { + return p.parseSliceExpression() + } + indexStr := p.lookaheadToken(0).value + parsedInt, err := strconv.Atoi(indexStr) + if err != nil { + return ASTNode{}, err + } + indexNode := ASTNode{nodeType: ASTIndex, value: parsedInt} + p.advance() + if err := p.match(tRbracket); err != nil { + return ASTNode{}, err + } + return indexNode, nil +} + +func (p *Parser) parseSliceExpression() (ASTNode, error) { + parts := []*int{nil, nil, nil} + index := 0 + current := p.current() + for current != tRbracket && index < 3 { + if current == tColon { + index++ + p.advance() + } else if current == tNumber { + parsedInt, err := strconv.Atoi(p.lookaheadToken(0).value) + if err != nil { + return ASTNode{}, err + } + parts[index] = &parsedInt + p.advance() + } else { + return ASTNode{}, p.syntaxError( + "Expected tColon or tNumber" + ", received: " + p.current().String()) + } + current = p.current() + } + if err := p.match(tRbracket); err != nil { + return ASTNode{}, err + } + return ASTNode{ + nodeType: ASTSlice, + value: parts, + }, nil +} + +func (p *Parser) match(tokenType tokType) error { + if p.current() == tokenType { + p.advance() + return nil + } + return p.syntaxError("Expected " + tokenType.String() + ", received: " + p.current().String()) +} + +func (p *Parser) led(tokenType tokType, node ASTNode) (ASTNode, error) { + switch tokenType { + case tDot: + if p.current() != tStar { + right, err := p.parseDotRHS(bindingPowers[tDot]) + return ASTNode{ + nodeType: ASTSubexpression, + children: []ASTNode{node, right}, + }, err + } + p.advance() + right, err := p.parseProjectionRHS(bindingPowers[tDot]) + return ASTNode{ + nodeType: ASTValueProjection, + children: []ASTNode{node, right}, + }, err + case tPipe: + right, err := p.parseExpression(bindingPowers[tPipe]) + return ASTNode{nodeType: ASTPipe, children: []ASTNode{node, right}}, err + case tOr: + right, err := p.parseExpression(bindingPowers[tOr]) + return ASTNode{nodeType: ASTOrExpression, children: []ASTNode{node, right}}, err + case tAnd: + right, err := p.parseExpression(bindingPowers[tAnd]) + return ASTNode{nodeType: ASTAndExpression, children: []ASTNode{node, right}}, err + case tLparen: + name := node.value + var args []ASTNode + for p.current() != tRparen { + expression, err := p.parseExpression(0) + if err != nil { + return ASTNode{}, err + } + if p.current() == tComma { + if err := p.match(tComma); err != nil { + return ASTNode{}, err + } + } + args = append(args, expression) + } + 
if err := p.match(tRparen); err != nil { + return ASTNode{}, err + } + return ASTNode{ + nodeType: ASTFunctionExpression, + value: name, + children: args, + }, nil + case tFilter: + return p.parseFilter(node) + case tFlatten: + left := ASTNode{nodeType: ASTFlatten, children: []ASTNode{node}} + right, err := p.parseProjectionRHS(bindingPowers[tFlatten]) + return ASTNode{ + nodeType: ASTProjection, + children: []ASTNode{left, right}, + }, err + case tEQ, tNE, tGT, tGTE, tLT, tLTE: + right, err := p.parseExpression(bindingPowers[tokenType]) + if err != nil { + return ASTNode{}, err + } + return ASTNode{ + nodeType: ASTComparator, + value: tokenType, + children: []ASTNode{node, right}, + }, nil + case tLbracket: + tokenType := p.current() + var right ASTNode + var err error + if tokenType == tNumber || tokenType == tColon { + right, err = p.parseIndexExpression() + if err != nil { + return ASTNode{}, err + } + return p.projectIfSlice(node, right) + } + // Otherwise this is a projection. + if err := p.match(tStar); err != nil { + return ASTNode{}, err + } + if err := p.match(tRbracket); err != nil { + return ASTNode{}, err + } + right, err = p.parseProjectionRHS(bindingPowers[tStar]) + if err != nil { + return ASTNode{}, err + } + return ASTNode{ + nodeType: ASTProjection, + children: []ASTNode{node, right}, + }, nil + } + return ASTNode{}, p.syntaxError("Unexpected token: " + tokenType.String()) +} + +func (p *Parser) nud(token token) (ASTNode, error) { + switch token.tokenType { + case tJSONLiteral: + var parsed interface{} + err := json.Unmarshal([]byte(token.value), &parsed) + if err != nil { + return ASTNode{}, err + } + return ASTNode{nodeType: ASTLiteral, value: parsed}, nil + case tStringLiteral: + return ASTNode{nodeType: ASTLiteral, value: token.value}, nil + case tUnquotedIdentifier: + return ASTNode{ + nodeType: ASTField, + value: token.value, + }, nil + case tQuotedIdentifier: + node := ASTNode{nodeType: ASTField, value: token.value} + if p.current() == tLparen { + return ASTNode{}, p.syntaxErrorToken("Can't have quoted identifier as function name.", token) + } + return node, nil + case tStar: + left := ASTNode{nodeType: ASTIdentity} + var right ASTNode + var err error + if p.current() == tRbracket { + right = ASTNode{nodeType: ASTIdentity} + } else { + right, err = p.parseProjectionRHS(bindingPowers[tStar]) + } + return ASTNode{nodeType: ASTValueProjection, children: []ASTNode{left, right}}, err + case tFilter: + return p.parseFilter(ASTNode{nodeType: ASTIdentity}) + case tLbrace: + return p.parseMultiSelectHash() + case tFlatten: + left := ASTNode{ + nodeType: ASTFlatten, + children: []ASTNode{{nodeType: ASTIdentity}}, + } + right, err := p.parseProjectionRHS(bindingPowers[tFlatten]) + if err != nil { + return ASTNode{}, err + } + return ASTNode{nodeType: ASTProjection, children: []ASTNode{left, right}}, nil + case tLbracket: + tokenType := p.current() + //var right ASTNode + if tokenType == tNumber || tokenType == tColon { + right, err := p.parseIndexExpression() + if err != nil { + return ASTNode{}, nil + } + return p.projectIfSlice(ASTNode{nodeType: ASTIdentity}, right) + } else if tokenType == tStar && p.lookahead(1) == tRbracket { + p.advance() + p.advance() + right, err := p.parseProjectionRHS(bindingPowers[tStar]) + if err != nil { + return ASTNode{}, err + } + return ASTNode{ + nodeType: ASTProjection, + children: []ASTNode{{nodeType: ASTIdentity}, right}, + }, nil + } else { + return p.parseMultiSelectList() + } + case tCurrent: + return ASTNode{nodeType: 
ASTCurrentNode}, nil + case tExpref: + expression, err := p.parseExpression(bindingPowers[tExpref]) + if err != nil { + return ASTNode{}, err + } + return ASTNode{nodeType: ASTExpRef, children: []ASTNode{expression}}, nil + case tNot: + expression, err := p.parseExpression(bindingPowers[tNot]) + if err != nil { + return ASTNode{}, err + } + return ASTNode{nodeType: ASTNotExpression, children: []ASTNode{expression}}, nil + case tLparen: + expression, err := p.parseExpression(0) + if err != nil { + return ASTNode{}, err + } + if err := p.match(tRparen); err != nil { + return ASTNode{}, err + } + return expression, nil + case tEOF: + return ASTNode{}, p.syntaxErrorToken("Incomplete expression", token) + } + + return ASTNode{}, p.syntaxErrorToken("Invalid token: "+token.tokenType.String(), token) +} + +func (p *Parser) parseMultiSelectList() (ASTNode, error) { + var expressions []ASTNode + for { + expression, err := p.parseExpression(0) + if err != nil { + return ASTNode{}, err + } + expressions = append(expressions, expression) + if p.current() == tRbracket { + break + } + err = p.match(tComma) + if err != nil { + return ASTNode{}, err + } + } + err := p.match(tRbracket) + if err != nil { + return ASTNode{}, err + } + return ASTNode{ + nodeType: ASTMultiSelectList, + children: expressions, + }, nil +} + +func (p *Parser) parseMultiSelectHash() (ASTNode, error) { + var children []ASTNode + for { + keyToken := p.lookaheadToken(0) + if err := p.match(tUnquotedIdentifier); err != nil { + if err := p.match(tQuotedIdentifier); err != nil { + return ASTNode{}, p.syntaxError("Expected tQuotedIdentifier or tUnquotedIdentifier") + } + } + keyName := keyToken.value + err := p.match(tColon) + if err != nil { + return ASTNode{}, err + } + value, err := p.parseExpression(0) + if err != nil { + return ASTNode{}, err + } + node := ASTNode{ + nodeType: ASTKeyValPair, + value: keyName, + children: []ASTNode{value}, + } + children = append(children, node) + if p.current() == tComma { + err := p.match(tComma) + if err != nil { + return ASTNode{}, nil + } + } else if p.current() == tRbrace { + err := p.match(tRbrace) + if err != nil { + return ASTNode{}, nil + } + break + } + } + return ASTNode{ + nodeType: ASTMultiSelectHash, + children: children, + }, nil +} + +func (p *Parser) projectIfSlice(left ASTNode, right ASTNode) (ASTNode, error) { + indexExpr := ASTNode{ + nodeType: ASTIndexExpression, + children: []ASTNode{left, right}, + } + if right.nodeType == ASTSlice { + right, err := p.parseProjectionRHS(bindingPowers[tStar]) + return ASTNode{ + nodeType: ASTProjection, + children: []ASTNode{indexExpr, right}, + }, err + } + return indexExpr, nil +} +func (p *Parser) parseFilter(node ASTNode) (ASTNode, error) { + var right, condition ASTNode + var err error + condition, err = p.parseExpression(0) + if err != nil { + return ASTNode{}, err + } + if err := p.match(tRbracket); err != nil { + return ASTNode{}, err + } + if p.current() == tFlatten { + right = ASTNode{nodeType: ASTIdentity} + } else { + right, err = p.parseProjectionRHS(bindingPowers[tFilter]) + if err != nil { + return ASTNode{}, err + } + } + + return ASTNode{ + nodeType: ASTFilterProjection, + children: []ASTNode{node, right, condition}, + }, nil +} + +func (p *Parser) parseDotRHS(bindingPower int) (ASTNode, error) { + lookahead := p.current() + if tokensOneOf([]tokType{tQuotedIdentifier, tUnquotedIdentifier, tStar}, lookahead) { + return p.parseExpression(bindingPower) + } else if lookahead == tLbracket { + if err := p.match(tLbracket); err != nil 
{ + return ASTNode{}, err + } + return p.parseMultiSelectList() + } else if lookahead == tLbrace { + if err := p.match(tLbrace); err != nil { + return ASTNode{}, err + } + return p.parseMultiSelectHash() + } + return ASTNode{}, p.syntaxError("Expected identifier, lbracket, or lbrace") +} + +func (p *Parser) parseProjectionRHS(bindingPower int) (ASTNode, error) { + current := p.current() + if bindingPowers[current] < 10 { + return ASTNode{nodeType: ASTIdentity}, nil + } else if current == tLbracket { + return p.parseExpression(bindingPower) + } else if current == tFilter { + return p.parseExpression(bindingPower) + } else if current == tDot { + err := p.match(tDot) + if err != nil { + return ASTNode{}, err + } + return p.parseDotRHS(bindingPower) + } else { + return ASTNode{}, p.syntaxError("Error") + } +} + +func (p *Parser) lookahead(number int) tokType { + return p.lookaheadToken(number).tokenType +} + +func (p *Parser) current() tokType { + return p.lookahead(0) +} + +func (p *Parser) lookaheadToken(number int) token { + return p.tokens[p.index+number] +} + +func (p *Parser) advance() { + p.index++ +} + +func tokensOneOf(elements []tokType, token tokType) bool { + for _, elem := range elements { + if elem == token { + return true + } + } + return false +} + +func (p *Parser) syntaxError(msg string) SyntaxError { + return SyntaxError{ + msg: msg, + Expression: p.expression, + Offset: p.lookaheadToken(0).position, + } +} + +// Create a SyntaxError based on the provided token. +// This differs from syntaxError() which creates a SyntaxError +// based on the current lookahead token. +func (p *Parser) syntaxErrorToken(msg string, t token) SyntaxError { + return SyntaxError{ + msg: msg, + Expression: p.expression, + Offset: t.position, + } +} diff --git a/vendor/github.com/jmespath/go-jmespath/toktype_string.go b/vendor/github.com/jmespath/go-jmespath/toktype_string.go new file mode 100644 index 00000000..dae79cbd --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/toktype_string.go @@ -0,0 +1,16 @@ +// generated by stringer -type=tokType; DO NOT EDIT + +package jmespath + +import "fmt" + +const _tokType_name = "tUnknowntStartDottFiltertFlattentLparentRparentLbrackettRbrackettLbracetRbracetOrtPipetNumbertUnquotedIdentifiertQuotedIdentifiertCommatColontLTtLTEtGTtGTEtEQtNEtJSONLiteraltStringLiteraltCurrenttExpreftAndtNottEOF" + +var _tokType_index = [...]uint8{0, 8, 13, 17, 24, 32, 39, 46, 55, 64, 71, 78, 81, 86, 93, 112, 129, 135, 141, 144, 148, 151, 155, 158, 161, 173, 187, 195, 202, 206, 210, 214} + +func (i tokType) String() string { + if i < 0 || i >= tokType(len(_tokType_index)-1) { + return fmt.Sprintf("tokType(%d)", i) + } + return _tokType_name[_tokType_index[i]:_tokType_index[i+1]] +} diff --git a/vendor/github.com/jmespath/go-jmespath/util.go b/vendor/github.com/jmespath/go-jmespath/util.go new file mode 100644 index 00000000..ddc1b7d7 --- /dev/null +++ b/vendor/github.com/jmespath/go-jmespath/util.go @@ -0,0 +1,185 @@ +package jmespath + +import ( + "errors" + "reflect" +) + +// IsFalse determines if an object is false based on the JMESPath spec. +// JMESPath defines false values to be any of: +// - An empty string array, or hash. +// - The boolean value false. +// - nil +func isFalse(value interface{}) bool { + switch v := value.(type) { + case bool: + return !v + case []interface{}: + return len(v) == 0 + case map[string]interface{}: + return len(v) == 0 + case string: + return len(v) == 0 + case nil: + return true + } + // Try the reflection cases before returning false. 
+ rv := reflect.ValueOf(value) + switch rv.Kind() { + case reflect.Struct: + // A struct type will never be false, even if + // all of its values are the zero type. + return false + case reflect.Slice, reflect.Map: + return rv.Len() == 0 + case reflect.Ptr: + if rv.IsNil() { + return true + } + // If it's a pointer type, we'll try to deref the pointer + // and evaluate the pointer value for isFalse. + element := rv.Elem() + return isFalse(element.Interface()) + } + return false +} + +// ObjsEqual is a generic object equality check. +// It will take two arbitrary objects and recursively determine +// if they are equal. +func objsEqual(left interface{}, right interface{}) bool { + return reflect.DeepEqual(left, right) +} + +// SliceParam refers to a single part of a slice. +// A slice consists of a start, a stop, and a step, similar to +// python slices. +type sliceParam struct { + N int + Specified bool +} + +// Slice supports [start:stop:step] style slicing that's supported in JMESPath. +func slice(slice []interface{}, parts []sliceParam) ([]interface{}, error) { + computed, err := computeSliceParams(len(slice), parts) + if err != nil { + return nil, err + } + start, stop, step := computed[0], computed[1], computed[2] + result := []interface{}{} + if step > 0 { + for i := start; i < stop; i += step { + result = append(result, slice[i]) + } + } else { + for i := start; i > stop; i += step { + result = append(result, slice[i]) + } + } + return result, nil +} + +func computeSliceParams(length int, parts []sliceParam) ([]int, error) { + var start, stop, step int + if !parts[2].Specified { + step = 1 + } else if parts[2].N == 0 { + return nil, errors.New("Invalid slice, step cannot be 0") + } else { + step = parts[2].N + } + var stepValueNegative bool + if step < 0 { + stepValueNegative = true + } else { + stepValueNegative = false + } + + if !parts[0].Specified { + if stepValueNegative { + start = length - 1 + } else { + start = 0 + } + } else { + start = capSlice(length, parts[0].N, step) + } + + if !parts[1].Specified { + if stepValueNegative { + stop = -1 + } else { + stop = length + } + } else { + stop = capSlice(length, parts[1].N, step) + } + return []int{start, stop, step}, nil +} + +func capSlice(length int, actual int, step int) int { + if actual < 0 { + actual += length + if actual < 0 { + if step < 0 { + actual = -1 + } else { + actual = 0 + } + } + } else if actual >= length { + if step < 0 { + actual = length - 1 + } else { + actual = length + } + } + return actual +} + +// ToArrayNum converts an empty interface type to a slice of float64. +// If any element in the array cannot be converted, then nil is returned +// along with a second value of false. +func toArrayNum(data interface{}) ([]float64, bool) { + // Is there a better way to do this with reflect? + if d, ok := data.([]interface{}); ok { + result := make([]float64, len(d)) + for i, el := range d { + item, ok := el.(float64) + if !ok { + return nil, false + } + result[i] = item + } + return result, true + } + return nil, false +} + +// ToArrayStr converts an empty interface type to a slice of strings. +// If any element in the array cannot be converted, then nil is returned +// along with a second value of false. If the input data could be entirely +// converted, then the converted data, along with a second value of true, +// will be returned. +func toArrayStr(data interface{}) ([]string, bool) { + // Is there a better way to do this with reflect? 
+ if d, ok := data.([]interface{}); ok { + result := make([]string, len(d)) + for i, el := range d { + item, ok := el.(string) + if !ok { + return nil, false + } + result[i] = item + } + return result, true + } + return nil, false +} + +func isSliceType(v interface{}) bool { + if v == nil { + return false + } + return reflect.TypeOf(v).Kind() == reflect.Slice +} diff --git a/vendor/github.com/koding/cache/LICENCE b/vendor/github.com/koding/cache/LICENCE new file mode 100644 index 00000000..afc9059b --- /dev/null +++ b/vendor/github.com/koding/cache/LICENCE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2016 Koding, Inc. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/koding/cache/cache.go b/vendor/github.com/koding/cache/cache.go new file mode 100644 index 00000000..73ca6920 --- /dev/null +++ b/vendor/github.com/koding/cache/cache.go @@ -0,0 +1,15 @@ +package cache + +// Cache is the contract for all of the cache backends that are supported by +// this package +type Cache interface { + // Get returns single item from the backend if the requested item is not + // found, returns NotFound err + Get(key string) (interface{}, error) + + // Set sets a single item to the backend + Set(key string, value interface{}) error + + // Delete deletes single item from backend + Delete(key string) error +} diff --git a/vendor/github.com/koding/cache/doc.go b/vendor/github.com/koding/cache/doc.go new file mode 100644 index 00000000..e174ffb1 --- /dev/null +++ b/vendor/github.com/koding/cache/doc.go @@ -0,0 +1,14 @@ +// Package cache provides basic caching mechanisms for Go(lang) projects. 
+// +// Currently supported caching algorithms: +// MemoryNoTS : provides a non-thread safe in-memory caching system +// Memory : provides a thread safe in-memory caching system, built on top of MemoryNoTS cache +// LRUNoTS : provides a non-thread safe, fixed size in-memory caching system, built on top of MemoryNoTS cache +// LRU : provides a thread safe, fixed size in-memory caching system, built on top of LRUNoTS cache +// MemoryTTL : provides a thread safe, expiring in-memory caching system, built on top of MemoryNoTS cache +// ShardedNoTS : provides a non-thread safe sharded cache system, built on top of a cache interface +// ShardedTTL : provides a thread safe, expiring in-memory sharded cache system, built on top of ShardedNoTS over MemoryNoTS +// LFUNoTS : provides a non-thread safe, fixed size in-memory caching system, built on top of MemoryNoTS cache +// LFU : provides a thread safe, fixed size in-memory caching system, built on top of LFUNoTS cache +// +package cache diff --git a/vendor/github.com/koding/cache/errors.go b/vendor/github.com/koding/cache/errors.go new file mode 100644 index 00000000..9ffe436c --- /dev/null +++ b/vendor/github.com/koding/cache/errors.go @@ -0,0 +1,8 @@ +package cache + +import "errors" + +var ( + // ErrNotFound holds exported `not found error` for not found items + ErrNotFound = errors.New("not found") +) diff --git a/vendor/github.com/koding/cache/lfu.go b/vendor/github.com/koding/cache/lfu.go new file mode 100644 index 00000000..4d2d830a --- /dev/null +++ b/vendor/github.com/koding/cache/lfu.go @@ -0,0 +1,49 @@ +package cache + +import "sync" + +// LFU holds the Least frequently used cache values +type LFU struct { + // Mutex is used for handling the concurrent + // read/write requests for cache + sync.Mutex + + // cache holds the all cache values + cache Cache +} + +// NewLFU creates a thread-safe LFU cache +func NewLFU(size int) Cache { + return &LRU{ + cache: NewLFUNoTS(size), + } +} + +// Get returns the value of a given key if it exists, every get item will be +// increased for every usage +func (l *LFU) Get(key string) (interface{}, error) { + l.Lock() + defer l.Unlock() + + return l.cache.Get(key) +} + +// Set sets or overrides the given key with the given value, every set item will +// be increased as usage. 
+// when the cache is full, least frequently used items will be evicted from +// linked list +func (l *LFU) Set(key string, val interface{}) error { + l.Lock() + defer l.Unlock() + + return l.cache.Set(key, val) +} + +// Delete deletes the given key-value pair from cache, this function doesnt +// return an error if item is not in the cache +func (l *LFU) Delete(key string) error { + l.Lock() + defer l.Unlock() + + return l.cache.Delete(key) +} diff --git a/vendor/github.com/koding/cache/lfu_nots.go b/vendor/github.com/koding/cache/lfu_nots.go new file mode 100644 index 00000000..3e957ba4 --- /dev/null +++ b/vendor/github.com/koding/cache/lfu_nots.go @@ -0,0 +1,226 @@ +package cache + +import "container/list" + +// LFUNoTS holds the cache struct +type LFUNoTS struct { + // list holds all items in a linked list + frequencyList *list.List + + // holds the all cache values + cache Cache + + // size holds the limit of the LFU cache + size int + + // currentSize holds the current item size in the list + // after each adding of item, currentSize will be increased + currentSize int +} + +type cacheItem struct { + // key of cache value + k string + + // value of cache value + v interface{} + + // holds the frequency elements + // it holds the element's usage as count + // if cacheItems is used 4 times (with set or get operations) + // the freqElement's frequency counter will be 4 + // it holds entry struct inside Value of list.Element + freqElement *list.Element +} + +// NewLFUNoTS creates a new LFU cache struct for further cache operations. Size +// is used for limiting the upper bound of the cache +func NewLFUNoTS(size int) Cache { + if size < 1 { + panic("invalid cache size") + } + + return &LFUNoTS{ + frequencyList: list.New(), + cache: NewMemoryNoTS(), + size: size, + currentSize: 0, + } +} + +// Get gets value of cache item +// then increments the usage of the item +func (l *LFUNoTS) Get(key string) (interface{}, error) { + res, err := l.cache.Get(key) + if err != nil { + return nil, err + } + + ci := res.(*cacheItem) + + // increase usage of cache item + l.incr(ci) + return ci.v, nil +} + +// Set sets a new key-value pair +// Set increments the key usage count too +// +// eg: +// cache.Set("test_key","2") +// cache.Set("test_key","1") +// if you try to set a value into same key +// its usage count will be increased +// and usage count of "test_key" will be 2 in this example +func (l *LFUNoTS) Set(key string, value interface{}) error { + return l.set(key, value) +} + +// Delete deletes the key and its dependencies +func (l *LFUNoTS) Delete(key string) error { + res, err := l.cache.Get(key) + if err != nil && err != ErrNotFound { + return err + } + + // we dont need to delete if already doesn't exist + if err == ErrNotFound { + return nil + } + + ci := res.(*cacheItem) + + l.remove(ci, ci.freqElement) + l.currentSize-- + return l.cache.Delete(key) +} + +// set sets a new key-value pair +func (l *LFUNoTS) set(key string, value interface{}) error { + res, err := l.cache.Get(key) + if err != nil && err != ErrNotFound { + return err + } + + if err == ErrNotFound { + //create new cache item + ci := newCacheItem(key, value) + + // if cache size si reached to max size + // then first remove lfu item from the list + if l.currentSize >= l.size { + // then evict some data from head of linked list. 
+ l.evict(l.frequencyList.Front()) + } + + l.cache.Set(key, ci) + l.incr(ci) + + } else { + //update existing one + val := res.(*cacheItem) + val.v = value + l.cache.Set(key, val) + l.incr(res.(*cacheItem)) + } + + return nil +} + +// entry holds the frequency node informations +type entry struct { + // freqCount holds the frequency number + freqCount int + + // itemCount holds the items how many exist in list + listEntry map[*cacheItem]struct{} +} + +// incr increments the usage of cache items +// incrementing will be used in 'Get' & 'Set' functions +// whenever these functions are used, usage count of any key +// will be increased +func (l *LFUNoTS) incr(ci *cacheItem) { + var nextValue int + var nextPosition *list.Element + // update existing one + if ci.freqElement != nil { + nextValue = ci.freqElement.Value.(*entry).freqCount + 1 + // replace the position of frequency element + nextPosition = ci.freqElement.Next() + } else { + // create new frequency element for cache item + // ci.freqElement is nil so next value of freq will be 1 + nextValue = 1 + // we created new element and its position will be head of linked list + nextPosition = l.frequencyList.Front() + l.currentSize++ + } + + // we need to check position first, otherwise it will panic if we try to fetch value of entry + if nextPosition == nil || nextPosition.Value.(*entry).freqCount != nextValue { + // create new entry node for linked list + entry := newEntry(nextValue) + if ci.freqElement == nil { + nextPosition = l.frequencyList.PushFront(entry) + } else { + nextPosition = l.frequencyList.InsertAfter(entry, ci.freqElement) + } + } + + nextPosition.Value.(*entry).listEntry[ci] = struct{}{} + ci.freqElement = nextPosition + + // we have moved the cache item to the next position, + // then we need to remove old position of the cacheItem from the list + // then we deleted previous position of cacheItem + if ci.freqElement.Prev() != nil { + l.remove(ci, ci.freqElement.Prev()) + } +} + +// remove removes the cache item from the cache list +// after deleting key from the list, if its linked list has no any item no longer +// then that linked list elemnet will be removed from the list too +func (l *LFUNoTS) remove(ci *cacheItem, position *list.Element) { + entry := position.Value.(*entry).listEntry + delete(entry, ci) + if len(entry) == 0 { + l.frequencyList.Remove(position) + } +} + +// evict deletes the element from list with given linked list element +func (l *LFUNoTS) evict(e *list.Element) error { + // ne need to return err if list element is already nil + if e == nil { + return nil + } + + // remove the first item of the linked list + for entry := range e.Value.(*entry).listEntry { + l.cache.Delete(entry.k) + l.remove(entry, e) + l.currentSize-- + break + } + + return nil +} + +// newEntry creates a new entry with frequency count +func newEntry(freqCount int) *entry { + return &entry{ + freqCount: freqCount, + listEntry: make(map[*cacheItem]struct{}), + } +} + +// newCacheItem creates a new cache item with key and value +func newCacheItem(key string, value interface{}) *cacheItem { + return &cacheItem{ + k: key, + v: value, + } + +} diff --git a/vendor/github.com/koding/cache/lru.go b/vendor/github.com/koding/cache/lru.go new file mode 100644 index 00000000..2eefe5c4 --- /dev/null +++ b/vendor/github.com/koding/cache/lru.go @@ -0,0 +1,51 @@ +package cache + +import "sync" + +// LRU Discards the least recently used items first. This algorithm +// requires keeping track of what was used when. 
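Before the LRU wrapper declared next, here is a hedged sketch of the LFU behaviour implemented above: every Get or Set bumps an item's frequency node, and inserting into a full cache evicts from the low-frequency end of the frequency list. Capacity and keys are arbitrary illustration values, and the import path assumes the vendored package in this diff:

```go
package main

import (
	"fmt"

	"github.com/koding/cache"
)

func main() {
	lfu := cache.NewLFU(2) // capacity of two items

	lfu.Set("a", 1)
	lfu.Set("b", 2)

	// Reading "a" raises its usage count; "b" stays at the lowest
	// frequency and becomes the eviction candidate.
	lfu.Get("a")

	// A third insert overflows the cache, so the least frequently
	// used entry ("b") should be evicted.
	lfu.Set("c", 3)

	if _, err := lfu.Get("b"); err == cache.ErrNotFound {
		fmt.Println(`"b" was evicted`)
	}
}
```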
+type LRU struct { + // Mutex is used for handling the concurrent + // read/write requests for cache + sync.Mutex + + // cache holds the all cache values + cache Cache +} + +// NewLRU creates a thread-safe LRU cache +func NewLRU(size int) Cache { + return &LRU{ + cache: NewLRUNoTS(size), + } +} + +// Get returns the value of a given key if it exists, every get item will be +// moved to the head of the linked list for keeping track of least recent used +// item +func (l *LRU) Get(key string) (interface{}, error) { + l.Lock() + defer l.Unlock() + + return l.cache.Get(key) +} + +// Set sets or overrides the given key with the given value, every set item will +// be moved or prepended to the head of the linked list for keeping track of +// least recent used item. When the cache is full, last item of the linked list +// will be evicted from the cache +func (l *LRU) Set(key string, val interface{}) error { + l.Lock() + defer l.Unlock() + + return l.cache.Set(key, val) +} + +// Delete deletes the given key-value pair from cache, this function doesnt +// return an error if item is not in the cache +func (l *LRU) Delete(key string) error { + l.Lock() + defer l.Unlock() + + return l.cache.Delete(key) +} diff --git a/vendor/github.com/koding/cache/lru_nots.go b/vendor/github.com/koding/cache/lru_nots.go new file mode 100644 index 00000000..a6eebcd3 --- /dev/null +++ b/vendor/github.com/koding/cache/lru_nots.go @@ -0,0 +1,122 @@ +package cache + +import ( + "container/list" +) + +// LRUNoTS Discards the least recently used items first. This algorithm +// requires keeping track of what was used when. +type LRUNoTS struct { + // list holds all items in a linked list, for finding the `tail` of the list + list *list.List + + // cache holds the all cache values + cache Cache + + // size holds the limit of the LRU cache + size int +} + +// kv is an helper struct for keeping track of the key for the list item. Only +// place where we need the key of a value is while removing the last item from +// linked list, for other cases, all operations alread have the key +type kv struct { + k string + v interface{} +} + +// NewLRUNoTS creates a new LRU cache struct for further cache operations. Size +// is used for limiting the upper bound of the cache +func NewLRUNoTS(size int) Cache { + if size < 1 { + panic("invalid cache size") + } + + return &LRUNoTS{ + list: list.New(), + cache: NewMemoryNoTS(), + size: size, + } +} + +// Get returns the value of a given key if it exists, every get item will be +// moved to the head of the linked list for keeping track of least recent used +// item +func (l *LRUNoTS) Get(key string) (interface{}, error) { + res, err := l.cache.Get(key) + if err != nil { + return nil, err + } + + elem := res.(*list.Element) + // move found item to the head + l.list.MoveToFront(elem) + + return elem.Value.(*kv).v, nil +} + +// Set sets or overrides the given key with the given value, every set item will +// be moved or prepended to the head of the linked list for keeping track of +// least recent used item. 
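A short, illustrative sketch of the recency-based eviction these LRU types implement (capacity and keys are arbitrary; the import path assumes the vendored package in this diff):

```go
package main

import (
	"fmt"

	"github.com/koding/cache"
)

func main() {
	lru := cache.NewLRU(2) // holds at most two entries

	lru.Set("a", 1)
	lru.Set("b", 2)

	// Touching "a" moves it to the head of the internal list,
	// leaving "b" as the least recently used entry.
	lru.Get("a")

	// The third insert overflows the cache and evicts the tail ("b").
	lru.Set("c", 3)

	if _, err := lru.Get("b"); err == cache.ErrNotFound {
		fmt.Println(`"b" was evicted`)
	}
}
```

Note that LRU and LFU are thin wrappers that only add a mutex around their NoTS counterparts; the eviction logic itself lives in the NoTS implementations.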
When the cache is full, last item of the linked list +// will be evicted from the cache +func (l *LRUNoTS) Set(key string, val interface{}) error { + // try to get item + res, err := l.cache.Get(key) + if err != nil && err != ErrNotFound { + return err + } + + var elem *list.Element + + // if elem is not in the cache, push it to front of the list + if err == ErrNotFound { + elem = l.list.PushFront(&kv{k: key, v: val}) + } else { + // if elem is in the cache, update the data and move it the front + elem = res.(*list.Element) + + // update the data + elem.Value.(*kv).v = val + + // item already exists, so move it to the front of the list + l.list.MoveToFront(elem) + } + + // in any case, set the item to the cache + err = l.cache.Set(key, elem) + if err != nil { + return err + } + + // if the cache is full, evict last entry + if l.list.Len() > l.size { + // remove last element from cache + return l.removeElem(l.list.Back()) + } + + return nil +} + +// Delete deletes the given key-value pair from cache, this function doesnt +// return an error if item is not in the cache +func (l *LRUNoTS) Delete(key string) error { + res, err := l.cache.Get(key) + if err != nil && err != ErrNotFound { + return err + } + + // item already deleted + if err == ErrNotFound { + // surpress not found errors + return nil + } + + elem := res.(*list.Element) + + return l.removeElem(elem) +} + +func (l *LRUNoTS) removeElem(e *list.Element) error { + l.list.Remove(e) + return l.cache.Delete(e.Value.(*kv).k) +} diff --git a/vendor/github.com/koding/cache/memory.go b/vendor/github.com/koding/cache/memory.go new file mode 100644 index 00000000..3a6eb867 --- /dev/null +++ b/vendor/github.com/koding/cache/memory.go @@ -0,0 +1,46 @@ +package cache + +import "sync" + +// Memory provides an inmemory caching mechanism +type Memory struct { + // Mutex is used for handling the concurrent + // read/write requests for cache + sync.Mutex + + // cache holds the cache data + cache Cache +} + +// NewMemory creates an inmemory cache system +// Which everytime will return the true value about a cache hit +func NewMemory() Cache { + return &Memory{ + cache: NewMemoryNoTS(), + } +} + +// Get returns the value of a given key if it exists +func (r *Memory) Get(key string) (interface{}, error) { + r.Lock() + defer r.Unlock() + + return r.cache.Get(key) +} + +// Set sets a value to the cache or overrides existing one with the given value +func (r *Memory) Set(key string, value interface{}) error { + r.Lock() + defer r.Unlock() + + return r.cache.Set(key, value) +} + +// Delete deletes the given key-value pair from cache, this function doesnt +// return an error if item is not in the cache +func (r *Memory) Delete(key string) error { + r.Lock() + defer r.Unlock() + + return r.cache.Delete(key) +} diff --git a/vendor/github.com/koding/cache/memory_nots.go b/vendor/github.com/koding/cache/memory_nots.go new file mode 100644 index 00000000..dff8c725 --- /dev/null +++ b/vendor/github.com/koding/cache/memory_nots.go @@ -0,0 +1,45 @@ +package cache + +// MemoryNoTS provides a non-thread safe caching mechanism +type MemoryNoTS struct { + // items holds the cache data + items map[string]interface{} +} + +// NewMemoryNoTS creates MemoryNoTS struct +func NewMemoryNoTS() *MemoryNoTS { + return &MemoryNoTS{ + items: map[string]interface{}{}, + } +} + +// NewMemNoTSCache is a helper method to return a Cache interface, so callers +// don't have to typecast +func NewMemNoTSCache() Cache { + return NewMemoryNoTS() +} + +// Get returns a value of a given key if 
it exists +// and valid for the time being +func (r *MemoryNoTS) Get(key string) (interface{}, error) { + value, ok := r.items[key] + if !ok { + return nil, ErrNotFound + } + + return value, nil +} + +// Set will persist a value to the cache or +// override existing one with the new one +func (r *MemoryNoTS) Set(key string, value interface{}) error { + r.items[key] = value + return nil +} + +// Delete deletes a given key, it doesnt return error if the item is not in the +// system +func (r *MemoryNoTS) Delete(key string) error { + delete(r.items, key) + return nil +} diff --git a/vendor/github.com/koding/cache/memory_ttl.go b/vendor/github.com/koding/cache/memory_ttl.go new file mode 100644 index 00000000..756956d4 --- /dev/null +++ b/vendor/github.com/koding/cache/memory_ttl.go @@ -0,0 +1,160 @@ +package cache + +import ( + "sync" + "time" +) + +var zeroTTL = time.Duration(0) + +// MemoryTTL holds the required variables to compose an in memory cache system +// which also provides expiring key mechanism +type MemoryTTL struct { + // Mutex is used for handling the concurrent + // read/write requests for cache + sync.RWMutex + + // cache holds the cache data + cache *MemoryNoTS + + // setAts holds the time that related item's set at + setAts map[string]time.Time + + // ttl is a duration for a cache key to expire + ttl time.Duration + + // gcTicker controls gc intervals + gcTicker *time.Ticker + + // done controls sweeping goroutine lifetime + done chan struct{} +} + +// NewMemoryWithTTL creates an inmemory cache system +// Which everytime will return the true values about a cache hit +// and never will leak memory +// ttl is used for expiration of a key from cache +func NewMemoryWithTTL(ttl time.Duration) *MemoryTTL { + return &MemoryTTL{ + cache: NewMemoryNoTS(), + setAts: map[string]time.Time{}, + ttl: ttl, + } +} + +// StartGC starts the garbage collection process in a go routine +func (r *MemoryTTL) StartGC(gcInterval time.Duration) { + if gcInterval <= 0 { + return + } + + ticker := time.NewTicker(gcInterval) + done := make(chan struct{}) + + r.Lock() + r.gcTicker = ticker + r.done = done + r.Unlock() + + go func() { + for { + select { + case <-ticker.C: + now := time.Now() + + r.Lock() + for key := range r.cache.items { + if !r.isValidTime(key, now) { + r.delete(key) + } + } + r.Unlock() + case <-done: + return + } + } + }() +} + +// StopGC stops sweeping goroutine. 
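A hedged usage sketch for the TTL cache above: entries expire relative to their set time, and the optional background sweeper started with StartGC must be stopped by the caller (the StopGC implementation follows below). Durations are arbitrary illustration values:

```go
package main

import (
	"fmt"
	"time"

	"github.com/koding/cache"
)

func main() {
	// Entries expire 100ms after they were set.
	c := cache.NewMemoryWithTTL(100 * time.Millisecond)

	// Optionally sweep expired keys in the background; the caller
	// is responsible for stopping the sweeper.
	c.StartGC(50 * time.Millisecond)
	defer c.StopGC()

	c.Set("token", "abc123")

	if v, err := c.Get("token"); err == nil {
		fmt.Println("fresh value:", v)
	}

	time.Sleep(150 * time.Millisecond)

	// Expired entries are reported as misses even without the sweeper,
	// because Get checks validity on every lookup.
	if _, err := c.Get("token"); err == cache.ErrNotFound {
		fmt.Println("token expired")
	}
}
```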
+func (r *MemoryTTL) StopGC() { + if r.gcTicker != nil { + r.Lock() + r.gcTicker.Stop() + r.gcTicker = nil + close(r.done) + r.done = nil + r.Unlock() + } +} + +// Get returns a value of a given key if it exists +// and valid for the time being +func (r *MemoryTTL) Get(key string) (interface{}, error) { + r.RLock() + + for !r.isValid(key) { + r.RUnlock() + // Need write lock to delete key, so need to unlock, relock and recheck + r.Lock() + if !r.isValid(key) { + r.delete(key) + r.Unlock() + return nil, ErrNotFound + } + r.Unlock() + // Could become invalid again in this window + r.RLock() + } + + defer r.RUnlock() + + value, err := r.cache.Get(key) + if err != nil { + return nil, err + } + + return value, nil +} + +// Set will persist a value to the cache or +// override existing one with the new one +func (r *MemoryTTL) Set(key string, value interface{}) error { + r.Lock() + defer r.Unlock() + + r.cache.Set(key, value) + r.setAts[key] = time.Now() + return nil +} + +// Delete deletes a given key if exists +func (r *MemoryTTL) Delete(key string) error { + r.Lock() + defer r.Unlock() + + r.delete(key) + return nil +} + +func (r *MemoryTTL) delete(key string) { + r.cache.Delete(key) + delete(r.setAts, key) +} + +func (r *MemoryTTL) isValid(key string) bool { + return r.isValidTime(key, time.Now()) +} + +func (r *MemoryTTL) isValidTime(key string, t time.Time) bool { + setAt, ok := r.setAts[key] + if !ok { + return false + } + + if r.ttl == zeroTTL { + return true + } + + return setAt.Add(r.ttl).After(t) +} diff --git a/vendor/github.com/koding/cache/mongo_cache.go b/vendor/github.com/koding/cache/mongo_cache.go new file mode 100644 index 00000000..9610697f --- /dev/null +++ b/vendor/github.com/koding/cache/mongo_cache.go @@ -0,0 +1,227 @@ +package cache + +import ( + "fmt" + "sync" + "time" + + mgo "gopkg.in/mgo.v2" +) + +const ( + defaultExpireDuration = time.Minute + defaultCollectionName = "jCache" + defaultGCInterval = time.Minute + indexExpireAt = "expireAt" +) + +// MongoCache holds the cache values that will be stored in mongoDB +type MongoCache struct { + // mongeSession specifies the mongoDB connection + mongeSession *mgo.Session + + // CollectionName speficies the optional collection name for mongoDB + // if CollectionName is not set, then default value will be set + CollectionName string + + // ttl is a duration for a cache key to expire + TTL time.Duration + + // GCInterval specifies the time duration for garbage collector time interval + GCInterval time.Duration + + // GCStart starts the garbage collector and deletes the + // expired keys from mongo with given time interval + GCStart bool + + // gcTicker controls gc intervals + gcTicker *time.Ticker + + // done controls sweeping goroutine lifetime + done chan struct{} + + // Mutex is used for handling the concurrent + // read/write requests for cache + sync.RWMutex +} + +// Option sets the options specified. +type Option func(*MongoCache) + +// NewMongoCacheWithTTL creates a caching layer backed by mongo. TTL's are +// managed either by a background cleaner or document is removed on the Get +// operation. Mongo TTL indexes are not utilized since there can be multiple +// systems using the same collection with different TTL values. +// +// The responsibility of stopping the GC process belongs to the user. +// +// Session is not closed while stopping the GC. 
+// +// This self-referential function satisfy you to avoid passing +// nil value to the function as parameter +// e.g (usage) : +// configure with defaults, just call; +// NewMongoCacheWithTTL(session) +// +// configure ttl duration with; +// NewMongoCacheWithTTL(session, func(m *MongoCache) { +// m.TTL = 2 * time.Minute +// }) +// or +// NewMongoCacheWithTTL(session, SetTTL(time.Minute * 2)) +// +// configure collection name with; +// NewMongoCacheWithTTL(session, func(m *MongoCache) { +// m.CollectionName = "MongoCacheCollectionName" +// }) +func NewMongoCacheWithTTL(session *mgo.Session, configs ...Option) *MongoCache { + if session == nil { + panic("session must be set") + } + + mc := &MongoCache{ + mongeSession: session, + TTL: defaultExpireDuration, + CollectionName: defaultCollectionName, + GCInterval: defaultGCInterval, + GCStart: false, + } + + for _, configFunc := range configs { + configFunc(mc) + } + + if mc.GCStart { + mc.StartGC(mc.GCInterval) + } + + return mc +} + +// MustEnsureIndexExpireAt ensures the expireAt index +// usage: +// NewMongoCacheWithTTL(mongoSession, MustEnsureIndexExpireAt()) +func MustEnsureIndexExpireAt() Option { + return func(m *MongoCache) { + if err := m.EnsureIndex(); err != nil { + panic(fmt.Sprintf("index must ensure %q", err)) + } + } +} + +// StartGC enables the garbage collector in MongoCache struct +// usage: +// NewMongoCacheWithTTL(mongoSession, StartGC()) +func StartGC() Option { + return func(m *MongoCache) { + m.GCStart = true + } +} + +// SetTTL sets the ttl duration in MongoCache as option +// usage: +// NewMongoCacheWithTTL(mongoSession, SetTTL(time*Minute)) +func SetTTL(duration time.Duration) Option { + return func(m *MongoCache) { + m.TTL = duration + } +} + +// SetGCInterval sets the garbage collector interval in MongoCache struct as option +// usage: +// NewMongoCacheWithTTL(mongoSession, SetGCInterval(time*Minute)) +func SetGCInterval(duration time.Duration) Option { + return func(m *MongoCache) { + m.GCInterval = duration + } +} + +// SetCollectionName sets the collection name for mongoDB in MongoCache struct as option +// usage: +// NewMongoCacheWithTTL(mongoSession, SetCollectionName("mongoCollName")) +func SetCollectionName(collName string) Option { + return func(m *MongoCache) { + m.CollectionName = collName + } +} + +// Get returns a value of a given key if it exists +func (m *MongoCache) Get(key string) (interface{}, error) { + data, err := m.get(key) + if err == mgo.ErrNotFound { + return nil, ErrNotFound + } + + if err != nil { + return nil, err + } + + return data.Value, nil +} + +// Set will persist a value to the cache or override existing one with the new +// one +func (m *MongoCache) Set(key string, value interface{}) error { + return m.set(key, m.TTL, value) +} + +// SetEx will persist a value to the cache or override existing one with the new +// one with ttl duration +func (m *MongoCache) SetEx(key string, duration time.Duration, value interface{}) error { + return m.set(key, duration, value) +} + +// Delete deletes a given key if exists +func (m *MongoCache) Delete(key string) error { + return m.delete(key) +} + +// EnsureIndex ensures the index with expireAt key +func (m *MongoCache) EnsureIndex() error { + query := func(c *mgo.Collection) error { + return c.EnsureIndexKey(indexExpireAt) + } + + return m.run(m.CollectionName, query) +} + +// StartGC starts the garbage collector with given time interval The +// expired data will be checked & deleted with given interval time +func (m *MongoCache) 
StartGC(gcInterval time.Duration) { + if gcInterval <= 0 { + return + } + + ticker := time.NewTicker(gcInterval) + done := make(chan struct{}) + + m.Lock() + m.gcTicker = ticker + m.done = done + m.Unlock() + + go func() { + for { + select { + case <-ticker.C: + m.Lock() + m.deleteExpiredKeys() + m.Unlock() + case <-done: + return + } + } + }() +} + +// StopGC stops sweeping goroutine. +func (m *MongoCache) StopGC() { + if m.gcTicker != nil { + m.Lock() + m.gcTicker.Stop() + m.gcTicker = nil + close(m.done) + m.done = nil + m.Unlock() + } +} diff --git a/vendor/github.com/koding/cache/mongo_model.go b/vendor/github.com/koding/cache/mongo_model.go new file mode 100644 index 00000000..9945be69 --- /dev/null +++ b/vendor/github.com/koding/cache/mongo_model.go @@ -0,0 +1,81 @@ +package cache + +import ( + "time" + + mgo "gopkg.in/mgo.v2" + "gopkg.in/mgo.v2/bson" +) + +// Document holds the key-value pair for mongo cache +type Document struct { + Key string `bson:"_id" json:"_id"` + Value interface{} `bson:"value" json:"value"` + ExpireAt time.Time `bson:"expireAt" json:"expireAt"` +} + +// getKey fetches the key with its key +func (m *MongoCache) get(key string) (*Document, error) { + keyValue := new(Document) + + query := func(c *mgo.Collection) error { + return c.Find(bson.M{ + "_id": key, + "expireAt": bson.M{ + "$gt": time.Now().UTC(), + }}).One(&keyValue) + } + + err := m.run(m.CollectionName, query) + if err != nil { + return nil, err + } + + return keyValue, nil +} + +func (m *MongoCache) set(key string, duration time.Duration, value interface{}) error { + update := bson.M{ + "_id": key, + "value": value, + "expireAt": time.Now().Add(duration), + } + + query := func(c *mgo.Collection) error { + _, err := c.UpsertId(key, update) + return err + } + + return m.run(m.CollectionName, query) +} + +// deleteKey removes the key-value from mongoDB +func (m *MongoCache) delete(key string) error { + query := func(c *mgo.Collection) error { + err := c.RemoveId(key) + return err + } + + return m.run(m.CollectionName, query) +} + +func (m *MongoCache) deleteExpiredKeys() error { + var selector = bson.M{"expireAt": bson.M{ + "$lte": time.Now().UTC(), + }} + + query := func(c *mgo.Collection) error { + _, err := c.RemoveAll(selector) + return err + } + + return m.run(m.CollectionName, query) +} + +func (m *MongoCache) run(collection string, s func(*mgo.Collection) error) error { + session := m.mongeSession.Copy() + defer session.Close() + + c := session.DB("").C(collection) + return s(c) +} diff --git a/vendor/github.com/koding/cache/sharded_cache.go b/vendor/github.com/koding/cache/sharded_cache.go new file mode 100644 index 00000000..414c81ea --- /dev/null +++ b/vendor/github.com/koding/cache/sharded_cache.go @@ -0,0 +1,18 @@ +package cache + +// ShardedCache is the contract for all of the sharded cache backends that are supported by +// this package +type ShardedCache interface { + // Get returns single item from the backend if the requested item is not + // found, returns NotFound err + Get(shardID, key string) (interface{}, error) + + // Set sets a single item to the backend + Set(shardID, key string, value interface{}) error + + // Delete deletes single item from backend + Delete(shardID, key string) error + + // Deletes all items in that shard + DeleteShard(shardID string) error +} diff --git a/vendor/github.com/koding/cache/sharded_nots.go b/vendor/github.com/koding/cache/sharded_nots.go new file mode 100644 index 00000000..0675edad --- /dev/null +++ 
b/vendor/github.com/koding/cache/sharded_nots.go @@ -0,0 +1,67 @@ +package cache + +// ShardedNoTS ; the concept behind this storage is that each cache entry is +// associated with a tenantID and this enables fast purging for just that +// tenantID +type ShardedNoTS struct { + cache map[string]Cache + itemCount map[string]int + constructor func() Cache +} + +// NewShardedNoTS inits ShardedNoTS struct +func NewShardedNoTS(c func() Cache) *ShardedNoTS { + return &ShardedNoTS{ + constructor: c, + cache: make(map[string]Cache), + itemCount: make(map[string]int), + } +} + +// Get returns a value of a given key if it exists +// and valid for the time being +func (l *ShardedNoTS) Get(tenantID, key string) (interface{}, error) { + cache, ok := l.cache[tenantID] + if !ok { + return nil, ErrNotFound + } + + return cache.Get(key) +} + +// Set will persist a value to the cache or override existing one with the new +// one +func (l *ShardedNoTS) Set(tenantID, key string, val interface{}) error { + _, ok := l.cache[tenantID] + if !ok { + l.cache[tenantID] = l.constructor() + l.itemCount[tenantID] = 0 + } + + l.itemCount[tenantID]++ + return l.cache[tenantID].Set(key, val) +} + +// Delete deletes a given key +func (l *ShardedNoTS) Delete(tenantID, key string) error { + _, ok := l.cache[tenantID] + if !ok { + return nil + } + + l.itemCount[tenantID]-- + + if l.itemCount[tenantID] == 0 { + return l.DeleteShard(tenantID) + } + + return l.cache[tenantID].Delete(key) +} + +// DeleteShard deletes the keys inside from maps of cache & itemCount +func (l *ShardedNoTS) DeleteShard(tenantID string) error { + delete(l.cache, tenantID) + delete(l.itemCount, tenantID) + + return nil +} diff --git a/vendor/github.com/koding/cache/sharded_ttl.go b/vendor/github.com/koding/cache/sharded_ttl.go new file mode 100644 index 00000000..8ccfbbb1 --- /dev/null +++ b/vendor/github.com/koding/cache/sharded_ttl.go @@ -0,0 +1,148 @@ +package cache + +import ( + "sync" + "time" +) + +// ShardedTTL holds the required variables to compose an in memory sharded cache system +// which also provides expiring key mechanism +type ShardedTTL struct { + // Mutex is used for handling the concurrent + // read/write requests for cache + sync.Mutex + + // cache holds the cache data + cache ShardedCache + + // setAts holds the time that related item's set at, indexed by tenantID + setAts map[string]map[string]time.Time + + // ttl is a duration for a cache key to expire + ttl time.Duration + + // gcInterval is a duration for garbage collection + gcInterval time.Duration +} + +// NewShardedCacheWithTTL creates a sharded cache system with TTL based on specified Cache constructor +// Which everytime will return the true values about a cache hit +// and never will leak memory +// ttl is used for expiration of a key from cache +func NewShardedCacheWithTTL(ttl time.Duration, f func() Cache) *ShardedTTL { + return &ShardedTTL{ + cache: NewShardedNoTS(f), + setAts: map[string]map[string]time.Time{}, + ttl: ttl, + } +} + +// NewShardedWithTTL creates an in-memory sharded cache system +// ttl is used for expiration of a key from cache +func NewShardedWithTTL(ttl time.Duration) *ShardedTTL { + return NewShardedCacheWithTTL(ttl, NewMemNoTSCache) +} + +// StartGC starts the garbage collection process in a go routine +func (r *ShardedTTL) StartGC(gcInterval time.Duration) { + r.gcInterval = gcInterval + go func() { + for _ = range time.Tick(gcInterval) { + r.Lock() + for tenantID := range r.setAts { + for key := range r.setAts[tenantID] { + if 
!r.isValid(tenantID, key) { + r.delete(tenantID, key) + } + } + } + r.Unlock() + } + }() +} + +// Get returns a value of a given key if it exists +// and valid for the time being +func (r *ShardedTTL) Get(tenantID, key string) (interface{}, error) { + r.Lock() + defer r.Unlock() + + if !r.isValid(tenantID, key) { + r.delete(tenantID, key) + return nil, ErrNotFound + } + + value, err := r.cache.Get(tenantID, key) + if err != nil { + return nil, err + } + + return value, nil +} + +// Set will persist a value to the cache or +// override existing one with the new one +func (r *ShardedTTL) Set(tenantID, key string, value interface{}) error { + r.Lock() + defer r.Unlock() + + r.cache.Set(tenantID, key, value) + _, ok := r.setAts[tenantID] + if !ok { + r.setAts[tenantID] = make(map[string]time.Time) + } + r.setAts[tenantID][key] = time.Now() + return nil +} + +// Delete deletes a given key if exists +func (r *ShardedTTL) Delete(tenantID, key string) error { + r.Lock() + defer r.Unlock() + + r.delete(tenantID, key) + return nil +} + +func (r *ShardedTTL) delete(tenantID, key string) { + _, ok := r.setAts[tenantID] + if !ok { + return + } + r.cache.Delete(tenantID, key) + delete(r.setAts[tenantID], key) + if len(r.setAts[tenantID]) == 0 { + delete(r.setAts, tenantID) + } +} + +func (r *ShardedTTL) isValid(tenantID, key string) bool { + + _, ok := r.setAts[tenantID] + if !ok { + return false + } + setAt, ok := r.setAts[tenantID][key] + if !ok { + return false + } + if r.ttl == zeroTTL { + return true + } + + return setAt.Add(r.ttl).After(time.Now()) +} + +// DeleteShard deletes with given tenantID without key +func (r *ShardedTTL) DeleteShard(tenantID string) error { + r.Lock() + defer r.Unlock() + + _, ok := r.setAts[tenantID] + if ok { + for key := range r.setAts[tenantID] { + r.delete(tenantID, key) + } + } + return nil +} diff --git a/vendor/github.com/matttproud/golang_protobuf_extensions/LICENSE b/vendor/github.com/matttproud/golang_protobuf_extensions/LICENSE new file mode 100644 index 00000000..8dada3ed --- /dev/null +++ b/vendor/github.com/matttproud/golang_protobuf_extensions/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/matttproud/golang_protobuf_extensions/NOTICE b/vendor/github.com/matttproud/golang_protobuf_extensions/NOTICE new file mode 100644 index 00000000..5d8cb5b7 --- /dev/null +++ b/vendor/github.com/matttproud/golang_protobuf_extensions/NOTICE @@ -0,0 +1 @@ +Copyright 2012 Matt T. Proud (matt.proud@gmail.com) diff --git a/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/decode.go b/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/decode.go new file mode 100644 index 00000000..258c0636 --- /dev/null +++ b/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/decode.go @@ -0,0 +1,75 @@ +// Copyright 2013 Matt T. Proud +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pbutil + +import ( + "encoding/binary" + "errors" + "io" + + "github.com/golang/protobuf/proto" +) + +var errInvalidVarint = errors.New("invalid varint32 encountered") + +// ReadDelimited decodes a message from the provided length-delimited stream, +// where the length is encoded as 32-bit varint prefix to the message body. +// It returns the total number of bytes read and any applicable error. This is +// roughly equivalent to the companion Java API's +// MessageLite#parseDelimitedFrom. As per the reader contract, this function +// calls r.Read repeatedly as required until exactly one message including its +// prefix is read and decoded (or an error has occurred). The function never +// reads more bytes from the stream than required. The function never returns +// an error if a message has been read and decoded correctly, even if the end +// of the stream has been reached in doing so. In that case, any subsequent +// calls return (0, io.EOF). +func ReadDelimited(r io.Reader, m proto.Message) (n int, err error) { + // Per AbstractParser#parsePartialDelimitedFrom with + // CodedInputStream#readRawVarint32. + var headerBuf [binary.MaxVarintLen32]byte + var bytesRead, varIntBytes int + var messageLength uint64 + for varIntBytes == 0 { // i.e. no varint has been decoded yet. + if bytesRead >= len(headerBuf) { + return bytesRead, errInvalidVarint + } + // We have to read byte by byte here to avoid reading more bytes + // than required. Each read byte is appended to what we have + // read before. + newBytesRead, err := r.Read(headerBuf[bytesRead : bytesRead+1]) + if newBytesRead == 0 { + if err != nil { + return bytesRead, err + } + // A Reader should not return (0, nil), but if it does, + // it should be treated as no-op (according to the + // Reader contract). So let's go on... + continue + } + bytesRead += newBytesRead + // Now present everything read so far to the varint decoder and + // see if a varint can be decoded already. + messageLength, varIntBytes = proto.DecodeVarint(headerBuf[:bytesRead]) + } + + messageBuf := make([]byte, messageLength) + newBytesRead, err := io.ReadFull(r, messageBuf) + bytesRead += newBytesRead + if err != nil { + return bytesRead, err + } + + return bytesRead, proto.Unmarshal(messageBuf, m) +} diff --git a/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/doc.go b/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/doc.go new file mode 100644 index 00000000..c318385c --- /dev/null +++ b/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/doc.go @@ -0,0 +1,16 @@ +// Copyright 2013 Matt T. Proud +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +// Package pbutil provides record length-delimited Protocol Buffer streaming. +package pbutil diff --git a/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/encode.go b/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/encode.go new file mode 100644 index 00000000..8fb59ad2 --- /dev/null +++ b/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil/encode.go @@ -0,0 +1,46 @@ +// Copyright 2013 Matt T. Proud +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pbutil + +import ( + "encoding/binary" + "io" + + "github.com/golang/protobuf/proto" +) + +// WriteDelimited encodes and dumps a message to the provided writer prefixed +// with a 32-bit varint indicating the length of the encoded message, producing +// a length-delimited record stream, which can be used to chain together +// encoded messages of the same type together in a file. It returns the total +// number of bytes written and any applicable error. This is roughly +// equivalent to the companion Java API's MessageLite#writeDelimitedTo. +func WriteDelimited(w io.Writer, m proto.Message) (n int, err error) { + buffer, err := proto.Marshal(m) + if err != nil { + return 0, err + } + + var buf [binary.MaxVarintLen32]byte + encodedLength := binary.PutUvarint(buf[:], uint64(len(buffer))) + + sync, err := w.Write(buf[:encodedLength]) + if err != nil { + return sync, err + } + + n, err = w.Write(buffer) + return n + sync, err +} diff --git a/vendor/github.com/pmezard/go-difflib/LICENSE b/vendor/github.com/pmezard/go-difflib/LICENSE new file mode 100644 index 00000000..c67dad61 --- /dev/null +++ b/vendor/github.com/pmezard/go-difflib/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2013, Patrick Mezard +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright +notice, this list of conditions and the following disclaimer in the +documentation and/or other materials provided with the distribution. + The names of its contributors may not be used to endorse or promote +products derived from this software without specific prior written +permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED +TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. 
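Stepping back to the pbutil package vendored above: WriteDelimited and ReadDelimited are designed to round-trip varint length-prefixed records over a single stream. A minimal sketch follows; `wrappers.StringValue` from `github.com/golang/protobuf/ptypes/wrappers` is used purely as a convenient stand-in for any generated `proto.Message` and is an assumption, not something this diff introduces:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/golang/protobuf/ptypes/wrappers"
	"github.com/matttproud/golang_protobuf_extensions/pbutil"
)

func main() {
	var buf bytes.Buffer

	// Write two length-delimited records into the same stream.
	for _, s := range []string{"first", "second"} {
		if _, err := pbutil.WriteDelimited(&buf, &wrappers.StringValue{Value: s}); err != nil {
			panic(err)
		}
	}

	// Read them back; each call consumes exactly one record and
	// never reads past the end of that record.
	for i := 0; i < 2; i++ {
		out := &wrappers.StringValue{}
		if _, err := pbutil.ReadDelimited(&buf, out); err != nil {
			panic(err)
		}
		fmt.Println(out.Value)
	}
}
```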
IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/pmezard/go-difflib/difflib/difflib.go b/vendor/github.com/pmezard/go-difflib/difflib/difflib.go new file mode 100644 index 00000000..003e99fa --- /dev/null +++ b/vendor/github.com/pmezard/go-difflib/difflib/difflib.go @@ -0,0 +1,772 @@ +// Package difflib is a partial port of Python difflib module. +// +// It provides tools to compare sequences of strings and generate textual diffs. +// +// The following class and functions have been ported: +// +// - SequenceMatcher +// +// - unified_diff +// +// - context_diff +// +// Getting unified diffs was the main goal of the port. Keep in mind this code +// is mostly suitable to output text differences in a human friendly way, there +// are no guarantees generated diffs are consumable by patch(1). +package difflib + +import ( + "bufio" + "bytes" + "fmt" + "io" + "strings" +) + +func min(a, b int) int { + if a < b { + return a + } + return b +} + +func max(a, b int) int { + if a > b { + return a + } + return b +} + +func calculateRatio(matches, length int) float64 { + if length > 0 { + return 2.0 * float64(matches) / float64(length) + } + return 1.0 +} + +type Match struct { + A int + B int + Size int +} + +type OpCode struct { + Tag byte + I1 int + I2 int + J1 int + J2 int +} + +// SequenceMatcher compares sequence of strings. The basic +// algorithm predates, and is a little fancier than, an algorithm +// published in the late 1980's by Ratcliff and Obershelp under the +// hyperbolic name "gestalt pattern matching". The basic idea is to find +// the longest contiguous matching subsequence that contains no "junk" +// elements (R-O doesn't address junk). The same idea is then applied +// recursively to the pieces of the sequences to the left and to the right +// of the matching subsequence. This does not yield minimal edit +// sequences, but does tend to yield matches that "look right" to people. +// +// SequenceMatcher tries to compute a "human-friendly diff" between two +// sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the +// longest *contiguous* & junk-free matching subsequence. That's what +// catches peoples' eyes. The Windows(tm) windiff has another interesting +// notion, pairing up elements that appear uniquely in each sequence. +// That, and the method here, appear to yield more intuitive difference +// reports than does diff. This method appears to be the least vulnerable +// to synching up on blocks of "junk lines", though (like blank lines in +// ordinary text files, or maybe "
<P>
" lines in HTML files). That may be +// because this is the only method of the 3 that has a *concept* of +// "junk" . +// +// Timing: Basic R-O is cubic time worst case and quadratic time expected +// case. SequenceMatcher is quadratic time for the worst case and has +// expected-case behavior dependent in a complicated way on how many +// elements the sequences have in common; best case time is linear. +type SequenceMatcher struct { + a []string + b []string + b2j map[string][]int + IsJunk func(string) bool + autoJunk bool + bJunk map[string]struct{} + matchingBlocks []Match + fullBCount map[string]int + bPopular map[string]struct{} + opCodes []OpCode +} + +func NewMatcher(a, b []string) *SequenceMatcher { + m := SequenceMatcher{autoJunk: true} + m.SetSeqs(a, b) + return &m +} + +func NewMatcherWithJunk(a, b []string, autoJunk bool, + isJunk func(string) bool) *SequenceMatcher { + + m := SequenceMatcher{IsJunk: isJunk, autoJunk: autoJunk} + m.SetSeqs(a, b) + return &m +} + +// Set two sequences to be compared. +func (m *SequenceMatcher) SetSeqs(a, b []string) { + m.SetSeq1(a) + m.SetSeq2(b) +} + +// Set the first sequence to be compared. The second sequence to be compared is +// not changed. +// +// SequenceMatcher computes and caches detailed information about the second +// sequence, so if you want to compare one sequence S against many sequences, +// use .SetSeq2(s) once and call .SetSeq1(x) repeatedly for each of the other +// sequences. +// +// See also SetSeqs() and SetSeq2(). +func (m *SequenceMatcher) SetSeq1(a []string) { + if &a == &m.a { + return + } + m.a = a + m.matchingBlocks = nil + m.opCodes = nil +} + +// Set the second sequence to be compared. The first sequence to be compared is +// not changed. +func (m *SequenceMatcher) SetSeq2(b []string) { + if &b == &m.b { + return + } + m.b = b + m.matchingBlocks = nil + m.opCodes = nil + m.fullBCount = nil + m.chainB() +} + +func (m *SequenceMatcher) chainB() { + // Populate line -> index mapping + b2j := map[string][]int{} + for i, s := range m.b { + indices := b2j[s] + indices = append(indices, i) + b2j[s] = indices + } + + // Purge junk elements + m.bJunk = map[string]struct{}{} + if m.IsJunk != nil { + junk := m.bJunk + for s, _ := range b2j { + if m.IsJunk(s) { + junk[s] = struct{}{} + } + } + for s, _ := range junk { + delete(b2j, s) + } + } + + // Purge remaining popular elements + popular := map[string]struct{}{} + n := len(m.b) + if m.autoJunk && n >= 200 { + ntest := n/100 + 1 + for s, indices := range b2j { + if len(indices) > ntest { + popular[s] = struct{}{} + } + } + for s, _ := range popular { + delete(b2j, s) + } + } + m.bPopular = popular + m.b2j = b2j +} + +func (m *SequenceMatcher) isBJunk(s string) bool { + _, ok := m.bJunk[s] + return ok +} + +// Find longest matching block in a[alo:ahi] and b[blo:bhi]. +// +// If IsJunk is not defined: +// +// Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where +// alo <= i <= i+k <= ahi +// blo <= j <= j+k <= bhi +// and for all (i',j',k') meeting those conditions, +// k >= k' +// i <= i' +// and if i == i', j <= j' +// +// In other words, of all maximal matching blocks, return one that +// starts earliest in a, and of all those maximal matching blocks that +// start earliest in a, return the one that starts earliest in b. +// +// If IsJunk is defined, first the longest matching block is +// determined as above, but with the additional restriction that no +// junk element appears in the block. 
Then that block is extended as +// far as possible by matching (only) junk elements on both sides. So +// the resulting block never matches on junk except as identical junk +// happens to be adjacent to an "interesting" match. +// +// If no blocks match, return (alo, blo, 0). +func (m *SequenceMatcher) findLongestMatch(alo, ahi, blo, bhi int) Match { + // CAUTION: stripping common prefix or suffix would be incorrect. + // E.g., + // ab + // acab + // Longest matching block is "ab", but if common prefix is + // stripped, it's "a" (tied with "b"). UNIX(tm) diff does so + // strip, so ends up claiming that ab is changed to acab by + // inserting "ca" in the middle. That's minimal but unintuitive: + // "it's obvious" that someone inserted "ac" at the front. + // Windiff ends up at the same place as diff, but by pairing up + // the unique 'b's and then matching the first two 'a's. + besti, bestj, bestsize := alo, blo, 0 + + // find longest junk-free match + // during an iteration of the loop, j2len[j] = length of longest + // junk-free match ending with a[i-1] and b[j] + j2len := map[int]int{} + for i := alo; i != ahi; i++ { + // look at all instances of a[i] in b; note that because + // b2j has no junk keys, the loop is skipped if a[i] is junk + newj2len := map[int]int{} + for _, j := range m.b2j[m.a[i]] { + // a[i] matches b[j] + if j < blo { + continue + } + if j >= bhi { + break + } + k := j2len[j-1] + 1 + newj2len[j] = k + if k > bestsize { + besti, bestj, bestsize = i-k+1, j-k+1, k + } + } + j2len = newj2len + } + + // Extend the best by non-junk elements on each end. In particular, + // "popular" non-junk elements aren't in b2j, which greatly speeds + // the inner loop above, but also means "the best" match so far + // doesn't contain any junk *or* popular non-junk elements. + for besti > alo && bestj > blo && !m.isBJunk(m.b[bestj-1]) && + m.a[besti-1] == m.b[bestj-1] { + besti, bestj, bestsize = besti-1, bestj-1, bestsize+1 + } + for besti+bestsize < ahi && bestj+bestsize < bhi && + !m.isBJunk(m.b[bestj+bestsize]) && + m.a[besti+bestsize] == m.b[bestj+bestsize] { + bestsize += 1 + } + + // Now that we have a wholly interesting match (albeit possibly + // empty!), we may as well suck up the matching junk on each + // side of it too. Can't think of a good reason not to, and it + // saves post-processing the (possibly considerable) expense of + // figuring out what to do with it. In the case of an empty + // interesting match, this is clearly the right thing to do, + // because no other kind of match is possible in the regions. + for besti > alo && bestj > blo && m.isBJunk(m.b[bestj-1]) && + m.a[besti-1] == m.b[bestj-1] { + besti, bestj, bestsize = besti-1, bestj-1, bestsize+1 + } + for besti+bestsize < ahi && bestj+bestsize < bhi && + m.isBJunk(m.b[bestj+bestsize]) && + m.a[besti+bestsize] == m.b[bestj+bestsize] { + bestsize += 1 + } + + return Match{A: besti, B: bestj, Size: bestsize} +} + +// Return list of triples describing matching subsequences. +// +// Each triple is of the form (i, j, n), and means that +// a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in +// i and in j. It's also guaranteed that if (i, j, n) and (i', j', n') are +// adjacent triples in the list, and the second is not the last triple in the +// list, then i+n != i' or j+n != j'. IOW, adjacent triples never describe +// adjacent equal blocks. +// +// The last triple is a dummy, (len(a), len(b), 0), and is the only +// triple with n==0. 
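An illustrative sketch of the matching-block triples just described (the GetMatchingBlocks implementation follows below); inputs are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib"
)

func main() {
	a := []string{"one", "two", "three", "four"}
	b := []string{"zero", "one", "two", "four"}

	m := difflib.NewMatcher(a, b)

	// Each Match says a[A:A+Size] == b[B:B+Size]; the final entry is
	// the (len(a), len(b), 0) sentinel described above.
	for _, blk := range m.GetMatchingBlocks() {
		fmt.Printf("a[%d:%d] == b[%d:%d] (size %d)\n",
			blk.A, blk.A+blk.Size, blk.B, blk.B+blk.Size, blk.Size)
	}
}
```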
+func (m *SequenceMatcher) GetMatchingBlocks() []Match { + if m.matchingBlocks != nil { + return m.matchingBlocks + } + + var matchBlocks func(alo, ahi, blo, bhi int, matched []Match) []Match + matchBlocks = func(alo, ahi, blo, bhi int, matched []Match) []Match { + match := m.findLongestMatch(alo, ahi, blo, bhi) + i, j, k := match.A, match.B, match.Size + if match.Size > 0 { + if alo < i && blo < j { + matched = matchBlocks(alo, i, blo, j, matched) + } + matched = append(matched, match) + if i+k < ahi && j+k < bhi { + matched = matchBlocks(i+k, ahi, j+k, bhi, matched) + } + } + return matched + } + matched := matchBlocks(0, len(m.a), 0, len(m.b), nil) + + // It's possible that we have adjacent equal blocks in the + // matching_blocks list now. + nonAdjacent := []Match{} + i1, j1, k1 := 0, 0, 0 + for _, b := range matched { + // Is this block adjacent to i1, j1, k1? + i2, j2, k2 := b.A, b.B, b.Size + if i1+k1 == i2 && j1+k1 == j2 { + // Yes, so collapse them -- this just increases the length of + // the first block by the length of the second, and the first + // block so lengthened remains the block to compare against. + k1 += k2 + } else { + // Not adjacent. Remember the first block (k1==0 means it's + // the dummy we started with), and make the second block the + // new block to compare against. + if k1 > 0 { + nonAdjacent = append(nonAdjacent, Match{i1, j1, k1}) + } + i1, j1, k1 = i2, j2, k2 + } + } + if k1 > 0 { + nonAdjacent = append(nonAdjacent, Match{i1, j1, k1}) + } + + nonAdjacent = append(nonAdjacent, Match{len(m.a), len(m.b), 0}) + m.matchingBlocks = nonAdjacent + return m.matchingBlocks +} + +// Return list of 5-tuples describing how to turn a into b. +// +// Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple +// has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the +// tuple preceding it, and likewise for j1 == the previous j2. +// +// The tags are characters, with these meanings: +// +// 'r' (replace): a[i1:i2] should be replaced by b[j1:j2] +// +// 'd' (delete): a[i1:i2] should be deleted, j1==j2 in this case. +// +// 'i' (insert): b[j1:j2] should be inserted at a[i1:i1], i1==i2 in this case. +// +// 'e' (equal): a[i1:i2] == b[j1:j2] +func (m *SequenceMatcher) GetOpCodes() []OpCode { + if m.opCodes != nil { + return m.opCodes + } + i, j := 0, 0 + matching := m.GetMatchingBlocks() + opCodes := make([]OpCode, 0, len(matching)) + for _, m := range matching { + // invariant: we've pumped out correct diffs to change + // a[:i] into b[:j], and the next matching block is + // a[ai:ai+size] == b[bj:bj+size]. So we need to pump + // out a diff to change a[i:ai] into b[j:bj], pump out + // the matching block, and move (i,j) beyond the match + ai, bj, size := m.A, m.B, m.Size + tag := byte(0) + if i < ai && j < bj { + tag = 'r' + } else if i < ai { + tag = 'd' + } else if j < bj { + tag = 'i' + } + if tag > 0 { + opCodes = append(opCodes, OpCode{tag, i, ai, j, bj}) + } + i, j = ai+size, bj+size + // the list of matching blocks is terminated by a + // sentinel with size 0 + if size > 0 { + opCodes = append(opCodes, OpCode{'e', ai, i, bj, j}) + } + } + m.opCodes = opCodes + return m.opCodes +} + +// Isolate change clusters by eliminating ranges with no changes. +// +// Return a generator of groups with up to n lines of context. +// Each group is in the same format as returned by GetOpCodes(). 
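A short sketch of the opcode tags documented above ('r', 'd', 'i', 'e'); GetGroupedOpCodes, whose implementation follows, emits the same OpCode format grouped with context lines. Inputs are arbitrary illustration values:

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib"
)

func main() {
	a := []string{"apple", "banana", "cherry"}
	b := []string{"apple", "blueberry", "cherry", "date"}

	m := difflib.NewMatcher(a, b)

	// Each opcode describes how to turn a[I1:I2] into b[J1:J2].
	for _, op := range m.GetOpCodes() {
		fmt.Printf("%c  a[%d:%d] -> b[%d:%d]\n", op.Tag, op.I1, op.I2, op.J1, op.J2)
	}
}
```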
+func (m *SequenceMatcher) GetGroupedOpCodes(n int) [][]OpCode { + if n < 0 { + n = 3 + } + codes := m.GetOpCodes() + if len(codes) == 0 { + codes = []OpCode{OpCode{'e', 0, 1, 0, 1}} + } + // Fixup leading and trailing groups if they show no changes. + if codes[0].Tag == 'e' { + c := codes[0] + i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2 + codes[0] = OpCode{c.Tag, max(i1, i2-n), i2, max(j1, j2-n), j2} + } + if codes[len(codes)-1].Tag == 'e' { + c := codes[len(codes)-1] + i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2 + codes[len(codes)-1] = OpCode{c.Tag, i1, min(i2, i1+n), j1, min(j2, j1+n)} + } + nn := n + n + groups := [][]OpCode{} + group := []OpCode{} + for _, c := range codes { + i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2 + // End the current group and start a new one whenever + // there is a large range with no changes. + if c.Tag == 'e' && i2-i1 > nn { + group = append(group, OpCode{c.Tag, i1, min(i2, i1+n), + j1, min(j2, j1+n)}) + groups = append(groups, group) + group = []OpCode{} + i1, j1 = max(i1, i2-n), max(j1, j2-n) + } + group = append(group, OpCode{c.Tag, i1, i2, j1, j2}) + } + if len(group) > 0 && !(len(group) == 1 && group[0].Tag == 'e') { + groups = append(groups, group) + } + return groups +} + +// Return a measure of the sequences' similarity (float in [0,1]). +// +// Where T is the total number of elements in both sequences, and +// M is the number of matches, this is 2.0*M / T. +// Note that this is 1 if the sequences are identical, and 0 if +// they have nothing in common. +// +// .Ratio() is expensive to compute if you haven't already computed +// .GetMatchingBlocks() or .GetOpCodes(), in which case you may +// want to try .QuickRatio() or .RealQuickRation() first to get an +// upper bound. +func (m *SequenceMatcher) Ratio() float64 { + matches := 0 + for _, m := range m.GetMatchingBlocks() { + matches += m.Size + } + return calculateRatio(matches, len(m.a)+len(m.b)) +} + +// Return an upper bound on ratio() relatively quickly. +// +// This isn't defined beyond that it is an upper bound on .Ratio(), and +// is faster to compute. +func (m *SequenceMatcher) QuickRatio() float64 { + // viewing a and b as multisets, set matches to the cardinality + // of their intersection; this counts the number of matches + // without regard to order, so is clearly an upper bound + if m.fullBCount == nil { + m.fullBCount = map[string]int{} + for _, s := range m.b { + m.fullBCount[s] = m.fullBCount[s] + 1 + } + } + + // avail[x] is the number of times x appears in 'b' less the + // number of times we've seen it in 'a' so far ... kinda + avail := map[string]int{} + matches := 0 + for _, s := range m.a { + n, ok := avail[s] + if !ok { + n = m.fullBCount[s] + } + avail[s] = n - 1 + if n > 0 { + matches += 1 + } + } + return calculateRatio(matches, len(m.a)+len(m.b)) +} + +// Return an upper bound on ratio() very quickly. +// +// This isn't defined beyond that it is an upper bound on .Ratio(), and +// is faster to compute than either .Ratio() or .QuickRatio(). 
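+//
+// The three ratio methods form a cheap-to-expensive cascade, so a caller
+// filtering candidates might short-circuit on the upper bounds first (a
+// sketch; cutoff is an assumed threshold in [0,1]):
+//
+//	if m.RealQuickRatio() >= cutoff && m.QuickRatio() >= cutoff && m.Ratio() >= cutoff {
+//		// the sequences are considered similar enough
+//	}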
+func (m *SequenceMatcher) RealQuickRatio() float64 { + la, lb := len(m.a), len(m.b) + return calculateRatio(min(la, lb), la+lb) +} + +// Convert range to the "ed" format +func formatRangeUnified(start, stop int) string { + // Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning := start + 1 // lines start numbering with one + length := stop - start + if length == 1 { + return fmt.Sprintf("%d", beginning) + } + if length == 0 { + beginning -= 1 // empty ranges begin at line just before the range + } + return fmt.Sprintf("%d,%d", beginning, length) +} + +// Unified diff parameters +type UnifiedDiff struct { + A []string // First sequence lines + FromFile string // First file name + FromDate string // First file time + B []string // Second sequence lines + ToFile string // Second file name + ToDate string // Second file time + Eol string // Headers end of line, defaults to LF + Context int // Number of context lines +} + +// Compare two sequences of lines; generate the delta as a unified diff. +// +// Unified diffs are a compact way of showing line changes and a few +// lines of context. The number of context lines is set by 'n' which +// defaults to three. +// +// By default, the diff control lines (those with ---, +++, or @@) are +// created with a trailing newline. This is helpful so that inputs +// created from file.readlines() result in diffs that are suitable for +// file.writelines() since both the inputs and outputs have trailing +// newlines. +// +// For inputs that do not have trailing newlines, set the lineterm +// argument to "" so that the output will be uniformly newline free. +// +// The unidiff format normally has a header for filenames and modification +// times. Any or all of these may be specified using strings for +// 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. +// The modification times are normally expressed in the ISO 8601 format. 
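+//
+// Typical usage (an illustrative sketch; file names are placeholders):
+//
+//	diff := UnifiedDiff{
+//		A:        SplitLines("one\ntwo\nthree\n"),
+//		B:        SplitLines("one\nthree\nfour\n"),
+//		FromFile: "original.txt",
+//		ToFile:   "current.txt",
+//		Context:  3,
+//	}
+//	text, err := GetUnifiedDiffString(diff)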
+func WriteUnifiedDiff(writer io.Writer, diff UnifiedDiff) error { + buf := bufio.NewWriter(writer) + defer buf.Flush() + wf := func(format string, args ...interface{}) error { + _, err := buf.WriteString(fmt.Sprintf(format, args...)) + return err + } + ws := func(s string) error { + _, err := buf.WriteString(s) + return err + } + + if len(diff.Eol) == 0 { + diff.Eol = "\n" + } + + started := false + m := NewMatcher(diff.A, diff.B) + for _, g := range m.GetGroupedOpCodes(diff.Context) { + if !started { + started = true + fromDate := "" + if len(diff.FromDate) > 0 { + fromDate = "\t" + diff.FromDate + } + toDate := "" + if len(diff.ToDate) > 0 { + toDate = "\t" + diff.ToDate + } + if diff.FromFile != "" || diff.ToFile != "" { + err := wf("--- %s%s%s", diff.FromFile, fromDate, diff.Eol) + if err != nil { + return err + } + err = wf("+++ %s%s%s", diff.ToFile, toDate, diff.Eol) + if err != nil { + return err + } + } + } + first, last := g[0], g[len(g)-1] + range1 := formatRangeUnified(first.I1, last.I2) + range2 := formatRangeUnified(first.J1, last.J2) + if err := wf("@@ -%s +%s @@%s", range1, range2, diff.Eol); err != nil { + return err + } + for _, c := range g { + i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2 + if c.Tag == 'e' { + for _, line := range diff.A[i1:i2] { + if err := ws(" " + line); err != nil { + return err + } + } + continue + } + if c.Tag == 'r' || c.Tag == 'd' { + for _, line := range diff.A[i1:i2] { + if err := ws("-" + line); err != nil { + return err + } + } + } + if c.Tag == 'r' || c.Tag == 'i' { + for _, line := range diff.B[j1:j2] { + if err := ws("+" + line); err != nil { + return err + } + } + } + } + } + return nil +} + +// Like WriteUnifiedDiff but returns the diff a string. +func GetUnifiedDiffString(diff UnifiedDiff) (string, error) { + w := &bytes.Buffer{} + err := WriteUnifiedDiff(w, diff) + return string(w.Bytes()), err +} + +// Convert range to the "ed" format. +func formatRangeContext(start, stop int) string { + // Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning := start + 1 // lines start numbering with one + length := stop - start + if length == 0 { + beginning -= 1 // empty ranges begin at line just before the range + } + if length <= 1 { + return fmt.Sprintf("%d", beginning) + } + return fmt.Sprintf("%d,%d", beginning, beginning+length-1) +} + +type ContextDiff UnifiedDiff + +// Compare two sequences of lines; generate the delta as a context diff. +// +// Context diffs are a compact way of showing line changes and a few +// lines of context. The number of context lines is set by diff.Context +// which defaults to three. +// +// By default, the diff control lines (those with *** or ---) are +// created with a trailing newline. +// +// For inputs that do not have trailing newlines, set the diff.Eol +// argument to "" so that the output will be uniformly newline free. +// +// The context diff format normally has a header for filenames and +// modification times. Any or all of these may be specified using +// strings for diff.FromFile, diff.ToFile, diff.FromDate, diff.ToDate. +// The modification times are normally expressed in the ISO 8601 format. +// If not specified, the strings default to blanks. 
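+//
+// Usage mirrors the unified variant (sketch; file names are placeholders):
+//
+//	diff := ContextDiff{
+//		A:        SplitLines("one\ntwo\nthree\n"),
+//		B:        SplitLines("one\nthree\n"),
+//		FromFile: "original.txt",
+//		ToFile:   "current.txt",
+//		Context:  3,
+//	}
+//	text, err := GetContextDiffString(diff)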
+func WriteContextDiff(writer io.Writer, diff ContextDiff) error { + buf := bufio.NewWriter(writer) + defer buf.Flush() + var diffErr error + wf := func(format string, args ...interface{}) { + _, err := buf.WriteString(fmt.Sprintf(format, args...)) + if diffErr == nil && err != nil { + diffErr = err + } + } + ws := func(s string) { + _, err := buf.WriteString(s) + if diffErr == nil && err != nil { + diffErr = err + } + } + + if len(diff.Eol) == 0 { + diff.Eol = "\n" + } + + prefix := map[byte]string{ + 'i': "+ ", + 'd': "- ", + 'r': "! ", + 'e': " ", + } + + started := false + m := NewMatcher(diff.A, diff.B) + for _, g := range m.GetGroupedOpCodes(diff.Context) { + if !started { + started = true + fromDate := "" + if len(diff.FromDate) > 0 { + fromDate = "\t" + diff.FromDate + } + toDate := "" + if len(diff.ToDate) > 0 { + toDate = "\t" + diff.ToDate + } + if diff.FromFile != "" || diff.ToFile != "" { + wf("*** %s%s%s", diff.FromFile, fromDate, diff.Eol) + wf("--- %s%s%s", diff.ToFile, toDate, diff.Eol) + } + } + + first, last := g[0], g[len(g)-1] + ws("***************" + diff.Eol) + + range1 := formatRangeContext(first.I1, last.I2) + wf("*** %s ****%s", range1, diff.Eol) + for _, c := range g { + if c.Tag == 'r' || c.Tag == 'd' { + for _, cc := range g { + if cc.Tag == 'i' { + continue + } + for _, line := range diff.A[cc.I1:cc.I2] { + ws(prefix[cc.Tag] + line) + } + } + break + } + } + + range2 := formatRangeContext(first.J1, last.J2) + wf("--- %s ----%s", range2, diff.Eol) + for _, c := range g { + if c.Tag == 'r' || c.Tag == 'i' { + for _, cc := range g { + if cc.Tag == 'd' { + continue + } + for _, line := range diff.B[cc.J1:cc.J2] { + ws(prefix[cc.Tag] + line) + } + } + break + } + } + } + return diffErr +} + +// Like WriteContextDiff but returns the diff a string. +func GetContextDiffString(diff ContextDiff) (string, error) { + w := &bytes.Buffer{} + err := WriteContextDiff(w, diff) + return string(w.Bytes()), err +} + +// Split a string on "\n" while preserving them. The output can be used +// as input for UnifiedDiff and ContextDiff structures. +func SplitLines(s string) []string { + lines := strings.SplitAfter(s, "\n") + lines[len(lines)-1] += "\n" + return lines +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/LICENSE b/vendor/github.com/pmorie/go-open-service-broker-client/LICENSE new file mode 100644 index 00000000..c4ea8b6f --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2014 Red Hat, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/bind.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/bind.go new file mode 100644 index 00000000..574c7380 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/bind.go @@ -0,0 +1,141 @@ +package v2 + +import ( + "fmt" + "net/http" + + "github.com/golang/glog" +) + +// internal message body types + +type bindRequestBody struct { + ServiceID string `json:"service_id"` + PlanID string `json:"plan_id"` + Parameters map[string]interface{} `json:"parameters,omitempty"` + BindResource map[string]interface{} `json:"bind_resource,omitempty"` + Context map[string]interface{} `json:"context,omitempty"` +} + +type bindSuccessResponseBody struct { + Credentials map[string]interface{} `json:"credentials"` + SyslogDrainURL *string `json:"syslog_drain_url"` + RouteServiceURL *string `json:"route_service_url"` + VolumeMounts []interface{} `json:"volume_mounts"` + Operation *string `json:"operation"` +} + +const ( + bindResourceAppGUIDKey = "app_guid" + bindResourceRouteKey = "route" +) + +func (c *client) Bind(r *BindRequest) (*BindResponse, error) { + if r.AcceptsIncomplete { + if err := c.validateAlphaAPIMethodsAllowed(); err != nil { + return nil, AsyncBindingOperationsNotAllowedError{ + reason: err.Error(), + } + } + } + + if err := validateBindRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(bindingURLFmt, c.URL, r.InstanceID, r.BindingID) + + params := map[string]string{} + if r.AcceptsIncomplete { + params[AcceptsIncomplete] = "true" + } + + requestBody := &bindRequestBody{ + ServiceID: r.ServiceID, + PlanID: r.PlanID, + Parameters: r.Parameters, + } + + if c.APIVersion.AtLeast(Version2_13()) { + requestBody.Context = r.Context + } + + if r.BindResource != nil { + requestBody.BindResource = map[string]interface{}{} + if r.BindResource.AppGUID != nil { + requestBody.BindResource[bindResourceAppGUIDKey] = *r.BindResource.AppGUID + } + if r.BindResource.Route != nil { + requestBody.BindResource[bindResourceRouteKey] = *r.BindResource.AppGUID + } + } + + response, err := c.prepareAndDo(http.MethodPut, fullURL, params, requestBody, r.OriginatingIdentity) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK, http.StatusCreated: + userResponse := &BindResponse{} + if err := c.unmarshalResponse(response, userResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + return userResponse, nil + case http.StatusAccepted: + if !r.AcceptsIncomplete { + return nil, c.handleFailureResponse(response) + } + + responseBodyObj := &bindSuccessResponseBody{} + if err := c.unmarshalResponse(response, responseBodyObj); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + var opPtr *OperationKey + if responseBodyObj.Operation != nil { + opStr := *responseBodyObj.Operation + op := OperationKey(opStr) + opPtr = &op + } + + userResponse := &BindResponse{ + Credentials: responseBodyObj.Credentials, + SyslogDrainURL: responseBodyObj.SyslogDrainURL, + RouteServiceURL: responseBodyObj.RouteServiceURL, + VolumeMounts: responseBodyObj.VolumeMounts, + OperationKey: opPtr, + } + if response.StatusCode == http.StatusAccepted { + if c.Verbose { + glog.Infof("broker %q: received asynchronous response", c.Name) + } + userResponse.Async = true + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func 
validateBindRequest(request *BindRequest) error { + if request.BindingID == "" { + return required("bindingID") + } + + if request.InstanceID == "" { + return required("instanceID") + } + + if request.ServiceID == "" { + return required("serviceID") + } + + if request.PlanID == "" { + return required("planID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/client.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/client.go new file mode 100644 index 00000000..829e906f --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/client.go @@ -0,0 +1,283 @@ +package v2 + +import ( + "bytes" + "crypto/tls" + "crypto/x509" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "io" + "io/ioutil" + "net/http" + "strings" + "time" + + "github.com/golang/glog" +) + +const ( + // APIVersionHeader is the header value associated with the version of the Open + // Service Broker API version. + APIVersionHeader = "X-Broker-API-Version" + // OriginatingIdentityHeader is the header associated with originating + // identity. + OriginatingIdentityHeader = "X-Broker-API-Originating-Identity" + + catalogURL = "%s/v2/catalog" + serviceInstanceURLFmt = "%s/v2/service_instances/%s" + lastOperationURLFmt = "%s/v2/service_instances/%s/last_operation" + bindingLastOperationURLFmt = "%s/v2/service_instances/%s/service_bindings/%s/last_operation" + bindingURLFmt = "%s/v2/service_instances/%s/service_bindings/%s" +) + +// NewClient is a CreateFunc for creating a new functional Client and +// implements the CreateFunc interface. +func NewClient(config *ClientConfiguration) (Client, error) { + httpClient := &http.Client{ + Timeout: time.Duration(config.TimeoutSeconds) * time.Second, + } + transport := &http.Transport{} + if config.TLSConfig != nil { + transport.TLSClientConfig = config.TLSConfig + } else { + transport.TLSClientConfig = &tls.Config{} + } + if config.Insecure { + transport.TLSClientConfig.InsecureSkipVerify = true + } + if len(config.CAData) != 0 { + if transport.TLSClientConfig.RootCAs == nil { + transport.TLSClientConfig.RootCAs = x509.NewCertPool() + } + transport.TLSClientConfig.RootCAs.AppendCertsFromPEM(config.CAData) + } + if transport.TLSClientConfig.InsecureSkipVerify && transport.TLSClientConfig.RootCAs != nil { + return nil, errors.New("Cannot specify root CAs and to skip TLS verification") + } + httpClient.Transport = transport + + c := &client{ + Name: config.Name, + URL: strings.TrimRight(config.URL, "/"), + APIVersion: config.APIVersion, + EnableAlphaFeatures: config.EnableAlphaFeatures, + Verbose: config.Verbose, + httpClient: httpClient, + } + c.doRequestFunc = c.doRequest + + if config.AuthConfig != nil { + if config.AuthConfig.BasicAuthConfig == nil && config.AuthConfig.BearerConfig == nil { + return nil, errors.New("Non-nil AuthConfig cannot be empty") + } + if config.AuthConfig.BasicAuthConfig != nil && config.AuthConfig.BearerConfig != nil { + return nil, errors.New("Only one AuthConfig implementation must be set at a time") + } + + c.AuthConfig = config.AuthConfig + } + + return c, nil +} + +var _ CreateFunc = NewClient + +type doRequestFunc func(request *http.Request) (*http.Response, error) + +// client provides a functional implementation of the Client interface. 
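+//
+// Constructing one through NewClient typically looks like this (a sketch; the
+// name, URL, and credentials below are placeholders):
+//
+//	config := DefaultClientConfiguration()
+//	config.Name = "example-broker"
+//	config.URL = "https://broker.example.com"
+//	config.AuthConfig = &AuthConfig{
+//		BasicAuthConfig: &BasicAuthConfig{Username: "user", Password: "pass"},
+//	}
+//	osbClient, err := NewClient(config)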
+type client struct { + Name string + URL string + APIVersion APIVersion + AuthConfig *AuthConfig + EnableAlphaFeatures bool + Verbose bool + + httpClient *http.Client + doRequestFunc doRequestFunc +} + +var _ Client = &client{} + +// This file contains shared methods used by each interface method of the +// Client interface. Individual interface methods are in the following files: +// +// GetCatalog: get_catalog.go +// ProvisionInstance: provision_instance.go +// UpdateInstance: update_instance.go +// DeprovisionInstance: deprovision_instance.go +// PollLastOperation: poll_last_operation.go +// Bind: bind.go +// Unbind: unbind.go + +const ( + contentType = "Content-Type" + jsonType = "application/json" +) + +// prepareAndDo prepares a request for the given method, URL, and +// message body, and executes the request, returning an http.Response or an +// error. Errors returned from this function represent http-layer errors and +// not errors in the Open Service Broker API. +func (c *client) prepareAndDo(method, URL string, params map[string]string, body interface{}, originatingIdentity *OriginatingIdentity) (*http.Response, error) { + var bodyReader io.Reader + + if body != nil { + bodyBytes, err := json.Marshal(body) + if err != nil { + return nil, err + } + + bodyReader = bytes.NewReader(bodyBytes) + } + + request, err := http.NewRequest(method, URL, bodyReader) + if err != nil { + return nil, err + } + + request.Header.Set(APIVersionHeader, c.APIVersion.HeaderValue()) + if bodyReader != nil { + request.Header.Set(contentType, jsonType) + } + + if c.AuthConfig != nil { + if c.AuthConfig.BasicAuthConfig != nil { + basicAuth := c.AuthConfig.BasicAuthConfig + request.SetBasicAuth(basicAuth.Username, basicAuth.Password) + } else if c.AuthConfig.BearerConfig != nil { + bearer := c.AuthConfig.BearerConfig + request.Header.Set("Authorization", "Bearer "+bearer.Token) + } + } + + if c.APIVersion.AtLeast(Version2_13()) && originatingIdentity != nil { + headerValue, err := buildOriginatingIdentityHeaderValue(originatingIdentity) + if err != nil { + return nil, err + } + request.Header.Set(OriginatingIdentityHeader, headerValue) + } + + if params != nil { + q := request.URL.Query() + for k, v := range params { + q.Set(k, v) + } + request.URL.RawQuery = q.Encode() + } + + if c.Verbose { + glog.Infof("broker %q: doing request to %q", c.Name, URL) + } + + return c.doRequestFunc(request) +} + +func (c *client) doRequest(request *http.Request) (*http.Response, error) { + return c.httpClient.Do(request) +} + +// unmarshalResponse unmartials the response body of the given response into +// the given object or returns an error. +func (c *client) unmarshalResponse(response *http.Response, obj interface{}) error { + body, err := ioutil.ReadAll(response.Body) + if err != nil { + return err + } + + if c.Verbose { + glog.Infof("broker %q: response body: %v, type: %T", c.Name, string(body), obj) + } + + err = json.Unmarshal(body, obj) + if err != nil { + return err + } + + return nil +} + +// handleFailureResponse returns an HTTPStatusCodeError for the given +// response. 
+func (c *client) handleFailureResponse(response *http.Response) error { + glog.Info("handling failure responses") + + httpErr := HTTPStatusCodeError{ + StatusCode: response.StatusCode, + } + + brokerResponse := make(map[string]interface{}) + if err := c.unmarshalResponse(response, &brokerResponse); err != nil { + httpErr.ResponseError = err + return httpErr + } + + if errorMessage, ok := brokerResponse["error"].(string); ok { + httpErr.ErrorMessage = &errorMessage + } + + if description, ok := brokerResponse["description"].(string); ok { + httpErr.Description = &description + } + + return httpErr +} + +func buildOriginatingIdentityHeaderValue(i *OriginatingIdentity) (string, error) { + if i == nil { + return "", nil + } + if i.Platform == "" { + return "", errors.New("originating identity platform must not be empty") + } + if i.Value == "" { + return "", errors.New("originating identity value must not be empty") + } + if err := isValidJSON(i.Value); err != nil { + return "", fmt.Errorf("originating identity value must be valid JSON: %v", err) + } + encodedValue := base64.StdEncoding.EncodeToString([]byte(i.Value)) + headerValue := fmt.Sprintf("%v %v", i.Platform, encodedValue) + return headerValue, nil +} + +func isValidJSON(s string) error { + var js json.RawMessage + return json.Unmarshal([]byte(s), &js) +} + +// validateAlphaAPIMethodsAllowed returns an error if alpha API methods are not +// allowed for this client. +func (c *client) validateAlphaAPIMethodsAllowed() error { + if !c.EnableAlphaFeatures { + return AlphaAPIMethodsNotAllowedError{ + reason: fmt.Sprintf("alpha features must be enabled"), + } + } + + if !c.APIVersion.AtLeast(LatestAPIVersion()) { + return AlphaAPIMethodsNotAllowedError{ + reason: fmt.Sprintf( + "must have latest API Version. Current: %s, Expected: %s", + c.APIVersion.label, + LatestAPIVersion().label, + ), + } + } + + return nil +} + +// internal message body types + +type asyncSuccessResponseBody struct { + Operation *string `json:"operation"` +} + +type failureResponseBody struct { + Err *string `json:"error,omitempty"` + Description *string `json:"description,omitempty"` +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/constants.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/constants.go new file mode 100644 index 00000000..ed249a79 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/constants.go @@ -0,0 +1,33 @@ +package v2 + +const ( + // AcceptsIncomplete is the name of a query parameter that indicates that + // the client allows a request to complete asynchronously. + AcceptsIncomplete = "accepts_incomplete" + + // VarKeyInstanceID is the name to use for a mux var representing an + // instance ID. + VarKeyInstanceID = "instance_id" + + // VarKeyBindingID is the name to use for a mux var representing a binding + // ID. + VarKeyBindingID = "binding_id" + + // VarKeyServiceID is the name to use for a mux var representing a service ID. + VarKeyServiceID = "service_id" + + // VarKeyPlanID is the name to use for a mux var representing a plan ID. + VarKeyPlanID = "plan_id" + + // VarKeyOperation is the name to use for a mux var representing an + // operation. + VarKeyOperation = "operation" + + // PlatformKubernetes is the name for Kubernetes in the Platform field of + // OriginatingIdentity. + PlatformKubernetes = "kubernetes" + + // PlatformCloudFoundry is the name for Cloud Foundry in the Platform field + // of OriginatingIdentity. 
+ PlatformCloudFoundry = "cloudfoundry" +) diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/deprovision_instance.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/deprovision_instance.go new file mode 100644 index 00000000..730149ba --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/deprovision_instance.go @@ -0,0 +1,75 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +func (c *client) DeprovisionInstance(r *DeprovisionRequest) (*DeprovisionResponse, error) { + if err := validateDeprovisionRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(serviceInstanceURLFmt, c.URL, r.InstanceID) + + params := map[string]string{ + VarKeyServiceID: string(r.ServiceID), + VarKeyPlanID: string(r.PlanID), + } + if r.AcceptsIncomplete { + params[AcceptsIncomplete] = "true" + } + + response, err := c.prepareAndDo(http.MethodDelete, fullURL, params, nil, r.OriginatingIdentity) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK, http.StatusGone: + return &DeprovisionResponse{}, nil + case http.StatusAccepted: + if !r.AcceptsIncomplete { + // If the client did not signify that it could handle asynchronous + // operations, a '202 Accepted' response should be treated as an error. + return nil, c.handleFailureResponse(response) + } + + responseBodyObj := &asyncSuccessResponseBody{} + if err := c.unmarshalResponse(response, responseBodyObj); err != nil { + return nil, err + } + + var opPtr *OperationKey + if responseBodyObj.Operation != nil { + opStr := *responseBodyObj.Operation + op := OperationKey(opStr) + opPtr = &op + } + + userResponse := &DeprovisionResponse{ + Async: true, + OperationKey: opPtr, + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func validateDeprovisionRequest(request *DeprovisionRequest) error { + if request.InstanceID == "" { + return required("instanceID") + } + + if request.ServiceID == "" { + return required("serviceID") + } + + if request.PlanID == "" { + return required("planID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/doc.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/doc.go new file mode 100644 index 00000000..166eeaa0 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/doc.go @@ -0,0 +1,3 @@ +// Package v2 contains a client for working with service brokers implementing +// v2 of the Open Service Broker API. +package v2 diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/errors.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/errors.go new file mode 100644 index 00000000..bf5944c3 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/errors.go @@ -0,0 +1,187 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +// HTTPStatusCodeError is an error type that provides additional information +// based on the Open Service Broker API conventions for returning information +// about errors. If the response body provided by the broker to any client +// operation is malformed, an error of this type will be returned with the +// ResponseError field set to the unmarshalling error. +// +// These errors may optionally provide a machine-readable error message and +// human-readable description. +// +// The IsHTTPError method checks whether an error is of this type. 
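+//
+// For example (a sketch; err is an error returned by any client method), the
+// broker's status code and description can be recovered with:
+//
+//	if httpErr, ok := IsHTTPError(err); ok && httpErr.Description != nil {
+//		fmt.Printf("broker returned %d: %s\n", httpErr.StatusCode, *httpErr.Description)
+//	}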
+// +// Checks for important errors in the API specification are implemented as +// utility methods: +// +// - IsGoneError +// - IsConflictError +// - IsAsyncRequiredError +// - IsAppGUIDRequiredError +type HTTPStatusCodeError struct { + // StatusCode is the HTTP status code returned by the broker. + StatusCode int + // ErrorMessage is a machine-readable error string that may be returned by + // the broker. + ErrorMessage *string + // Description is a human-readable description of the error that may be + // returned by the broker. + Description *string + // ResponseError is set to the error that occurred when unmarshalling a + // response body from the broker. + ResponseError error +} + +func (e HTTPStatusCodeError) Error() string { + errorMessage := "" + description := "" + + if e.ErrorMessage != nil { + errorMessage = *e.ErrorMessage + } + if e.Description != nil { + description = *e.Description + } + return fmt.Sprintf("Status: %v; ErrorMessage: %v; Description: %v; ResponseError: %v", e.StatusCode, errorMessage, description, e.ResponseError) +} + +// IsHTTPError returns whether the error represents an HTTPStatusCodeError. A +// client method returning an HTTP error indicates that the broker returned an +// error code and a correctly formed response body. +func IsHTTPError(err error) (*HTTPStatusCodeError, bool) { + statusCodeError, ok := err.(HTTPStatusCodeError) + if ok { + return &statusCodeError, ok + } + + statusCodeErrorPointer, ok := err.(*HTTPStatusCodeError) + if ok { + return statusCodeErrorPointer, ok + } + + return nil, ok +} + +// IsGoneError returns whether the error represents an HTTP GONE status. +func IsGoneError(err error) bool { + statusCodeError, ok := err.(HTTPStatusCodeError) + if !ok { + return false + } + + return statusCodeError.StatusCode == http.StatusGone +} + +// IsConflictError returns whether the error represents a conflict. +func IsConflictError(err error) bool { + statusCodeError, ok := err.(HTTPStatusCodeError) + if !ok { + return false + } + + return statusCodeError.StatusCode == http.StatusConflict +} + +// Constants are used to check for "Async" and "RequiresApp" errors and their messages +const ( + AsyncErrorMessage = "AsyncRequired" + AsyncErrorDescription = "This service plan requires client support for asynchronous service operations." + AppGUIDRequiredErrorMessage = "RequiresApp" + AppGUIDRequiredErrorDescription = "This service supports generation of credentials through binding an application only." +) + +// IsAsyncRequiredError returns whether the error corresponds to the +// conventional way of indicating that a service requires asynchronous +// operations to perform an action. +func IsAsyncRequiredError(err error) bool { + statusCodeError, ok := err.(HTTPStatusCodeError) + if !ok { + return false + } + + if statusCodeError.StatusCode != http.StatusUnprocessableEntity { + return false + } + + if statusCodeError.ErrorMessage == nil || statusCodeError.Description == nil { + return false + } + + if *statusCodeError.ErrorMessage != AsyncErrorMessage { + return false + } + + return *statusCodeError.Description == AsyncErrorDescription +} + +// IsAppGUIDRequiredError returns whether the error corresponds to the +// conventional way of indicating that a service only supports credential-type +// bindings. 
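+//
+// Together with IsAsyncRequiredError, callers usually branch on these
+// predicates rather than inspecting status codes directly (a sketch; osbClient
+// and bindReq are assumed to exist):
+//
+//	if _, err := osbClient.Bind(bindReq); err != nil {
+//		switch {
+//		case IsAsyncRequiredError(err):
+//			// resend the request with AcceptsIncomplete set to true
+//		case IsAppGUIDRequiredError(err):
+//			// supply BindResource.AppGUID and retry
+//		}
+//	}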
+func IsAppGUIDRequiredError(err error) bool { + statusCodeError, ok := err.(HTTPStatusCodeError) + if !ok { + return false + } + + if statusCodeError.StatusCode != http.StatusUnprocessableEntity { + return false + } + + if statusCodeError.ErrorMessage == nil || statusCodeError.Description == nil { + return false + } + + if *statusCodeError.ErrorMessage != AppGUIDRequiredErrorMessage { + return false + } + + return *statusCodeError.Description == AppGUIDRequiredErrorDescription +} + +// AlphaAPIMethodsNotAllowedError is an error type signifying that alpha API +// methods are not allowed for this client's API Version. +type AlphaAPIMethodsNotAllowedError struct { + reason string +} + +func (e AlphaAPIMethodsNotAllowedError) Error() string { + return fmt.Sprintf( + "alpha API methods not allowed: %s", + e.reason, + ) +} + +// GetBindingNotAllowedError is an error type signifying that doing a GET to +// fetch a binding is not allowed for this client. +type GetBindingNotAllowedError struct { + reason string +} + +func (e GetBindingNotAllowedError) Error() string { + return fmt.Sprintf( + "GetBinding not allowed: %s", + e.reason, + ) +} + +// AsyncBindingOperationsNotAllowedError is an error type signifying that asynchronous +// binding operations (bind/unbind/poll) are not allowed for this client. +type AsyncBindingOperationsNotAllowedError struct { + reason string +} + +func (e AsyncBindingOperationsNotAllowedError) Error() string { + return fmt.Sprintf("Asynchronous binding operations are not allowed: %s", e.reason) +} + +// IsAsyncBindingOperationsNotAllowedError returns whether the error represents asynchronous +// binding operations (bind/unbind/poll) not being allowed for this client. +func IsAsyncBindingOperationsNotAllowedError(err error) bool { + _, ok := err.(AsyncBindingOperationsNotAllowedError) + return ok +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/get_binding.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/get_binding.go new file mode 100644 index 00000000..ef413e45 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/get_binding.go @@ -0,0 +1,33 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +func (c *client) GetBinding(r *GetBindingRequest) (*GetBindingResponse, error) { + if err := c.validateAlphaAPIMethodsAllowed(); err != nil { + return nil, GetBindingNotAllowedError{ + reason: err.Error(), + } + } + + fullURL := fmt.Sprintf(bindingURLFmt, c.URL, r.InstanceID, r.BindingID) + + response, err := c.prepareAndDo(http.MethodGet, fullURL, nil /* params */, nil /* request body */, nil /* originating identity */) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK: + userResponse := &GetBindingResponse{} + if err := c.unmarshalResponse(response, userResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/get_catalog.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/get_catalog.go new file mode 100644 index 00000000..b4d87029 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/get_catalog.go @@ -0,0 +1,52 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +func (c *client) GetCatalog() (*CatalogResponse, error) { + fullURL := fmt.Sprintf(catalogURL, c.URL) + + response, err := 
c.prepareAndDo(http.MethodGet, fullURL, nil /* params */, nil /* request body */, nil /* originating identity */) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK: + catalogResponse := &CatalogResponse{} + if err := c.unmarshalResponse(response, catalogResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + if !c.APIVersion.AtLeast(Version2_13()) { + for ii := range catalogResponse.Services { + for jj := range catalogResponse.Services[ii].Plans { + catalogResponse.Services[ii].Plans[jj].Schemas = nil + } + } + } else if !c.EnableAlphaFeatures { + for ii := range catalogResponse.Services { + for jj := range catalogResponse.Services[ii].Plans { + schemas := catalogResponse.Services[ii].Plans[jj].Schemas + if schemas != nil { + if schemas.ServiceBinding != nil { + removeResponseSchema(schemas.ServiceBinding.Create) + } + } + } + } + } + + return catalogResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func removeResponseSchema(p *RequestResponseSchema) { + if p != nil { + p.Response = nil + } +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/interface.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/interface.go new file mode 100644 index 00000000..16868909 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/interface.go @@ -0,0 +1,194 @@ +package v2 + +import ( + "crypto/tls" +) + +// AuthConfig is a union-type representing the possible auth configurations a +// client may use to authenticate to a broker. Currently, only basic auth is +// supported. +type AuthConfig struct { + BasicAuthConfig *BasicAuthConfig + BearerConfig *BearerConfig +} + +// BasicAuthConfig represents a set of basic auth credentials. +type BasicAuthConfig struct { + // Username is the basic auth username. + Username string + // Password is the basic auth password. + Password string +} + +// BearerConfig represents bearer token credentials. +type BearerConfig struct { + // Token is the bearer token. + Token string +} + +// ClientConfiguration represents the configuration of a Client. +type ClientConfiguration struct { + // Name is the name to use for this client in log messages. Using the + // logical name of the Broker this client is for is recommended. + Name string + // URL is the URL to use to contact the broker. + URL string + // APIVersion is the APIVersion to use for this client. API features + // adopted after the 2.11 version of the API will only be sent if + // APIVersion is an API version that supports them. + APIVersion APIVersion + // AuthInfo is the auth configuration the client should use to authenticate + // to the broker. + AuthConfig *AuthConfig + // TLSConfig is the TLS configuration to use when communicating with the + // broker. + TLSConfig *tls.Config + // Insecure represents whether the 'InsecureSkipVerify' TLS configuration + // field should be set. If the TLSConfig field is set and this field is + // set to true, it overrides the value in the TLSConfig field. + Insecure bool + // TimeoutSeconds is the length of the timeout of any request to the + // broker, in seconds. + TimeoutSeconds int + // EnableAlphaFeatures controls whether alpha features in the Open Service + // Broker API are enabled in a client. Features are considered to be + // alpha if they have been accepted into the Open Service Broker API but + // not released in a version of the API specification. 
Features are + // indicated as being alpha when the client API fields they represent + // begin with the 'Alpha' prefix. + // + // If alpha features are not enabled, the client will not send or return + // any request parameters or request or response fields that correspond to + // alpha features. + EnableAlphaFeatures bool + // CAData holds PEM-encoded bytes (typically read from a root certificates bundle). + // This CA certificate will be added to any specified in TLSConfig.RootCAs. + CAData []byte + // Verbose is whether the client will log to glog. + Verbose bool +} + +// DefaultClientConfiguration returns a default ClientConfiguration: +// +// - latest API version +// - 60 second timeout (referenced as a typical timeout in the Open Service +// Broker API spec) +// - alpha features disabled +func DefaultClientConfiguration() *ClientConfiguration { + return &ClientConfiguration{ + APIVersion: LatestAPIVersion(), + TimeoutSeconds: 60, + EnableAlphaFeatures: false, + } +} + +// Client defines the interface to the v2 Open Service Broker client. The +// logical lifecycle of client operations is: +// +// 1. Get the broker's catalog of services with the GetCatalog method +// 2. Provision a new instance of a service with the ProvisionInstance method +// 3. Update the parameters or plan of an instance with the UpdateInstance method +// 4. Deprovision an instance with the DeprovisionInstance method +// +// Some services and plans support binding from an instance of the service to +// an application. The logical lifecycle of a binding is: +// +// 1. Create a new binding to an instance of a service with the Bind method +// 2. Delete a binding to an instance with the Unbind method +type Client interface { + // GetCatalog returns information about the services the broker offers and + // their plans or an error. GetCatalog calls GET on the Broker's catalog + // endpoint (/v2/catalog). + GetCatalog() (*CatalogResponse, error) + // ProvisionInstance requests that a new instance of a service be + // provisioned and returns information about the instance or an error. + // ProvisionInstance does a PUT on the Broker's endpoint for the requested + // instance ID (/v2/service_instances/instance-id). + // + // If the AcceptsIncomplete field of the request is set to true, the + // broker may complete the request asynchronously. Callers should check + // the value of the Async field on the response and check the operation + // status using PollLastOperation if the Async field is true. + ProvisionInstance(r *ProvisionRequest) (*ProvisionResponse, error) + // UpdateInstance requests that an instances plan or parameters be updated + // and returns information about asynchronous responses or an error. + // UpdateInstance does a PATCH on the Broker's endpoint for the requested + // instance ID (/v2/service_instances/instance-id). + // + // If the AcceptsIncomplete field of the request is set to true, the + // broker may complete the request asynchronously. Callers should check + // the value of the Async field on the response and check the operation + // status using PollLastOperation if the Async field is true. + UpdateInstance(r *UpdateInstanceRequest) (*UpdateInstanceResponse, error) + // DeprovisionInstance requests that an instances plan or parameters be + // updated and returns information about asynchronous responses or an + // error. DeprovisionInstance does a DELETE on the Broker's endpoint for + // the requested instance ID (/v2/service_instances/instance-id). 
+ // + // If the AcceptsIncomplete field of the request is set to true, the + // broker may complete the request asynchronously. Callers should check + // the value of the Async field on the response and check the operation + // status using PollLastOperation if the Async field is true. Note that + // there are special semantics for PollLastOperation when checking the + // status of deprovision operations; see the doc for that method. + DeprovisionInstance(r *DeprovisionRequest) (*DeprovisionResponse, error) + // PollLastOperation sends a request to query the last operation for a + // service instance to the broker and returns information about the + // operation or an error. PollLastOperation does a GET on the broker's + // last operation endpoint for the requested instance ID + // (/v2/service_instances/instance-id/last_operation). + // + // Callers should periodically call PollLastOperation until they receive a + // success response. PollLastOperation may return an HTTP GONE error for + // asynchronous deprovisions. This is a valid response for async + // operations and means that the instance has been successfully + // deprovisioned. When calling PollLastOperation to check the status of + // an asynchronous deprovision, callers check the status of an + // asynchronous deprovision, callers should test the value of the returned + // error with IsGoneError. + PollLastOperation(r *LastOperationRequest) (*LastOperationResponse, error) + // PollBindingLastOperation is an ALPHA API method and may change. + // Alpha features must be enabled and the client must be using the + // latest API Version in order to use this method. + // + // PollBindingLastOperation sends a request to query the last operation + // for a service binding to the broker and returns information about the + // operation or an error. PollBindingLastOperation does a GET on the broker's + // last operation endpoint for the requested binding ID + // (/v2/service_instances/instance-id/service_bindings/binding-id/last_operation). + // + // Callers should periodically call PollBindingLastOperation until they + // receive a success response. PollBindingLastOperation may return an + // HTTP GONE error for asynchronous unbinding. This is a valid response + // for async operations and means that the binding has been successfully + // deleted. When calling PollBindingLastOperation to check the status of + // an asynchronous unbind, callers should test the value of the returned + // error with IsGoneError. + PollBindingLastOperation(r *BindingLastOperationRequest) (*LastOperationResponse, error) + // Bind requests a new binding between a service instance and an + // application and returns information about the binding or an error. Bind + // does a PUT on the Broker's endpoint for the requested instance and + // binding IDs (/v2/service_instances/instance-id/service_bindings/binding-id). + Bind(r *BindRequest) (*BindResponse, error) + // Bind requests that a binding between a service instance and an + // application be deleted and returns information about the binding or an + // error. Unbind does a DELETE on the Broker's endpoint for the requested + // instance and binding IDs (/v2/service_instances/instance-id/service_bindings/binding-id). + Unbind(r *UnbindRequest) (*UnbindResponse, error) + // GetBinding is an ALPHA API method and may change. Alpha features must + // be enabled and the client must be using the latest API Version in + // order to use this method. 
+ // + // GetBinding returns configuration and credential information + // about an existing binding. GetBindings calls GET on the Broker's + // binding endpoint + // (/v2/service_instances/instance-id/service_bindings/binding-id) + GetBinding(r *GetBindingRequest) (*GetBindingResponse, error) +} + +// CreateFunc allows control over which implementation of a Client is +// returned. Users of the Client interface may need to create clients for +// multiple brokers in a way that makes normal dependency injection +// prohibitive. In order to make such code testable, users of the API can +// inject a CreateFunc, and use the CreateFunc from the fake package in tests. +type CreateFunc func(*ClientConfiguration) (Client, error) diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/poll_binding_last_operation.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/poll_binding_last_operation.go new file mode 100644 index 00000000..dfd39a4a --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/poll_binding_last_operation.go @@ -0,0 +1,62 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +func (c *client) PollBindingLastOperation(r *BindingLastOperationRequest) (*LastOperationResponse, error) { + if err := c.validateAlphaAPIMethodsAllowed(); err != nil { + return nil, AsyncBindingOperationsNotAllowedError{ + reason: err.Error(), + } + } + + if err := validateBindingLastOperationRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(bindingLastOperationURLFmt, c.URL, r.InstanceID, r.BindingID) + params := map[string]string{} + + if r.ServiceID != nil { + params[VarKeyServiceID] = *r.ServiceID + } + if r.PlanID != nil { + params[VarKeyPlanID] = *r.PlanID + } + if r.OperationKey != nil { + op := *r.OperationKey + opStr := string(op) + params[VarKeyOperation] = opStr + } + + response, err := c.prepareAndDo(http.MethodGet, fullURL, params, nil /* request body */, r.OriginatingIdentity) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK: + userResponse := &LastOperationResponse{} + if err := c.unmarshalResponse(response, userResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func validateBindingLastOperationRequest(request *BindingLastOperationRequest) error { + if request.InstanceID == "" { + return required("instanceID") + } + + if request.BindingID == "" { + return required("bindingID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/poll_last_operation.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/poll_last_operation.go new file mode 100644 index 00000000..27e19ffa --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/poll_last_operation.go @@ -0,0 +1,52 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +func (c *client) PollLastOperation(r *LastOperationRequest) (*LastOperationResponse, error) { + if err := validateLastOperationRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(lastOperationURLFmt, c.URL, r.InstanceID) + params := map[string]string{} + + if r.ServiceID != nil { + params[VarKeyServiceID] = *r.ServiceID + } + if r.PlanID != nil { + params[VarKeyPlanID] = *r.PlanID + } + if r.OperationKey != nil { + op := *r.OperationKey + opStr := string(op) + params[VarKeyOperation] = opStr + } + + response, err := 
c.prepareAndDo(http.MethodGet, fullURL, params, nil /* request body */, r.OriginatingIdentity) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK: + userResponse := &LastOperationResponse{} + if err := c.unmarshalResponse(response, userResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func validateLastOperationRequest(request *LastOperationRequest) error { + if request.InstanceID == "" { + return required("instanceID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/provision_instance.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/provision_instance.go new file mode 100644 index 00000000..3fe75776 --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/provision_instance.go @@ -0,0 +1,128 @@ +package v2 + +import ( + "fmt" + "net/http" + + "github.com/golang/glog" +) + +// internal message body types + +type provisionRequestBody struct { + ServiceID string `json:"service_id"` + PlanID string `json:"plan_id"` + OrganizationGUID string `json:"organization_guid"` + SpaceGUID string `json:"space_guid"` + Parameters map[string]interface{} `json:"parameters,omitempty"` + Context map[string]interface{} `json:"context,omitempty"` +} + +type provisionSuccessResponseBody struct { + DashboardURL *string `json:"dashboard_url"` + Operation *string `json:"operation"` +} + +func (c *client) ProvisionInstance(r *ProvisionRequest) (*ProvisionResponse, error) { + if err := validateProvisionRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(serviceInstanceURLFmt, c.URL, r.InstanceID) + + params := map[string]string{} + if r.AcceptsIncomplete { + params[AcceptsIncomplete] = "true" + } + + requestBody := &provisionRequestBody{ + ServiceID: r.ServiceID, + PlanID: r.PlanID, + OrganizationGUID: r.OrganizationGUID, + SpaceGUID: r.SpaceGUID, + Parameters: r.Parameters, + } + + if c.APIVersion.AtLeast(Version2_12()) { + requestBody.Context = r.Context + } + + response, err := c.prepareAndDo(http.MethodPut, fullURL, params, requestBody, r.OriginatingIdentity) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusCreated, http.StatusOK: + userResponse := &ProvisionResponse{} + if err := c.unmarshalResponse(response, userResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + if !c.APIVersion.AtLeast(Version2_13()) || !c.EnableAlphaFeatures { + userResponse.ExtensionAPIs = nil + } + + return userResponse, nil + case http.StatusAccepted: + if !r.AcceptsIncomplete { + // If the client did not signify that it could handle asynchronous + // operations, a '202 Accepted' response should be treated as an error. 
+ return nil, c.handleFailureResponse(response) + } + + responseBodyObj := &provisionSuccessResponseBody{} + if err := c.unmarshalResponse(response, responseBodyObj); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + var opPtr *OperationKey + if responseBodyObj.Operation != nil { + opStr := *responseBodyObj.Operation + op := OperationKey(opStr) + opPtr = &op + } + + userResponse := &ProvisionResponse{ + Async: true, + DashboardURL: responseBodyObj.DashboardURL, + OperationKey: opPtr, + } + + if c.Verbose { + glog.Infof("broker %q: received asynchronous response", c.Name) + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func required(name string) error { + return fmt.Errorf("%v is required", name) +} + +func validateProvisionRequest(request *ProvisionRequest) error { + if request.InstanceID == "" { + return required("instanceID") + } + + if request.ServiceID == "" { + return required("serviceID") + } + + if request.PlanID == "" { + return required("planID") + } + + if request.OrganizationGUID == "" { + return required("organizationGUID") + } + + if request.SpaceGUID == "" { + return required("spaceGUID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/types.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/types.go new file mode 100644 index 00000000..fca37b0f --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/types.go @@ -0,0 +1,558 @@ +package v2 + +// This file contains the user-facing types used for the Open Service Broker +// client. + +// Service is an available service listed in a broker's catalog. +type Service struct { + // ID is a globally unique ID that identifies the service. + ID string `json:"id"` + // Name is the service's display name. + Name string `json:"name"` + // Description is a brief description of the service, suitable for + // printing by a CLI. + Description string `json:"description"` + // A list of 'tags' describing different classification referents or + // attributes of the service. CF-specific. + Tags []string `json:"tags,omitempty"` + // A list of permissions the user must give instances of this service. + // CF-specific. Current valid values are: + // + // - syslog_drain + // - route_forwarding + // - volume_mount + // + // See the Open Service Broker API spec for information on permissions. + Requires []string `json:"requires,omitempty"` + // Bindable represents whether a service is bindable. May be overridden + // on a per-plan basis by the Plan.Bindable field. + Bindable bool `json:"bindable"` + // BindingsRetrievable is ALPHA and may change or disappear at any time. + // BindingsRetrievable will only be provided if alpha features are + // enabled. + // + // BindingsRetrievable represents whether fetching a service binding via + // a GET on the binding resource's endpoint + // (/v2/service_instances/instance-id/service_bindings/binding-id) is + // supported for all plans. + BindingsRetrievable bool `json:"bindings_retrievable,omitempty"` + // PlanUpdatable represents whether instances of this service may be + // updated to a different plan. The serialized form 'plan_updateable' is + // a mistake that has become written into the API for backward + // compatibility reasons and is intentional. Optional; defaults to false. + PlanUpdatable *bool `json:"plan_updateable,omitempty"` + // Plans is the list of the Plans for a service. Plans represent + // different tiers. 
+ Plans []Plan `json:"plans"` + // DashboardClient holds information about the OAuth SSO for the service's + // dashboard. Optional. + DashboardClient *DashboardClient `json:"dashboard_client,omitempty"` + // Metadata is a blob of information about the plan, meant to be user- + // facing content and display instructions. Metadata may contain + // platform-conventional values. Optional. + Metadata map[string]interface{} `json:"metadata,omitempty"` +} + +// DashboardClient contains information about the OAuth SSO +// flow for a Service's dashboard. +type DashboardClient struct { + // ID is the ID to use for the dashboard SSO OAuth client for this + // service. + ID string `json:"id"` + // Secret is a secret for the dashboard SSO OAuth client. + Secret string `json:"secret"` + // RedirectURI is the redirect URI that should be used to obtain an OAuth + // token. + RedirectURI string `json:"redirect_uri"` +} + +// Plan is a plan (or tier) within a service offering. +type Plan struct { + // ID is a globally unique ID that identifies the plan. + ID string `json:"id"` + // Name is the plan's display name. + Name string `json:"name"` + // Description is a brief description of the plan, suitable for + // printing by a CLI. + Description string `json:"description"` + // Free indicates whether the plan is available without charge. Optional; + // defaults to true. + Free *bool `json:"free,omitempty"` + // Bindable indicates whether the plan is bindable and overrides the value + // of the Service.Bindable field if set. Optional, defaults to unset. + Bindable *bool `json:"bindable,omitempty"` + // Metadata is a blob of information about the plan, meant to be user- + // facing content and display instructions. Metadata may contain + // platform-conventional values. Optional. + Metadata map[string]interface{} `json:"metadata,omitempty"` + // Schemas requires a client API version >=2.13. + // + // Schemas is a set of optional JSONSchemas that describe + // the expected parameters for creation and update of instances and + // creation of bindings. + Schemas *Schemas `json:"schemas,omitempty"` +} + +// Schemas requires a client API version >=2.13. +// +// Schemas is a set of optional JSONSchemas that describe +// the expected parameters for creation and update of instances and +// creation of bindings. +type Schemas struct { + ServiceInstance *ServiceInstanceSchema `json:"service_instance,omitempty"` + ServiceBinding *ServiceBindingSchema `json:"service_binding,omitempty"` +} + +// ServiceInstanceSchema requires a client API version >=2.13. +// +// ServiceInstanceSchema represents a plan's schemas for creation and +// update of an API resource. +type ServiceInstanceSchema struct { + Create *InputParametersSchema `json:"create,omitempty"` + Update *InputParametersSchema `json:"update,omitempty"` +} + +// ServiceBindingSchema requires a client API version >=2.13. +// +// ServiceBindingSchema represents a plan's schemas for the parameters +// accepted for binding creation. +type ServiceBindingSchema struct { + Create *RequestResponseSchema `json:"create,omitempty"` +} + +// InputParametersSchema requires a client API version >=2.13. +// +// InputParametersSchema represents a schema for input parameters for creation or +// update of an API resource. +type InputParametersSchema struct { + // The schema definition for the input parameters. Each input parameter + // is expressed as a property within a JSON object. 
+ Parameters interface{} `json:"parameters,omitempty"` +} + +// RequestResponseSchema requires a client API version >=2.14. +// +// RequestResponseSchema contains a schema for input parameters for creation or +// update of an API resource, and a schema for the credentials returned by the +// broker +type RequestResponseSchema struct { + InputParametersSchema + // The schema definition for the broker's response to the bind request. + Response interface{} `json:"response,omitempty"` +} + +// OriginatingIdentity requires a client API version >=2.13. +// +// OriginatingIdentity is used to pass to the broker service an identity from +// the platform +type OriginatingIdentity struct { + // The name of the platform to which the user belongs + Platform string + // A serialized JSON object that describes the user in a way that makes + // sense to the platform + Value string +} + +// CatalogResponse is sent as the response to catalog requests. +type CatalogResponse struct { + Services []Service `json:"services"` +} + +// ProvisionRequest encompasses the request and body parameters +type ProvisionRequest struct { + // InstanceID is the ID of the new instance to provision. The Open + // Service Broker API specification recommends using a GUID for this + // field. + InstanceID string `json:"instance_id"` + // AcceptsIncomplete indicates whether the client can accept asynchronous + // provisioning. If the broker cannot fulfill a request synchronously and + // AcceptsIncomplete is set to false, the broker will reject the request. + // A broker may choose to response to a request with AcceptsIncomplete set + // to true either synchronously or asynchronously. + AcceptsIncomplete bool `json:"accepts_incomplete"` + // ServiceID is the ID of the service to provision a new instance of. + ServiceID string `json:"service_id"` + // PlanID is the ID of the plan to use for the new instance. + PlanID string `json:"plan_id"` + // OrganizationGUID is the platform GUID for the organization under which + // the service is to be provisioned. CF-specific. + OrganizationGUID string `json:"organization_guid"` + // SpaceGUID is the identifier for the project space within the platform + // organization. CF-specific. + SpaceGUID string `json:"space_guid"` + // Parameters is a set of configuration options for the service instance. + // Optional. + Parameters map[string]interface{} `json:"parameters,omitempty"` + // Context requires a client API version >= 2.12. + // + // Context is platform-specific contextual information under which the + // service instance is to be provisioned. + Context map[string]interface{} `json:"context,omitempty"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// ProvisionResponse is sent in response to a provision call +type ProvisionResponse struct { + // Async indicates whether the broker is handling the provision request + // asynchronously. + Async bool `json:"async"` + // DashboardURL is the URL of a web-based management user interface for + // the service instance. + DashboardURL *string `json:"dashboard_url,omitempty"` + // OperationKey is an extra identifier supplied by the broker to identify + // asynchronous operations. + OperationKey *OperationKey `json:"operation,omitempty"` + // ExtensionAPIs is a list of extension APIs for this instance. + // + // ExtensionsAPI is an ALPHA API attribute and may change. 
Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + ExtensionAPIs []ExtensionAPI `json:"extension_apis,omitempty"` +} + +// ExtensionAPI contains information about an API endpoint that describes +// extension operations on a ServiceInstance. +// +// ExtensionAPI is an ALPHA API attribute and may change. Alpha +// features must be enabled and the client must be using the +// latest API Version in order to use this. +type ExtensionAPI struct { + // DiscoveryURL is a URI pointing to a valid OpenAPI 3.0+ document + // describing the API extension(s) to the Open Service Broker API including, + // endpoints, parameters, authentication mechanism and any other detail the + // platform needs for invocation. The location of the API extension + // endpoint(s) can be local to the Service Broker or on a remote server. If + // local to the Service Broker the same authentication method for normal + // Service Broker calls must be used. + DiscoveryURL string `json:"discovery_url,omitempty"` + // ServerURL is a URI pointing to a remote server where API extensions will + // run. This URI will be used as the basepath for the paths objects + // described by the `discovery_url` OpenAPI document. If ServerURL is + // missing, it means that the paths are invoked relative to the service + // broker URL. + ServerURL string `json:"server_url,omitempty"` + // Credentials is a set of authentication details for running any of the + // extension API calls, especially for those running on remote servers. + // + // The information in Credentials should be treated as SECRET. + Credentials map[string]interface{} `json:"credentials,omitempty"` + // AdheresTo is a URI refering to a specification detailing the interface + // the OpenAPI document hosted at the `discovery_url` adheres to. + AdheresTo string `json:"adheres_to,omitempty"` +} + +// OperationKey is an extra identifier from the broker in order to provide extra +// identifiers for asynchronous operations. +type OperationKey string + +// UpdateInstanceRequest is the user-facing object that represents a request +// to update an instance's plan or parameters. +type UpdateInstanceRequest struct { + // InstanceID is the ID of the instance to update. + InstanceID string `json:"instance_id"` + // AcceptsIncomplete indicates whether the client can accept asynchronous + // updating of an instance. If the broker cannot fulfill a request + // synchronously and AcceptsIncomplete is set to false, the broker will reject + // the request. A broker may choose to response to a request with + // AcceptsIncomplete set to true either synchronously or asynchronously. + AcceptsIncomplete bool `json:"accepts_incomplete"` + // ServiceID is the ID of the service the instance is provisioned from. + ServiceID string `json:"service_id"` + // PlanID is the ID the plan to update the instance to. The service must + // support plan updates. If unspecified, indicates that the client does + // not wish to update the plan of the instance. + PlanID *string `json:"plan_id,omitempty"` + // Parameters is a set of configuration options for the instance. If + // unset, indicates that the client does not wish to update the parameters + // for an instance. + Parameters map[string]interface{} `json:"parameters,omitempty"` + // Previous values contains information about the service instance prior to the update. + PreviousValues *PreviousValues `json:"previous_values,omitempty"` + // Context requires a client API version >= 2.12. 
+ // + // Context is platform-specific contextual information under which the + // service instance was created. + Context map[string]interface{} `json:"context,omitempty"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// PreviousValues represents information about the service instance prior to the update. +type PreviousValues struct { + // ID of the plan prior to the update. If present, MUST be a non-empty string. + PlanID string `json:"plan_id,omitempty"` + // Deprecated; determined to be unnecessary as the value is immutable. ID of the service + // for the service instance. If present, MUST be a non-empty string. + ServiceID string `json:"service_id,omitempty"` + // Deprecated; Organization for the service instance MUST be provided by platforms in the + // top-level field context. ID of the organization specified for the service instance. + // If present, MUST be a non-empty string. + OrgID string `json:"organization_id,omitempty"` + // Deprecated; Space for the service instance MUST be provided by platforms in the top-level + // field context. ID of the space specified for the service instance. If present, MUST be + // a non-empty string. + SpaceID string `json:"space_id,omitempty"` +} + +// UpdateInstanceResponse represents a broker's response to an update instance +// request. +type UpdateInstanceResponse struct { + // Async indicates whether the broker is handling the update request + // asynchronously. + Async bool `json:"async"` + // DashboardURL is an ALPHA API attribute and may change. Alpha + // features must be enabled and the client must be using the latest + // API Version in order to use this. + // + // DashboardURL is the URL of a web-based management user interface for + // the service instance. + DashboardURL *string `json:"dashboard_url,omitempty"` + // OperationKey is an extra identifier supplied by the broker to identify + // asynchronous operations. + OperationKey *OperationKey `json:"operation,omitempty"` +} + +// DeprovisionRequest represents a request to deprovision an instance of a +// service. +type DeprovisionRequest struct { + // InstanceID is the ID of the instance to deprovision. + InstanceID string `json:"instance_id"` + // AcceptsIncomplete indicates whether the client can accept asynchronous + // deprovisioning. If the broker cannot fulfill a request synchronously and + // AcceptsIncomplete is set to false, the broker will reject the request. + // A broker may choose to response to a request with AcceptsIncomplete set + // to true either synchronously or asynchronously. + AcceptsIncomplete bool `json:"accepts_incomplete"` + // ServiceID is the ID of the service the instance is provisioned from. + ServiceID string `json:"service_id"` + // PlanID is the ID of the plan the instance is provisioned from. + PlanID string `json:"plan_id"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// DeprovisionResponse represents a broker's response to a deprovision request. +type DeprovisionResponse struct { + // Async indicates whether the broker is handling the deprovision request + // asynchronously. + Async bool `json:"async"` + // OperationKey is an extra identifier supplied by the broker to identify + // asynchronous operations. 
+ OperationKey *OperationKey `json:"operation,omitempty"` +} + +// LastOperationRequest represents a request to a broker to give the state of +// the action it is completing asynchronously. +type LastOperationRequest struct { + // InstanceID is the instance of the service to query the last operation + // for. + InstanceID string `json:"instance_id"` + // ServiceID is the ID of the service the instance is provisioned from. + // Optional, but recommended. + ServiceID *string `json:"service_id,omitempty"` + // PlanID is the ID of the plan the instance is provisioned from. + // Optional, but recommended. + PlanID *string `json:"plan_id,omitempty"` + // OperationKey is the operation key provided by the broker in the + // response to the initial request. Optional, but must be sent if + // supplied in the response to the original request. + OperationKey *OperationKey `json:"operation,omitempty"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// BindingLastOperationRequest represents a request to a broker to give the +// state of the action on a binding it is completing asynchronously. +type BindingLastOperationRequest struct { + // InstanceID is the instance of the service to query the last operation + // for. + InstanceID string `json:"instance_id"` + // BindingID is the binding to query the last operation for. + BindingID string `json:"binding_id"` + // ServiceID is the ID of the service the instance is provisioned from. + // Optional, but recommended. + ServiceID *string `json:"service_id,omitempty"` + // PlanID is the ID of the plan the instance is provisioned from. + // Optional, but recommended. + PlanID *string `json:"plan_id,omitempty"` + // OperationKey is the operation key provided by the broker in the + // response to the initial request. Optional, but must be sent if + // supplied in the response to the original request. + OperationKey *OperationKey `json:"operation,omitempty"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// LastOperationResponse represents the broker response with the state of a +// discrete action that the broker is completing asynchronously. +type LastOperationResponse struct { + // State is the state of the queried operation. + State LastOperationState `json:"state"` + // Description is a message from the broker describing the current state + // of the operation. + Description *string `json:"description,omitempty"` +} + +// LastOperationState is a typedef representing the state of an ongoing +// operation for an instance. +type LastOperationState string + +// Defines the possible states of an asynchronous request to a broker. +const ( + StateInProgress LastOperationState = "in progress" + StateSucceeded LastOperationState = "succeeded" + StateFailed LastOperationState = "failed" +) + +// BindRequest represents a request to create a new binding to an instance of +// a service. +type BindRequest struct { + // BindingID is the ID of the new binding to create. The Open Service + // Broker API specification recommends using a GUID for this field. + BindingID string `json:"binding_id"` + // InstanceID is the ID of the instance to bind to. + InstanceID string `json:"instance_id"` + // AcceptsIncomplete is an ALPHA API attribute and may change. 
Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + // + // AcceptsIncomplete indicates whether the client can accept asynchronous + // binding. If the broker cannot fulfill a request synchronously and + // AcceptsIncomplete is set to false, the broker will reject the request. + // A broker may choose to response to a request with AcceptsIncomplete set + // to true either synchronously or asynchronously. + AcceptsIncomplete bool `json:"accepts_incomplete"` + // ServiceID is the ID of the service the instance was provisioned from. + ServiceID string `json:"service_id"` + // PlanID is the ID of the plan the instance was provisioned from. + PlanID string `json:"plan_id"` + // Deprecated; use bind_resource.app_guid to send this value instead. + AppGUID *string `json:"app_guid,omitempty"` + // BindResource holds extra information about a binding. Optional, but + // it's complicated. TODO: clarify + BindResource *BindResource `json:"bind_resource,omitempty"` + // Parameters is configuration parameters for the binding. Optional. + Parameters map[string]interface{} `json:"parameters,omitempty"` + // Context requires a client API version >= 2.13. + // + // Context is platform-specific contextual information under which the + // service binding is to be created. + Context map[string]interface{} `json:"context,omitempty"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// BindResource contains data for platform resources associated with a +// binding. +type BindResource struct { + AppGUID *string `json:"appGuid,omitempty"` + Route *string `json:"route,omitempty"` +} + +// BindResponse represents a broker's response to a BindRequest. +type BindResponse struct { + // Async is an ALPHA API attribute and may change. Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + // + // Async indicates whether the broker is handling the bind request + // asynchronously. + Async bool `json:"async"` + // Credentials is a free-form hash of credentials that can be used by + // applications or users to access the service. + Credentials map[string]interface{} `json:"credentials,omitempty"` + // SyslogDrainURl is a URL to which logs must be streamed. CF-specific. + // May only be supplied by a service that declares a requirement for the + // 'syslog_drain' permission. + SyslogDrainURL *string `json:"syslog_drain_url,omitempty"` + // RouteServiceURL is a URL to which the platform must proxy requests to + // the application the binding is for. CF-specific. May only be supplied + // by a service that declares a requirement for the 'route_service' + // permission. + RouteServiceURL *string `json:"route_service_url,omitempty"` + // VolumeMounts is an array of configuration string for mounting volumes. + // CF-specific. May only be supplied by a service that declares a + // requirement for the 'volume_mount' permission. + VolumeMounts []interface{} `json:"volume_mounts,omitempty"` + // OperationKey is an ALPHA API attribute and may change. Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + // + // OperationKey is an extra identifier supplied by the broker to identify + // asynchronous operations. 
+ OperationKey *OperationKey `json:"operation,omitempty"` +} + +// UnbindRequest represents a request to unbind a particular binding. +type UnbindRequest struct { + // InstanceID is the ID of the instance the binding is for. + InstanceID string `json:"instance_id"` + // BindingID is the ID of the binding to delete. + BindingID string `json:"binding_id"` + // AcceptsIncomplete is an ALPHA API attribute and may change. Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + // + // AcceptsIncomplete indicates whether the client can accept asynchronous + // unbinding. If the broker cannot fulfill a request synchronously and + // AcceptsIncomplete is set to false, the broker will reject the request. + // A broker may choose to response to a request with AcceptsIncomplete set + // to true either synchronously or asynchronously. + AcceptsIncomplete bool `json:"accepts_incomplete"` + // ServiceID is the ID of the service the instance was provisioned from. + ServiceID string `json:"service_id"` + // PlanID is the ID of the plan the instance was provisioned from. + PlanID string `json:"plan_id"` + // OriginatingIdentity is the identity on the platform of the user making this request. + OriginatingIdentity *OriginatingIdentity `json:"originatingIdentity,omitempty"` +} + +// UnbindResponse represents a broker's response to an UnbindRequest. +type UnbindResponse struct { + // Async is an ALPHA API attribute and may change. Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + // + // Async indicates whether the broker is handling the unbind request + // asynchronously. + Async bool `json:"async"` + // OperationKey is an ALPHA API attribute and may change. Alpha + // features must be enabled and the client must be using the + // latest API Version in order to use this. + // + // OperationKey is an extra identifier supplied by the broker to identify + // asynchronous operations. + OperationKey *OperationKey `json:"operation,omitempty"` +} + +// GetBindingRequest represents a request to do a GET on a particular binding. +type GetBindingRequest struct { + // InstanceID is the ID of the instance the binding is for. + InstanceID string `json:"instance_id"` + // BindingID is the ID of the binding to delete. + BindingID string `json:"binding_id"` +} + +// GetBindingResponse is sent as the response to doing a GET on a particular +// binding. +type GetBindingResponse struct { + // Credentials is a free-form hash of credentials that can be used by + // applications or users to access the service. + Credentials map[string]interface{} `json:"credentials,omitempty"` + // SyslogDrainURl is a URL to which logs must be streamed. CF-specific. + // May only be supplied by a service that declares a requirement for the + // 'syslog_drain' permission. + SyslogDrainURL *string `json:"syslog_drain_url,omitempty"` + // RouteServiceURL is a URL to which the platform must proxy requests to + // the application the binding is for. CF-specific. May only be supplied + // by a service that declares a requirement for the 'route_service' + // permission. + RouteServiceURL *string `json:"route_service_url,omitempty"` + // VolumeMounts is an array of configuration string for mounting volumes. + // CF-specific. May only be supplied by a service that declares a + // requirement for the 'volume_mount' permission. 
+ VolumeMounts []interface{} `json:"volume_mounts,omitempty"` + // Parameters is configuration parameters for the binding. + Parameters map[string]interface{} `json:"parameters,omitempty"` +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/unbind.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/unbind.go new file mode 100644 index 00000000..30a0998b --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/unbind.go @@ -0,0 +1,99 @@ +package v2 + +import ( + "fmt" + "net/http" + + "github.com/golang/glog" +) + +type unbindSuccessResponseBody struct { + Operation *string `json:"operation"` +} + +func (c *client) Unbind(r *UnbindRequest) (*UnbindResponse, error) { + if r.AcceptsIncomplete { + if err := c.validateAlphaAPIMethodsAllowed(); err != nil { + return nil, AsyncBindingOperationsNotAllowedError{ + reason: err.Error(), + } + } + } + + if err := validateUnbindRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(bindingURLFmt, c.URL, r.InstanceID, r.BindingID) + params := map[string]string{} + params[VarKeyServiceID] = r.ServiceID + params[VarKeyPlanID] = r.PlanID + if r.AcceptsIncomplete { + params[AcceptsIncomplete] = "true" + } + + response, err := c.prepareAndDo(http.MethodDelete, fullURL, params, nil, r.OriginatingIdentity) + if err != nil { + return nil, err + } + + switch response.StatusCode { + case http.StatusOK, http.StatusGone: + userResponse := &UnbindResponse{} + if err := c.unmarshalResponse(response, userResponse); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + return userResponse, nil + case http.StatusAccepted: + if !r.AcceptsIncomplete { + return nil, c.handleFailureResponse(response) + } + + responseBodyObj := &unbindSuccessResponseBody{} + if err := c.unmarshalResponse(response, responseBodyObj); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + var opPtr *OperationKey + if responseBodyObj.Operation != nil { + opStr := *responseBodyObj.Operation + op := OperationKey(opStr) + opPtr = &op + } + + userResponse := &UnbindResponse{ + OperationKey: opPtr, + } + if response.StatusCode == http.StatusAccepted { + if c.Verbose { + glog.Infof("broker %q: received asynchronous response", c.Name) + } + userResponse.Async = true + } + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func validateUnbindRequest(request *UnbindRequest) error { + if request.BindingID == "" { + return required("bindingID") + } + + if request.InstanceID == "" { + return required("instanceID") + } + + if request.ServiceID == "" { + return required("serviceID") + } + + if request.PlanID == "" { + return required("planID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/update_instance.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/update_instance.go new file mode 100644 index 00000000..4863587e --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/update_instance.go @@ -0,0 +1,110 @@ +package v2 + +import ( + "fmt" + "net/http" +) + +// internal message body types + +type updateInstanceRequestBody struct { + ServiceID string `json:"service_id"` + PlanID *string `json:"plan_id,omitempty"` + Parameters map[string]interface{} `json:"parameters,omitempty"` + Context map[string]interface{} `json:"context,omitempty"` + PreviousValues *PreviousValues `json:"previous_values,omitempty"` +} + 
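> Editor's note: the vendored provision, update, and last-operation clients in this diff all follow the same asynchronous contract described in the Client interface comments above (send `AcceptsIncomplete`, then poll `last_operation` if the broker answers 202 Accepted). The sketch below is an illustrative caller, not part of this PR: `NewClient` and `DefaultClientConfiguration` come from the same library but are not shown in this excerpt, and the broker URL and GUIDs are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"time"

	osb "github.com/pmorie/go-open-service-broker-client/v2"
)

func main() {
	// DefaultClientConfiguration and NewClient live elsewhere in the
	// library and are not part of this diff; the URL is a placeholder.
	config := osb.DefaultClientConfiguration()
	config.URL = "http://broker.example.com"

	client, err := osb.NewClient(config)
	if err != nil {
		log.Fatal(err)
	}

	// Ask for an asynchronous provision; the broker may still choose to
	// complete the request synchronously.
	provisionResp, err := client.ProvisionInstance(&osb.ProvisionRequest{
		InstanceID:        "instance-guid",
		ServiceID:         "service-guid",
		PlanID:            "plan-guid",
		OrganizationGUID:  "org-guid",
		SpaceGUID:         "space-guid",
		AcceptsIncomplete: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// If the broker returned 202 Accepted (Async == true), poll the
	// last_operation endpoint until the operation leaves "in progress".
	for provisionResp.Async {
		lastOp, err := client.PollLastOperation(&osb.LastOperationRequest{
			InstanceID:   "instance-guid",
			OperationKey: provisionResp.OperationKey,
		})
		if err != nil {
			log.Fatal(err)
		}
		if lastOp.State != osb.StateInProgress {
			fmt.Println("provision finished with state:", lastOp.State)
			break
		}
		time.Sleep(10 * time.Second)
	}
}
```

> The same polling loop applies to an asynchronous deprovision, except that an HTTP Gone error from PollLastOperation (tested with the library's IsGoneError, as the interface comments above describe) indicates the instance has already been deleted successfully.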
+type updateInstanceResponseBody struct { + DashboardURL *string `json:"dashboard_url"` + Operation *string `json:"operation"` +} + +func (c *client) UpdateInstance(r *UpdateInstanceRequest) (*UpdateInstanceResponse, error) { + if err := validateUpdateInstanceRequest(r); err != nil { + return nil, err + } + + fullURL := fmt.Sprintf(serviceInstanceURLFmt, c.URL, r.InstanceID) + params := map[string]string{} + if r.AcceptsIncomplete { + params[AcceptsIncomplete] = "true" + } + + requestBody := &updateInstanceRequestBody{ + ServiceID: r.ServiceID, + PlanID: r.PlanID, + Parameters: r.Parameters, + PreviousValues: r.PreviousValues, + } + + if c.APIVersion.AtLeast(Version2_12()) { + requestBody.Context = r.Context + } + + response, err := c.prepareAndDo(http.MethodPatch, fullURL, params, requestBody, r.OriginatingIdentity) + if err != nil { + return nil, err + } + switch response.StatusCode { + case http.StatusOK: + responseBodyObj := &updateInstanceResponseBody{} + if err := c.unmarshalResponse(response, responseBodyObj); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + userResponse := &UpdateInstanceResponse{ + Async: false, + OperationKey: nil, + } + if c.validateAlphaAPIMethodsAllowed() == nil { + userResponse.DashboardURL = responseBodyObj.DashboardURL + } + + return userResponse, nil + case http.StatusAccepted: + if !r.AcceptsIncomplete { + // If the client did not signify that it could handle asynchronous + // operations, a '202 Accepted' response should be treated as an error. + return nil, c.handleFailureResponse(response) + } + + responseBodyObj := &updateInstanceResponseBody{} + if err := c.unmarshalResponse(response, responseBodyObj); err != nil { + return nil, HTTPStatusCodeError{StatusCode: response.StatusCode, ResponseError: err} + } + + var opPtr *OperationKey + if responseBodyObj.Operation != nil { + opStr := *responseBodyObj.Operation + op := OperationKey(opStr) + opPtr = &op + } + + userResponse := &UpdateInstanceResponse{ + Async: true, + OperationKey: opPtr, + } + if c.validateAlphaAPIMethodsAllowed() == nil { + userResponse.DashboardURL = responseBodyObj.DashboardURL + } + + // TODO: fix op key handling + + return userResponse, nil + default: + return nil, c.handleFailureResponse(response) + } +} + +func validateUpdateInstanceRequest(request *UpdateInstanceRequest) error { + if request.InstanceID == "" { + return required("instanceID") + } + + if request.ServiceID == "" { + return required("serviceID") + } + + return nil +} diff --git a/vendor/github.com/pmorie/go-open-service-broker-client/v2/version.go b/vendor/github.com/pmorie/go-open-service-broker-client/v2/version.go new file mode 100644 index 00000000..f7a670ad --- /dev/null +++ b/vendor/github.com/pmorie/go-open-service-broker-client/v2/version.go @@ -0,0 +1,54 @@ +package v2 + +// APIVersion represents a specific version of the OSB API. +type APIVersion struct { + label string + order byte +} + +// AtLeast returns whether the API version is greater than or equal to the +// given API version. +func (v APIVersion) AtLeast(test APIVersion) bool { + return v.order >= test.order +} + +// HeaderValue returns the value that should be sent in the API version header +// for this API version. +func (v APIVersion) HeaderValue() string { + return v.label +} + +const ( + // internalAPIVersion2_11 represents the 2.11 version of the Open Service + // Broker API. 
+ internalAPIVersion2_11 = "2.11" + + // internalAPIVersion2_12 represents the 2.12 version of the Open Service + // Broker API. + internalAPIVersion2_12 = "2.12" + + // internalAPIVersion2_13 represents the 2.13 version of the Open Service + // Broker API. + internalAPIVersion2_13 = "2.13" +) + +//Version2_11 returns an APIVersion struct with the internal API version set to "2.11" +func Version2_11() APIVersion { + return APIVersion{label: internalAPIVersion2_11, order: 0} +} + +//Version2_12 returns an APIVersion struct with the internal API version set to "2.12" +func Version2_12() APIVersion { + return APIVersion{label: internalAPIVersion2_12, order: 1} +} + +//Version2_13 returns an APIVersion struct with the internal API version set to "2.13" +func Version2_13() APIVersion { + return APIVersion{label: internalAPIVersion2_13, order: 2} +} + +// LatestAPIVersion returns the latest supported API version in the current +// release of this library. +func LatestAPIVersion() APIVersion { + return Version2_13() +} diff --git a/vendor/github.com/pmorie/osb-broker-lib/LICENSE b/vendor/github.com/pmorie/osb-broker-lib/LICENSE new file mode 100644 index 00000000..261eeb9e --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/interface.go b/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/interface.go new file mode 100644 index 00000000..8c95c3ba --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/interface.go @@ -0,0 +1,219 @@ +package broker + +import ( + "net/http" + + osb "github.com/pmorie/go-open-service-broker-client/v2" +) + +// Interface contains the business logic for the broker's operations. +// Interface is the interface broker authors should implement and is +// embedded in an APISurface. +type Interface interface { + // ValidateBrokerAPIVersion encapsulates the business logic of validating + // the OSB API version sent to the broker with every request and returns + // an error. 
+ // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#api-version-header + ValidateBrokerAPIVersion(version string) error + // GetCatalog encapsulates the business logic for returning the broker's + // catalog of services. Brokers must tell platforms they're integrating with + // which services they provide. GetCatalog is called when a platform makes + // initial contact with the broker to find out about that broker's services. + // + // The parameters are: + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#catalog-management + GetCatalog(c *RequestContext) (*CatalogResponse, error) + // Provision encapsulates the business logic for a provision operation and + // returns a osb.ProvisionResponse or an error. Provisioning creates a new + // instance of a particular service. + // + // The parameters are: + // - a osb.ProvisionRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a ProvisionResponse for a successful operation + // or an error. The APISurface handles translating ProvisionResponses or + // errors into the correct form in the http response. + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#provisioning + Provision(request *osb.ProvisionRequest, c *RequestContext) (*ProvisionResponse, error) + // Deprovision encapsulates the business logic for a deprovision operation + // and returns a osb.DeprovisionResponse or an error. Deprovisioning deletes + // an instance of a service and releases the resources associated with it. + // + // The parameters are: + // - a osb.DeprovisionRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a DeprovisionResponse for a successful + // operation or an error. The APISurface handles translating + // DeprovisionResponses or errors into the correct form in the http + // response. + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#deprovisioning + Deprovision(request *osb.DeprovisionRequest, c *RequestContext) (*DeprovisionResponse, error) + // LastOperation encapsulates the business logic for a last operation + // request and returns a osb.LastOperationResponse or an error. + // LastOperation is called when a platform checks the status of an ongoing + // asynchronous operation on an instance of a service. 
+ // + // The parameters are: + // - a osb.LastOperationRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a LastOperationResponse for a successful + // operation or an error. The APISurface handles translating + // LastOperationResponses or errors into the correct form in the http + // response. + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#polling-last-operation + LastOperation(request *osb.LastOperationRequest, c *RequestContext) (*LastOperationResponse, error) + // Bind encapsulates the business logic for a bind operation and returns a + // osb.BindResponse or an error. Binding creates a new set of credentials for + // a consumer to use an instance of a service. Not all services are + // bindable; in order for a service to be bindable, either the service or + // the current plan associated with the instance must declare itself to be + // bindable. + // + // The parameters are: + // - a osb.BindRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a BindResponse for a successful operation or + // an error. The APISurface handles translating BindResponses or errors into + // the correct form in the http response. + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#binding + Bind(request *osb.BindRequest, c *RequestContext) (*BindResponse, error) + // GetBinding encapsulates the business logic that returns a binding in + // the form of a BindingResponse. The platform will only request a Binding + // if the broker's catalog has declared `"bindings_retrievable": true` for + // a particular service. + // + // The parameters are: + // - a osb.GetBindingRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a GetBindingResponse for a successful operation + // or an error. The APISurface handles translating GetBindingResponses or + // errors into the correct form in the http response. + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#fetching-a-service-binding + GetBinding(request *osb.GetBindingRequest, c *RequestContext) (*GetBindingResponse, error) + // BindingLastOperation encapsulates the business logic for a last operation + // request and returns a osb.BindingLastOperationResponse or an error. + // BindingLastOperation is called when a platform checks the status of an ongoing + // asynchronous binding operation on an instance of a binding. + // + // NOTE: Asynchronous bindings are currently a proposal against the OSB spec + // that is in the "validating through implementation phase". 
For more information, + // see the PR: https://github.com/openservicebrokerapi/servicebroker/pull/334 + // + // The parameters are: + // - a osb.BindingLastOperationRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a BindingLastOperationResponse for a successful + // operation or an error. The APISurface handles translating + // BindingLastOperationResponses or errors into the correct form in the http + // response. + // + // For more information, see: + // + // https://github.com/mattmcneeney/servicebroker/blob/219bf56c58a2f37d4a1a1b1b49b6e0dcc9683167/spec.md#polling-last-operation-for-service-bindings + BindingLastOperation(request *osb.BindingLastOperationRequest, c *RequestContext) (*LastOperationResponse, error) + // Unbind encapsulates the business logic for an unbind operation and + // returns a osb.UnbindResponse or an error. Unbind deletes a binding and the + // resources associated with it. + // + // The parameters are: + // - a osb.UnbindRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a UnbindResponse for a successful operation or + // an error. The APISurface handles translating UnbindResponses or errors + // into the correct form in the http response. + // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#unbinding + Unbind(request *osb.UnbindRequest, c *RequestContext) (*UnbindResponse, error) + // Update encapsulates the business logic for an update operation and + // returns a osb.UpdateInstanceResponse or an error. Update updates the + // instance. + // + // The parameters are: + // - a osb.UpdateInstanceRequest created from the original http request + // - a RequestContext object which encapsulates: + // - a response writer, in case fine-grained control over the response is + // required + // - the original http request, in case access is required (to get special + // request headers, for example) + // + // Implementers should return a UpdateInstanceResponse for a successful operation or + // an error. The APISurface handles translating UpdateInstanceResponses or errors + // into the correct form in the http response. 
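+	// An update may change the service plan and/or parameters of the
+	// instance. Like provisioning, an update may complete asynchronously,
+	// in which case the handler responds with 202 Accepted.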
+ // + // For more information, see: + // + // https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#updating-a-service-instance + Update(request *osb.UpdateInstanceRequest, c *RequestContext) (*UpdateInstanceResponse, error) +} + +// RequestContext encapsulates the following parameters: +// - a response writer, in case fine-grained control over the response is required +// - the original http request, in case access is required (to get special +// request headers, for example) +type RequestContext struct { + Writer http.ResponseWriter + Request *http.Request +} diff --git a/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/types.go b/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/types.go new file mode 100644 index 00000000..b1e65384 --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/types.go @@ -0,0 +1,53 @@ +package broker + +import osb "github.com/pmorie/go-open-service-broker-client/v2" + +// CatalogResponse is sent as the response to a catalog requests. +type CatalogResponse struct { + osb.CatalogResponse +} + +// ProvisionResponse is sent as the response to a provision call. +type ProvisionResponse struct { + osb.ProvisionResponse + + // Exists - is set if the request was already completed + // and the requested parameters are identical to the existing + // Service Instance. + Exists bool `json:"-"` +} + +// UpdateInstanceResponse is sent as the response to a update call. +type UpdateInstanceResponse struct { + osb.UpdateInstanceResponse +} + +// DeprovisionResponse is sent as the response to a deprovision call. +type DeprovisionResponse struct { + osb.DeprovisionResponse +} + +// LastOperationResponse is sent as the response to a last operation call. +type LastOperationResponse struct { + osb.LastOperationResponse +} + +// BindResponse is sent as the response to a bind call. +type BindResponse struct { + osb.BindResponse + + // Exists - is set if the request was already completed + // and the requested parameters are identical to the existing + // Service Binding. + Exists bool `json:"-"` +} + +// GetBinding is sent as the response to a get binding call. +type GetBindingResponse struct { + osb.GetBindingResponse +} + +// UnbindResponse is sent as the response to a bind call. 
+type UnbindResponse struct { + osb.UnbindResponse +} diff --git a/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/user_info.go b/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/user_info.go new file mode 100644 index 00000000..1ed34cd3 --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/pkg/broker/user_info.go @@ -0,0 +1,93 @@ +package broker + +import ( + "encoding/json" + "fmt" + + osb "github.com/pmorie/go-open-service-broker-client/v2" +) + +// parseKubernetesIdentity - creates a kubernetes identity from the +// orginating identity +func parseKubernetesIdentity(o osb.OriginatingIdentity) (*KubernetesUserInfo, error) { + u := KubernetesUserInfo{} + err := json.Unmarshal([]byte(o.Value), &u) + if err != nil { + return nil, fmt.Errorf("unable to unmarshal json for value while parsing Kubernetes identity") + } + return &u, nil +} + +// parseCloudFoundryIdentity - creates a cloud foundry identity from the +// orginating identity +func parseCloudFoundryIdentity(o osb.OriginatingIdentity) (*CloudFoundryUserInfo, error) { + m := map[string]interface{}{} + err := json.Unmarshal([]byte(o.Value), &m) + if err != nil { + return nil, fmt.Errorf("unable to unmarshal json for value while parsing cloud foundry identity") + } + // Validate that user_id MUST be in the json object. + var u interface{} + var user string + var ok bool + if u, ok = m["user_id"]; !ok { + return nil, fmt.Errorf("user_id key was not found in cloud foundry object") + } + user, ok = u.(string) + if !ok { + return nil, fmt.Errorf("user_id value was not a string in cloud foundry object") + } + delete(m, "user_id") + c := CloudFoundryUserInfo{UserID: user, Extras: m} + return &c, nil +} + +// ParseIdentity - retrieve the identity union type +func ParseIdentity(o osb.OriginatingIdentity) (Identity, error) { + identity := Identity{Platform: o.Platform} + switch o.Platform { + case osb.PlatformKubernetes: + k, err := parseKubernetesIdentity(o) + if err != nil { + return identity, err + } + identity.Kubernetes = k + case osb.PlatformCloudFoundry: + c, err := parseCloudFoundryIdentity(o) + if err != nil { + return identity, err + } + identity.CloudFoundry = c + default: + m := map[string]interface{}{} + err := json.Unmarshal([]byte(o.Value), &m) + if err != nil { + return identity, fmt.Errorf("unable to unmarshal json for value") + } + identity.Unknown = m + } + return identity, nil +} + +// Identity - union type, used to access the correct originating identity +// implementation type +type Identity struct { + Platform string + Kubernetes *KubernetesUserInfo + CloudFoundry *CloudFoundryUserInfo + Unknown map[string]interface{} +} + +// KubernetesUserInfo - kubernetes user info object +type KubernetesUserInfo struct { + Username string `json:"username"` + UID string `json:"uid"` + Groups []string `json:"groups"` + Extra map[string][]string `json:"extra"` +} + +// CloudFoundryUserInfo - cloud foundry user info object +type CloudFoundryUserInfo struct { + UserID string + Extras map[string]interface{} +} diff --git a/vendor/github.com/pmorie/osb-broker-lib/pkg/metrics/metrics.go b/vendor/github.com/pmorie/osb-broker-lib/pkg/metrics/metrics.go new file mode 100644 index 00000000..16e8c358 --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/pkg/metrics/metrics.go @@ -0,0 +1,32 @@ +package metrics + +import ( + prom "github.com/prometheus/client_golang/prometheus" +) + +const actionsMetricName = "osb_actions_total" + +// OSBMetricsCollector - action counter +type OSBMetricsCollector struct { + Actions *prom.CounterVec +} 
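+
+// Illustrative usage sketch (not part of the upstream package): the collector
+// satisfies prometheus.Collector, so it can be registered with a registry and
+// incremented once per OSB action, e.g.
+//
+//	m := New()
+//	prom.MustRegister(m)
+//	m.Actions.WithLabelValues("provision").Inc()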
+ +// New - constructs a metrics collector with an action counter +func New() *OSBMetricsCollector { + return &OSBMetricsCollector{ + Actions: prom.NewCounterVec(prom.CounterOpts{ + Name: actionsMetricName, + Help: "Total amount of actions requested.", + }, []string{"action"}), + } +} + +// Describe returns all descriptions of the collector. +func (c *OSBMetricsCollector) Describe(ch chan<- *prom.Desc) { + c.Actions.Describe(ch) +} + +// Collect returns the current state of all metrics of the collector. +func (c *OSBMetricsCollector) Collect(ch chan<- prom.Metric) { + c.Actions.Collect(ch) +} diff --git a/vendor/github.com/pmorie/osb-broker-lib/pkg/rest/apisurface.go b/vendor/github.com/pmorie/osb-broker-lib/pkg/rest/apisurface.go new file mode 100644 index 00000000..e07a9837 --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/pkg/rest/apisurface.go @@ -0,0 +1,676 @@ +package rest + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "net/http" + "strings" + + "github.com/golang/glog" + "github.com/gorilla/mux" + + osb "github.com/pmorie/go-open-service-broker-client/v2" + + "github.com/pmorie/osb-broker-lib/pkg/broker" + "github.com/pmorie/osb-broker-lib/pkg/metrics" +) + +// APISurface is a type that describes a OSB REST API surface. APISurface is +// responsible for decoding HTTP requests and transforming them into the request +// object for each operation and transforming responses and errors returned from +// the broker's internal business logic into the correct places in the HTTP +// response. +type APISurface struct { + // Broker contains the business logic that provides the + // implementation for the different OSB API operations. + Broker broker.Interface + Metrics *metrics.OSBMetricsCollector + EnableCORS bool +} + +// NewAPISurface returns a new, ready-to-go APISurface. +func NewAPISurface(brokerInterface broker.Interface, m *metrics.OSBMetricsCollector) (*APISurface, error) { + api := &APISurface{ + Broker: brokerInterface, + Metrics: m, + } + + return api, nil +} + +// OptionsHandler deals with the OPTIONS type request allowing the client to gather the headers. +func (s *APISurface) OptionsHandler(w http.ResponseWriter, r *http.Request) { + s.writeResponse(w, http.StatusOK, nil) +} + +// GetCatalogHandler is the mux handler that dispatches requests to get the +// broker's catalog to the broker's Interface. +func (s *APISurface) GetCatalogHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("get_catalog").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.GetCatalog(c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + s.writeResponse(w, http.StatusOK, response) +} + +// ProvisionHandler is the mux handler that dispatches ProvisionRequests to the +// broker's Interface. 
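+// On success it writes 201 Created, 202 Accepted when provisioning will
+// complete asynchronously, or 200 OK when an identical Service Instance
+// already exists.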
+func (s *APISurface) ProvisionHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("provision").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + request, err := unpackProvisionRequest(r) + if err != nil { + s.writeError(w, err, http.StatusBadRequest) + return + } + + glog.V(4).Infof("Received ProvisionRequest for instanceID %q", request.InstanceID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.Provision(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + // MUST be returned if the Service Instance was provisioned + // as a result of this request and Not async + status := http.StatusCreated + + // MUST be returned if the Service Instance provisioning is in progress. + if response.Async { + status = http.StatusAccepted + } + + if response.Exists { + // MUST be returned if the Service Instance already exists, + // is fully provisioned, and the requested parameters + // are identical to the existing Service Instance + status = http.StatusOK + } + + s.writeResponse(w, status, response) +} + +// unpackProvisionRequest unpacks an osb request from the given HTTP request. +func unpackProvisionRequest(r *http.Request) (*osb.ProvisionRequest, error) { + // unpacking an osb request from an http request involves: + // - unmarshaling the request body + // - getting IDs out of mux vars + // - getting query parameters from request URL + // - retrieve originating origin identity + osbRequest := &osb.ProvisionRequest{} + if err := unmarshalRequestBody(r, osbRequest); err != nil { + return nil, err + } + + vars := mux.Vars(r) + osbRequest.InstanceID = vars[osb.VarKeyInstanceID] + + asyncQueryParamVal := r.URL.Query().Get(osb.AcceptsIncomplete) + if strings.ToLower(asyncQueryParamVal) == "true" { + osbRequest.AcceptsIncomplete = true + } + identity, err := retrieveOriginatingIdentity(r) + // This could be not found because platforms may support the feature + // but are not guaranteed to. + if err != nil { + glog.Infof("Unable to retrieve originating identity - %v", err) + } + + osbRequest.OriginatingIdentity = identity + + return osbRequest, nil +} + +// DeprovisionHandler is the mux handler that dispatches deprovision requests to +// the broker's Interface. +func (s *APISurface) DeprovisionHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("deprovision").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + request, err := unpackDeprovisionRequest(r) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + glog.V(4).Infof("Received DeprovisionRequest for instanceID %q", request.InstanceID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.Deprovision(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + status := http.StatusOK + if response.Async { + status = http.StatusAccepted + } + + s.writeResponse(w, status, response) +} + +// unpackDeprovisionRequest unpacks an osb request from the given HTTP request. 
+func unpackDeprovisionRequest(r *http.Request) (*osb.DeprovisionRequest, error) { + osbRequest := &osb.DeprovisionRequest{} + + vars := mux.Vars(r) + osbRequest.InstanceID = vars[osb.VarKeyInstanceID] + osbRequest.ServiceID = r.FormValue(osb.VarKeyServiceID) + osbRequest.PlanID = r.FormValue(osb.VarKeyPlanID) + + asyncQueryParamVal := r.FormValue(osb.AcceptsIncomplete) + if strings.ToLower(asyncQueryParamVal) == "true" { + osbRequest.AcceptsIncomplete = true + } + identity, err := retrieveOriginatingIdentity(r) + // This could be not found because platforms may support the feature + // but are not guaranteed to. + if err != nil { + glog.Infof("Unable to retrieve originating identity - %v", err) + } + osbRequest.OriginatingIdentity = identity + + return osbRequest, nil +} + +// LastOperationHandler is the mux handler that dispatches last-operation +// requests to the broker's Interface. +func (s *APISurface) LastOperationHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("last_operation").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + request, err := unpackLastOperationRequest(r) + if err != nil { + // TODO: This should return a 400 in this case as it is either + // malformed or missing mandatory data, as per the OSB spec. + s.writeError(w, err, http.StatusInternalServerError) + return + } + + glog.V(4).Infof("Received LastOperationRequest for instanceID %q", request.InstanceID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.LastOperation(request, c) + if err != nil { + // TODO: This should return a 400 in this case as it is either + // malformed or missing mandatory data, as per the OSB spec. + s.writeError(w, err, http.StatusInternalServerError) + return + } + + s.writeResponse(w, http.StatusOK, response) +} + +// unpackLastOperationRequest unpacks an osb request from the given HTTP request. +func unpackLastOperationRequest(r *http.Request) (*osb.LastOperationRequest, error) { + osbRequest := &osb.LastOperationRequest{} + + vars := mux.Vars(r) + osbRequest.InstanceID = vars[osb.VarKeyInstanceID] + serviceID := vars[osb.VarKeyServiceID] + if serviceID != "" { + osbRequest.ServiceID = &serviceID + } + planID := vars[osb.VarKeyPlanID] + if planID != "" { + osbRequest.PlanID = &planID + } + operation := vars[osb.VarKeyOperation] + if operation != "" { + typedOperation := osb.OperationKey(operation) + osbRequest.OperationKey = &typedOperation + } + return osbRequest, nil +} + +// BindHandler is the mux handler that dispatches bind requests to the broker's +// Interface. 
+func (s *APISurface) BindHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("bind").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + request, err := unpackBindRequest(r) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + glog.V(4).Infof("Received BindRequest for instanceID %q, bindingID %q", request.InstanceID, request.BindingID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.Bind(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + // MUST be returned if the binding was created as a result of this request. + status := http.StatusCreated + + if response.Exists { + // MUST be returned if the binding already exists and the requested parameters + // are identical to the existing binding. + status = http.StatusOK + } else if response.Async { + // MUST be returned if the binding is in progress. NOTE: Async bindings + // are an alpha level feature currently in the "validating through + // implementation phase" of the OSB spec. See: + // https://github.com/openservicebrokerapi/servicebroker/pull/334 + status = http.StatusAccepted + } + + s.writeResponse(w, status, response) +} + +// unpackBindRequest unpacks an osb request from the given HTTP request. +func unpackBindRequest(r *http.Request) (*osb.BindRequest, error) { + osbRequest := &osb.BindRequest{} + if err := unmarshalRequestBody(r, osbRequest); err != nil { + return nil, err + } + + vars := mux.Vars(r) + osbRequest.InstanceID = vars[osb.VarKeyInstanceID] + osbRequest.BindingID = vars[osb.VarKeyBindingID] + identity, err := retrieveOriginatingIdentity(r) + // This could be not found because platforms may support the feature + // but are not guaranteed to. + if err != nil { + glog.Infof("Unable to retrieve originating identity - %v", err) + } + + osbRequest.OriginatingIdentity = identity + + return osbRequest, nil +} + +// GetBindingHandler is the mux handler that dispatches get binding requests to +// the broker's Interface. +func (s *APISurface) GetBindingHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("get_binding").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + vars := mux.Vars(r) + request, err := unpackGetBindingRequest(r, vars) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + glog.Infof("Received GetBinding request for instanceID %q, bindingID %q", request.InstanceID, request.BindingID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.GetBinding(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + s.writeResponse(w, http.StatusOK, response) +} + +// unpackGetBindingRequest unpacks an osb get binding request from the given +// HTTP request. 
+func unpackGetBindingRequest(r *http.Request, vars map[string]string) (*osb.GetBindingRequest, error) { + request := &osb.GetBindingRequest{} + + request.InstanceID = vars[osb.VarKeyInstanceID] + request.BindingID = vars[osb.VarKeyBindingID] + + return request, nil +} + +// GetBindingLastOperation is the mux handler that dispatches binding last +// operation requests to the broker's Interface. +func (s *APISurface) BindingLastOperationHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("binding_last_operation").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + vars := mux.Vars(r) + request, err := unpackBindingLastOperationRequest(r, vars) + if err != nil { + s.writeError(w, err, http.StatusBadRequest) + return + } + + glog.Infof("Received BindingLastOperationRequest for instanceID %q, bindingID %q", request.InstanceID, request.BindingID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.BindingLastOperation(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + s.writeResponse(w, http.StatusOK, response) +} + +// unpackBindingLastOperationRequest unpacks an osb binding last operation +// request from the given HTTP request. +func unpackBindingLastOperationRequest( + r *http.Request, vars map[string]string, +) (*osb.BindingLastOperationRequest, error) { + request := &osb.BindingLastOperationRequest{} + request.InstanceID = vars[osb.VarKeyInstanceID] + request.BindingID = vars[osb.VarKeyBindingID] + + serviceID := vars[osb.VarKeyServiceID] + if serviceID != "" { + request.ServiceID = &serviceID + } + + planID := vars[osb.VarKeyPlanID] + if planID != "" { + request.PlanID = &planID + } + + operation := vars[osb.VarKeyOperation] + if operation != "" { + typedOperation := osb.OperationKey(operation) + request.OperationKey = &typedOperation + } + + identity, err := retrieveOriginatingIdentity(r) + if err != nil { + return nil, err + } + request.OriginatingIdentity = identity + + return request, nil +} + +// UnbindHandler is the mux handler that dispatches unbind requests to the +// broker's Interface. +func (s *APISurface) UnbindHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("unbind").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + v := mux.Vars(r) + request, err := unpackUnbindRequest(r, v) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + glog.V(4).Infof("Received UnbindRequest for instanceID %q, bindingID %q", request.InstanceID, request.BindingID) + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.Unbind(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + s.writeResponse(w, http.StatusOK, response) +} + +// unpackUnbindRequest unpacks an osb request from the given HTTP request. 
+func unpackUnbindRequest(r *http.Request, vars map[string]string) (*osb.UnbindRequest, error) { + osbRequest := &osb.UnbindRequest{} + + osbRequest.InstanceID = vars[osb.VarKeyInstanceID] + osbRequest.BindingID = vars[osb.VarKeyBindingID] + + // plan_id and service_id are set in the query string parameters and thus need to + // be obtained differently than instance_id and binding_id. + osbRequest.PlanID = r.FormValue(osb.VarKeyPlanID) + osbRequest.ServiceID = r.FormValue(osb.VarKeyServiceID) + + identity, err := retrieveOriginatingIdentity(r) + // This could be not found because platforms may support the feature + // but are not guaranteed to. + if err != nil { + glog.Infof("Unable to retrieve originating identity - %v", err) + } + osbRequest.OriginatingIdentity = identity + + return osbRequest, nil +} + +// UpdateHandler is the mux handler that dispatches Update requests to the +// broker's Interface. +func (s *APISurface) UpdateHandler(w http.ResponseWriter, r *http.Request) { + s.Metrics.Actions.WithLabelValues("update").Inc() + + version := getBrokerAPIVersionFromRequest(r) + if err := s.Broker.ValidateBrokerAPIVersion(version); err != nil { + s.writeError(w, err, http.StatusPreconditionFailed) + return + } + + v := mux.Vars(r) + request, err := unpackUpdateRequest(r, v) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + glog.V(4).Infof("Received Update Request for instanceID %q", request.InstanceID) + + c := &broker.RequestContext{ + Writer: w, + Request: r, + } + + response, err := s.Broker.Update(request, c) + if err != nil { + s.writeError(w, err, http.StatusInternalServerError) + return + } + + status := http.StatusOK + if response.Async { + status = http.StatusAccepted + } + + s.writeResponse(w, status, response) +} + +func unpackUpdateRequest(r *http.Request, vars map[string]string) (*osb.UpdateInstanceRequest, error) { + osbRequest := &osb.UpdateInstanceRequest{} + if err := unmarshalRequestBody(r, osbRequest); err != nil { + return nil, err + } + + osbRequest.InstanceID = vars[osb.VarKeyInstanceID] + + asyncQueryParamVal := r.FormValue(osb.AcceptsIncomplete) + if strings.ToLower(asyncQueryParamVal) == "true" { + osbRequest.AcceptsIncomplete = true + } + identity, err := retrieveOriginatingIdentity(r) + // This could be not found because platforms may support the feature + // but are not guaranteed to. + if err != nil { + glog.Infof("Unable to retrieve originating identity - %v", err) + } + osbRequest.OriginatingIdentity = identity + + return osbRequest, nil +} + +// retrieveOriginatingIdentity retrieves the originating identity from +// the request header. +func retrieveOriginatingIdentity(r *http.Request) (*osb.OriginatingIdentity, error) { + identityHeader := r.Header.Get(osb.OriginatingIdentityHeader) + + if identityHeader != "" { + identitySlice := strings.Split(identityHeader, " ") + if len(identitySlice) != 2 { + glog.Infof("invalid header for originating origin - %v", identityHeader) + return nil, fmt.Errorf("invalid originating identity header") + } + // Base64 decode the value string so the value is passed as valid JSON. 
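+		// The header value is expected to have the form
+		// "<platform> <base64-encoded JSON>", for example
+		// "kubernetes <base64 of the Kubernetes user info object>".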
+ val, err := base64.StdEncoding.DecodeString(identitySlice[1]) + if err != nil { + glog.Infof("invalid header for originating origin - %v", identityHeader) + return nil, fmt.Errorf("invalid encoding for value of originating identity header") + } + return &osb.OriginatingIdentity{ + Platform: identitySlice[0], + Value: string(val), + }, nil + } + return nil, fmt.Errorf("unable to find originating identity") +} + +// writeResponse will serialize 'object' to the HTTP ResponseWriter +// using the 'code' as the HTTP status code +func (s *APISurface) writeResponse(w http.ResponseWriter, code int, object interface{}) { + data, err := json.Marshal(object) + if err != nil { + w.WriteHeader(http.StatusInternalServerError) + return + } + + w.Header().Set("Content-Type", "application/json") + + if s.EnableCORS { + //Allow CORS here By * or specific origin + w.Header().Set("Access-Control-Allow-Origin", "*") + w.Header().Set("Access-Control-Allow-Methods", "POST, GET, OPTIONS, PUT, PATCH, DELETE") + w.Header().Set("Access-Control-Allow-Headers", "Origin, X-Requested-With, X-Broker-API-Version, X-Broker-API-Originating-Identity, Content-Type, Authorization, Accept") + } + + w.WriteHeader(code) + w.Write(data) +} + +// writeError accepts any error and writes it to the given ResponseWriter along +// with a status code. +// +// If the error is an osb.HTTPStatusCodeError, the error's StatusCode field will +// be used and the response body will contain the error's Description and +// ErrorMessage fields (if set). +// +// Otherwise, the given defaultStatusCode will be used, and the response body +// will have the result of calling the error's Error method set in the +// 'description' field. +// +// For more information about OSB errors, see: +// +// https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#service-broker-errors +func (s *APISurface) writeError(w http.ResponseWriter, err error, defaultStatusCode int) { + if httpErr, ok := osb.IsHTTPError(err); ok { + s.writeOSBStatusCodeErrorResponse(w, httpErr) + return + } + + s.writeErrorResponse(w, defaultStatusCode, err) +} + +// writeOSBStatusCodeErrorResponse writes the given HTTPStatusCodeError to the +// given ResponseWriter. The HTTP response's status code is the error's +// StatusCode field and the body contains the ErrorMessage and Description +// fields, if set. +func (s *APISurface) writeOSBStatusCodeErrorResponse(w http.ResponseWriter, err *osb.HTTPStatusCodeError) { + type e struct { + ErrorMessage *string `json:"error,omitempty"` + Description *string `json:"description,omitempty"` + } + + body := &e{} + if err.Description != nil { + body.Description = err.Description + } + + if err.ErrorMessage != nil { + body.ErrorMessage = err.ErrorMessage + } + + s.writeResponse(w, err.StatusCode, body) +} + +// writeErrorResponse writes the given status code and error to the given +// ResponseWriter. The response body will be a json object with the field +// 'description' set from calling Error() on the passed-in error. 
+func (s *APISurface) writeErrorResponse(w http.ResponseWriter, code int, err error) { + type e struct { + Description string `json:"description"` + } + s.writeResponse(w, code, &e{ + Description: err.Error(), + }) +} diff --git a/vendor/github.com/pmorie/osb-broker-lib/pkg/rest/util.go b/vendor/github.com/pmorie/osb-broker-lib/pkg/rest/util.go new file mode 100644 index 00000000..5939fd64 --- /dev/null +++ b/vendor/github.com/pmorie/osb-broker-lib/pkg/rest/util.go @@ -0,0 +1,27 @@ +package rest + +import ( + "encoding/json" + "io/ioutil" + "net/http" + + osb "github.com/pmorie/go-open-service-broker-client/v2" +) + +func getBrokerAPIVersionFromRequest(r *http.Request) string { + return r.Header.Get(osb.APIVersionHeader) +} + +func unmarshalRequestBody(request *http.Request, obj interface{}) error { + body, err := ioutil.ReadAll(request.Body) + if err != nil { + return err + } + + err = json.Unmarshal(body, obj) + if err != nil { + return err + } + + return nil +} diff --git a/vendor/github.com/prometheus/client_golang/AUTHORS.md b/vendor/github.com/prometheus/client_golang/AUTHORS.md new file mode 100644 index 00000000..c5275d5a --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/AUTHORS.md @@ -0,0 +1,18 @@ +The Prometheus project was started by Matt T. Proud (emeritus) and +Julius Volz in 2012. + +Maintainers of this repository: + +* Björn Rabenstein + +The following individuals have contributed code to this repository +(listed in alphabetical order): + +* Bernerd Schaefer +* Björn Rabenstein +* Daniel Bornkessel +* Jeff Younker +* Julius Volz +* Matt T. Proud +* Tobias Schmidt + diff --git a/vendor/github.com/prometheus/client_golang/LICENSE b/vendor/github.com/prometheus/client_golang/LICENSE new file mode 100644 index 00000000..261eeb9e --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/prometheus/client_golang/NOTICE b/vendor/github.com/prometheus/client_golang/NOTICE new file mode 100644 index 00000000..dd878a30 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/NOTICE @@ -0,0 +1,23 @@ +Prometheus instrumentation library for Go applications +Copyright 2012-2015 The Prometheus Authors + +This product includes software developed at +SoundCloud Ltd. (http://soundcloud.com/). + + +The following components are included in this product: + +perks - a fork of https://github.com/bmizerany/perks +https://github.com/beorn7/perks +Copyright 2013-2015 Blake Mizerany, Björn Rabenstein +See https://github.com/beorn7/perks/blob/master/README.md for license details. + +Go support for Protocol Buffers - Google's data interchange format +http://github.com/golang/protobuf/ +Copyright 2010 The Go Authors +See source code for license details. + +Support for streaming Protocol Buffer messages for the Go language (golang). 
+https://github.com/matttproud/golang_protobuf_extensions +Copyright 2013 Matt T. Proud +Licensed under the Apache License, Version 2.0 diff --git a/vendor/github.com/prometheus/client_golang/prometheus/collector.go b/vendor/github.com/prometheus/client_golang/prometheus/collector.go new file mode 100644 index 00000000..623d3d83 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/collector.go @@ -0,0 +1,75 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +// Collector is the interface implemented by anything that can be used by +// Prometheus to collect metrics. A Collector has to be registered for +// collection. See Registerer.Register. +// +// The stock metrics provided by this package (Gauge, Counter, Summary, +// Histogram, Untyped) are also Collectors (which only ever collect one metric, +// namely itself). An implementer of Collector may, however, collect multiple +// metrics in a coordinated fashion and/or create metrics on the fly. Examples +// for collectors already implemented in this library are the metric vectors +// (i.e. collection of multiple instances of the same Metric but with different +// label values) like GaugeVec or SummaryVec, and the ExpvarCollector. +type Collector interface { + // Describe sends the super-set of all possible descriptors of metrics + // collected by this Collector to the provided channel and returns once + // the last descriptor has been sent. The sent descriptors fulfill the + // consistency and uniqueness requirements described in the Desc + // documentation. (It is valid if one and the same Collector sends + // duplicate descriptors. Those duplicates are simply ignored. However, + // two different Collectors must not send duplicate descriptors.) This + // method idempotently sends the same descriptors throughout the + // lifetime of the Collector. If a Collector encounters an error while + // executing this method, it must send an invalid descriptor (created + // with NewInvalidDesc) to signal the error to the registry. + Describe(chan<- *Desc) + // Collect is called by the Prometheus registry when collecting + // metrics. The implementation sends each collected metric via the + // provided channel and returns once the last metric has been sent. The + // descriptor of each sent metric is one of those returned by + // Describe. Returned metrics that share the same descriptor must differ + // in their variable label values. This method may be called + // concurrently and must therefore be implemented in a concurrency safe + // way. Blocking occurs at the expense of total performance of rendering + // all registered metrics. Ideally, Collector implementations support + // concurrent readers. + Collect(chan<- Metric) +} + +// selfCollector implements Collector for a single Metric so that the Metric +// collects itself. Add it as an anonymous field to a struct that implements +// Metric, and call init with the Metric itself as an argument. 
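+// NewCounter in this package, for example, calls result.init(result) on the
+// counter it creates so that the metric acts as its own Collector.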
+type selfCollector struct { + self Metric +} + +// init provides the selfCollector with a reference to the metric it is supposed +// to collect. It is usually called within the factory function to create a +// metric. See example. +func (c *selfCollector) init(self Metric) { + c.self = self +} + +// Describe implements Collector. +func (c *selfCollector) Describe(ch chan<- *Desc) { + ch <- c.self.Desc() +} + +// Collect implements Collector. +func (c *selfCollector) Collect(ch chan<- Metric) { + ch <- c.self +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/counter.go b/vendor/github.com/prometheus/client_golang/prometheus/counter.go new file mode 100644 index 00000000..ee37949a --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/counter.go @@ -0,0 +1,172 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "errors" +) + +// Counter is a Metric that represents a single numerical value that only ever +// goes up. That implies that it cannot be used to count items whose number can +// also go down, e.g. the number of currently running goroutines. Those +// "counters" are represented by Gauges. +// +// A Counter is typically used to count requests served, tasks completed, errors +// occurred, etc. +// +// To create Counter instances, use NewCounter. +type Counter interface { + Metric + Collector + + // Set is used to set the Counter to an arbitrary value. It is only used + // if you have to transfer a value from an external counter into this + // Prometheus metric. Do not use it for regular handling of a + // Prometheus counter (as it can be used to break the contract of + // monotonically increasing values). + // + // Deprecated: Use NewConstMetric to create a counter for an external + // value. A Counter should never be set. + Set(float64) + // Inc increments the counter by 1. + Inc() + // Add adds the given value to the counter. It panics if the value is < + // 0. + Add(float64) +} + +// CounterOpts is an alias for Opts. See there for doc comments. +type CounterOpts Opts + +// NewCounter creates a new Counter based on the provided CounterOpts. +func NewCounter(opts CounterOpts) Counter { + desc := NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ) + result := &counter{value: value{desc: desc, valType: CounterValue, labelPairs: desc.constLabelPairs}} + result.init(result) // Init self-collection. + return result +} + +type counter struct { + value +} + +func (c *counter) Add(v float64) { + if v < 0 { + panic(errors.New("counter cannot decrease in value")) + } + c.value.Add(v) +} + +// CounterVec is a Collector that bundles a set of Counters that all share the +// same Desc, but have different values for their variable labels. This is used +// if you want to count the same thing partitioned by various dimensions +// (e.g. number of HTTP requests, partitioned by response code and +// method). 
Create instances with NewCounterVec. +// +// CounterVec embeds MetricVec. See there for a full list of methods with +// detailed documentation. +type CounterVec struct { + *MetricVec +} + +// NewCounterVec creates a new CounterVec based on the provided CounterOpts and +// partitioned by the given label names. At least one label name must be +// provided. +func NewCounterVec(opts CounterOpts, labelNames []string) *CounterVec { + desc := NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + labelNames, + opts.ConstLabels, + ) + return &CounterVec{ + MetricVec: newMetricVec(desc, func(lvs ...string) Metric { + result := &counter{value: value{ + desc: desc, + valType: CounterValue, + labelPairs: makeLabelPairs(desc, lvs), + }} + result.init(result) // Init self-collection. + return result + }), + } +} + +// GetMetricWithLabelValues replaces the method of the same name in +// MetricVec. The difference is that this method returns a Counter and not a +// Metric so that no type conversion is required. +func (m *CounterVec) GetMetricWithLabelValues(lvs ...string) (Counter, error) { + metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...) + if metric != nil { + return metric.(Counter), err + } + return nil, err +} + +// GetMetricWith replaces the method of the same name in MetricVec. The +// difference is that this method returns a Counter and not a Metric so that no +// type conversion is required. +func (m *CounterVec) GetMetricWith(labels Labels) (Counter, error) { + metric, err := m.MetricVec.GetMetricWith(labels) + if metric != nil { + return metric.(Counter), err + } + return nil, err +} + +// WithLabelValues works as GetMetricWithLabelValues, but panics where +// GetMetricWithLabelValues would have returned an error. By not returning an +// error, WithLabelValues allows shortcuts like +// myVec.WithLabelValues("404", "GET").Add(42) +func (m *CounterVec) WithLabelValues(lvs ...string) Counter { + return m.MetricVec.WithLabelValues(lvs...).(Counter) +} + +// With works as GetMetricWith, but panics where GetMetricWithLabels would have +// returned an error. By not returning an error, With allows shortcuts like +// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42) +func (m *CounterVec) With(labels Labels) Counter { + return m.MetricVec.With(labels).(Counter) +} + +// CounterFunc is a Counter whose value is determined at collect time by calling a +// provided function. +// +// To create CounterFunc instances, use NewCounterFunc. +type CounterFunc interface { + Metric + Collector +} + +// NewCounterFunc creates a new CounterFunc based on the provided +// CounterOpts. The value reported is determined by calling the given function +// from within the Write method. Take into account that metric collection may +// happen concurrently. If that results in concurrent calls to Write, like in +// the case where a CounterFunc is directly registered with Prometheus, the +// provided function must be concurrency-safe. The function should also honor +// the contract for a Counter (values only go up, not down), but compliance will +// not be checked. 
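+// An illustrative (assumed, not prescribed) use is exposing a count that is
+// maintained elsewhere:
+//
+//	var handled uint64 // updated with atomic.AddUint64 elsewhere
+//	c := NewCounterFunc(CounterOpts{
+//		Name: "example_requests_handled_total",
+//		Help: "Requests handled, read from an external counter.",
+//	}, func() float64 {
+//		return float64(atomic.LoadUint64(&handled))
+//	})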
+func NewCounterFunc(opts CounterOpts, function func() float64) CounterFunc { + return newValueFunc(NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), CounterValue, function) +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/desc.go b/vendor/github.com/prometheus/client_golang/prometheus/desc.go new file mode 100644 index 00000000..77f4b30e --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/desc.go @@ -0,0 +1,205 @@ +// Copyright 2016 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "errors" + "fmt" + "regexp" + "sort" + "strings" + + "github.com/golang/protobuf/proto" + + dto "github.com/prometheus/client_model/go" +) + +var ( + metricNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_:]*$`) + labelNameRE = regexp.MustCompile("^[a-zA-Z_][a-zA-Z0-9_]*$") +) + +// reservedLabelPrefix is a prefix which is not legal in user-supplied +// label names. +const reservedLabelPrefix = "__" + +// Labels represents a collection of label name -> value mappings. This type is +// commonly used with the With(Labels) and GetMetricWith(Labels) methods of +// metric vector Collectors, e.g.: +// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42) +// +// The other use-case is the specification of constant label pairs in Opts or to +// create a Desc. +type Labels map[string]string + +// Desc is the descriptor used by every Prometheus Metric. It is essentially +// the immutable meta-data of a Metric. The normal Metric implementations +// included in this package manage their Desc under the hood. Users only have to +// deal with Desc if they use advanced features like the ExpvarCollector or +// custom Collectors and Metrics. +// +// Descriptors registered with the same registry have to fulfill certain +// consistency and uniqueness criteria if they share the same fully-qualified +// name: They must have the same help string and the same label names (aka label +// dimensions) in each, constLabels and variableLabels, but they must differ in +// the values of the constLabels. +// +// Descriptors that share the same fully-qualified names and the same label +// values of their constLabels are considered equal. +// +// Use NewDesc to create new Desc instances. +type Desc struct { + // fqName has been built from Namespace, Subsystem, and Name. + fqName string + // help provides some helpful information about this metric. + help string + // constLabelPairs contains precalculated DTO label pairs based on + // the constant labels. + constLabelPairs []*dto.LabelPair + // VariableLabels contains names of labels for which the metric + // maintains variable values. + variableLabels []string + // id is a hash of the values of the ConstLabels and fqName. This + // must be unique among all registered descriptors and can therefore be + // used as an identifier of the descriptor. 
+ id uint64 + // dimHash is a hash of the label names (preset and variable) and the + // Help string. Each Desc with the same fqName must have the same + // dimHash. + dimHash uint64 + // err is an error that occurred during construction. It is reported at + // registration time. + err error +} + +// NewDesc allocates and initializes a new Desc. Errors are recorded in the Desc +// and will be reported at registration time. variableLabels and constLabels can +// be nil if no such labels should be set. fqName and help must not be empty. +// +// variableLabels only contain the label names. Their label values are variable +// and therefore not part of the Desc. (They are managed within the Metric.) +// +// For constLabels, the label values are constant. Therefore, they are fully +// specified in the Desc. See the Opts documentation for the implications of +// constant labels. +func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *Desc { + d := &Desc{ + fqName: fqName, + help: help, + variableLabels: variableLabels, + } + if help == "" { + d.err = errors.New("empty help string") + return d + } + if !metricNameRE.MatchString(fqName) { + d.err = fmt.Errorf("%q is not a valid metric name", fqName) + return d + } + // labelValues contains the label values of const labels (in order of + // their sorted label names) plus the fqName (at position 0). + labelValues := make([]string, 1, len(constLabels)+1) + labelValues[0] = fqName + labelNames := make([]string, 0, len(constLabels)+len(variableLabels)) + labelNameSet := map[string]struct{}{} + // First add only the const label names and sort them... + for labelName := range constLabels { + if !checkLabelName(labelName) { + d.err = fmt.Errorf("%q is not a valid label name", labelName) + return d + } + labelNames = append(labelNames, labelName) + labelNameSet[labelName] = struct{}{} + } + sort.Strings(labelNames) + // ... so that we can now add const label values in the order of their names. + for _, labelName := range labelNames { + labelValues = append(labelValues, constLabels[labelName]) + } + // Now add the variable label names, but prefix them with something that + // cannot be in a regular label name. That prevents matching the label + // dimension with a different mix between preset and variable labels. + for _, labelName := range variableLabels { + if !checkLabelName(labelName) { + d.err = fmt.Errorf("%q is not a valid label name", labelName) + return d + } + labelNames = append(labelNames, "$"+labelName) + labelNameSet[labelName] = struct{}{} + } + if len(labelNames) != len(labelNameSet) { + d.err = errors.New("duplicate label names") + return d + } + vh := hashNew() + for _, val := range labelValues { + vh = hashAdd(vh, val) + vh = hashAddByte(vh, separatorByte) + } + d.id = vh + // Sort labelNames so that order doesn't matter for the hash. + sort.Strings(labelNames) + // Now hash together (in this order) the help string and the sorted + // label names. + lh := hashNew() + lh = hashAdd(lh, help) + lh = hashAddByte(lh, separatorByte) + for _, labelName := range labelNames { + lh = hashAdd(lh, labelName) + lh = hashAddByte(lh, separatorByte) + } + d.dimHash = lh + + d.constLabelPairs = make([]*dto.LabelPair, 0, len(constLabels)) + for n, v := range constLabels { + d.constLabelPairs = append(d.constLabelPairs, &dto.LabelPair{ + Name: proto.String(n), + Value: proto.String(v), + }) + } + sort.Sort(LabelPairSorter(d.constLabelPairs)) + return d +} + +// NewInvalidDesc returns an invalid descriptor, i.e.
a descriptor with the +// provided error set. If a collector returning such a descriptor is registered, +// registration will fail with the provided error. NewInvalidDesc can be used by +// a Collector to signal inability to describe itself. +func NewInvalidDesc(err error) *Desc { + return &Desc{ + err: err, + } +} + +func (d *Desc) String() string { + lpStrings := make([]string, 0, len(d.constLabelPairs)) + for _, lp := range d.constLabelPairs { + lpStrings = append( + lpStrings, + fmt.Sprintf("%s=%q", lp.GetName(), lp.GetValue()), + ) + } + return fmt.Sprintf( + "Desc{fqName: %q, help: %q, constLabels: {%s}, variableLabels: %v}", + d.fqName, + d.help, + strings.Join(lpStrings, ","), + d.variableLabels, + ) +} + +func checkLabelName(l string) bool { + return labelNameRE.MatchString(l) && + !strings.HasPrefix(l, reservedLabelPrefix) +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/doc.go b/vendor/github.com/prometheus/client_golang/prometheus/doc.go new file mode 100644 index 00000000..b15a2d3b --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/doc.go @@ -0,0 +1,181 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package prometheus provides metrics primitives to instrument code for +// monitoring. It also offers a registry for metrics. Sub-packages allow to +// expose the registered metrics via HTTP (package promhttp) or push them to a +// Pushgateway (package push). +// +// All exported functions and methods are safe to be used concurrently unless +//specified otherwise. +// +// A Basic Example +// +// As a starting point, a very basic usage example: +// +// package main +// +// import ( +// "net/http" +// +// "github.com/prometheus/client_golang/prometheus" +// "github.com/prometheus/client_golang/prometheus/promhttp" +// ) +// +// var ( +// cpuTemp = prometheus.NewGauge(prometheus.GaugeOpts{ +// Name: "cpu_temperature_celsius", +// Help: "Current temperature of the CPU.", +// }) +// hdFailures = prometheus.NewCounterVec( +// prometheus.CounterOpts{ +// Name: "hd_errors_total", +// Help: "Number of hard-disk errors.", +// }, +// []string{"device"}, +// ) +// ) +// +// func init() { +// // Metrics have to be registered to be exposed: +// prometheus.MustRegister(cpuTemp) +// prometheus.MustRegister(hdFailures) +// } +// +// func main() { +// cpuTemp.Set(65.3) +// hdFailures.With(prometheus.Labels{"device":"/dev/sda"}).Inc() +// +// // The Handler function provides a default handler to expose metrics +// // via an HTTP server. "/metrics" is the usual endpoint for that. +// http.Handle("/metrics", promhttp.Handler()) +// http.ListenAndServe(":8080", nil) +// } +// +// +// This is a complete program that exports two metrics, a Gauge and a Counter, +// the latter with a label attached to turn it into a (one-dimensional) vector. +// +// Metrics +// +// The number of exported identifiers in this package might appear a bit +// overwhelming. 
However, in addition to the basic plumbing shown in the example +// above, you only need to understand the different metric types and their +// vector versions for basic usage. +// +// Above, you have already touched the Counter and the Gauge. There are two more +// advanced metric types: the Summary and Histogram. A more thorough description +// of those four metric types can be found in the Prometheus docs: +// https://prometheus.io/docs/concepts/metric_types/ +// +// A fifth "type" of metric is Untyped. It behaves like a Gauge, but signals the +// Prometheus server not to assume anything about its type. +// +// In addition to the fundamental metric types Gauge, Counter, Summary, +// Histogram, and Untyped, a very important part of the Prometheus data model is +// the partitioning of samples along dimensions called labels, which results in +// metric vectors. The fundamental types are GaugeVec, CounterVec, SummaryVec, +// HistogramVec, and UntypedVec. +// +// While only the fundamental metric types implement the Metric interface, both +// the metrics and their vector versions implement the Collector interface. A +// Collector manages the collection of a number of Metrics, but for convenience, +// a Metric can also “collect itself”. Note that Gauge, Counter, Summary, +// Histogram, and Untyped are interfaces themselves while GaugeVec, CounterVec, +// SummaryVec, HistogramVec, and UntypedVec are not. +// +// To create instances of Metrics and their vector versions, you need a suitable +// …Opts struct, i.e. GaugeOpts, CounterOpts, SummaryOpts, +// HistogramOpts, or UntypedOpts. +// +// Custom Collectors and constant Metrics +// +// While you could create your own implementations of Metric, most likely you +// will only ever implement the Collector interface on your own. At first +// glance, a custom Collector seems handy to bundle Metrics for common +// registration (with the prime example of the different metric vectors above, +// which bundle all the metrics of the same name but with different labels). +// +// There is a more involved use case, too: If you already have metrics +// available, created outside of the Prometheus context, you don't need the +// interface of the various Metric types. You essentially want to mirror the +// existing numbers into Prometheus Metrics during collection. Your own +// implementation of the Collector interface is perfect for that. You can create +// Metric instances “on the fly” using NewConstMetric, NewConstHistogram, and +// NewConstSummary (and their respective Must… versions). That will happen in +// the Collect method. The Describe method has to return separate Desc +// instances, representative of the “throw-away” metrics to be created +// later. NewDesc comes in handy to create those Desc instances. +// +// The Collector example illustrates the use case. You can also look at the +// source code of the processCollector (mirroring process metrics), the +// goCollector (mirroring Go metrics), or the expvarCollector (mirroring expvar +// metrics) as examples that are used in this package itself. +// +// If you just need to call a function to get a single float value to collect as +// a metric, GaugeFunc, CounterFunc, or UntypedFunc might be interesting +// shortcuts. +// +// Advanced Uses of the Registry +// +// While MustRegister is by far the most common way of registering a Collector, +// sometimes you might want to handle the errors the registration might +// cause. As suggested by the name, MustRegister panics if an error occurs.
With +// the Register function, the error is returned and can be handled. +// +// An error is returned if the registered Collector is incompatible or +// inconsistent with already registered metrics. The registry aims for +// consistency of the collected metrics according to the Prometheus data +// model. Inconsistencies are ideally detected at registration time, not at +// collect time. The former will usually be detected at start-up time of a +// program, while the latter will only happen at scrape time, possibly not even +// on the first scrape if the inconsistency only becomes relevant later. That is +// the main reason why a Collector and a Metric have to describe themselves to +// the registry. +// +// So far, everything we did operated on the so-called default registry, as it +// can be found in the global DefaultRegistry variable. With NewRegistry, you +// can create a custom registry, or you can even implement the Registerer or +// Gatherer interfaces yourself. The methods Register and Unregister work in +// the same way on a custom registry as the global functions Register and +// Unregister on the default registry. +// +// There are a number of uses for custom registries: You can use registries +// with special properties, see NewPedanticRegistry. You can avoid global state, +// as it is imposed by the DefaultRegistry. You can use multiple registries at +// the same time to expose different metrics in different ways. You can use +// separate registries for testing purposes. +// +// Also note that the DefaultRegistry comes registered with a Collector for Go +// runtime metrics (via NewGoCollector) and a Collector for process metrics (via +// NewProcessCollector). With a custom registry, you are in control and decide +// yourself about the Collectors to register. +// +// HTTP Exposition +// +// The Registry implements the Gatherer interface. The caller of the Gather +// method can then expose the gathered metrics in some way. Usually, the metrics +// are served via HTTP on the /metrics endpoint. That's happening in the example +// above. The tools to expose metrics via HTTP are in the promhttp +// sub-package. (The top-level functions in the prometheus package are +// deprecated.) +// +// Pushing to the Pushgateway +// +// Function for pushing to the Pushgateway can be found in the push sub-package. +// +// Other Means of Exposition +// +// More ways of exposing metrics can easily be added. Sending metrics to +// Graphite would be an example that will soon be implemented. +package prometheus diff --git a/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go new file mode 100644 index 00000000..18a99d5f --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go @@ -0,0 +1,119 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package prometheus + +import ( + "encoding/json" + "expvar" +) + +type expvarCollector struct { + exports map[string]*Desc +} + +// NewExpvarCollector returns a newly allocated expvar Collector that still has +// to be registered with a Prometheus registry. +// +// An expvar Collector collects metrics from the expvar interface. It provides a +// quick way to expose numeric values that are already exported via expvar as +// Prometheus metrics. Note that the data models of expvar and Prometheus are +// fundamentally different, and that the expvar Collector is inherently slower +// than native Prometheus metrics. Thus, the expvar Collector is probably great +// for experiments and prototyping, but you should seriously consider a more +// direct implementation of Prometheus metrics for monitoring production +// systems. +// +// The exports map has the following meaning: +// +// The keys in the map correspond to expvar keys, i.e. for every expvar key you +// want to export as a Prometheus metric, you need an entry in the exports +// map. The descriptor mapped to each key describes how to export the expvar +// value. It defines the name and the help string of the Prometheus metric +// proxying the expvar value. The type will always be Untyped. +// +// For descriptors without variable labels, the expvar value must be a number or +// a bool. The number is then directly exported as the Prometheus sample +// value. (For a bool, 'false' translates to 0 and 'true' to 1). Expvar values +// that are not numbers or bools are silently ignored. +// +// If the descriptor has one variable label, the expvar value must be an expvar +// map. The keys in the expvar map become the various values of the one +// Prometheus label. The values in the expvar map must be numbers or bools again +// as above. +// +// For descriptors with more than one variable label, the expvar must be a +// nested expvar map, i.e. where the values of the topmost map are maps again +// etc. until a depth is reached that corresponds to the number of labels. The +// leaves of that structure must be numbers or bools as above to serve as the +// sample values. +// +// Anything that does not fit into the scheme above is silently ignored. +func NewExpvarCollector(exports map[string]*Desc) Collector { + return &expvarCollector{ + exports: exports, + } +} + +// Describe implements Collector. +func (e *expvarCollector) Describe(ch chan<- *Desc) { + for _, desc := range e.exports { + ch <- desc + } +} + +// Collect implements Collector. +func (e *expvarCollector) Collect(ch chan<- Metric) { + for name, desc := range e.exports { + var m Metric + expVar := expvar.Get(name) + if expVar == nil { + continue + } + var v interface{} + labels := make([]string, len(desc.variableLabels)) + if err := json.Unmarshal([]byte(expVar.String()), &v); err != nil { + ch <- NewInvalidMetric(desc, err) + continue + } + var processValue func(v interface{}, i int) + processValue = func(v interface{}, i int) { + if i >= len(labels) { + copiedLabels := append(make([]string, 0, len(labels)), labels...) + switch v := v.(type) { + case float64: + m = MustNewConstMetric(desc, UntypedValue, v, copiedLabels...) + case bool: + if v { + m = MustNewConstMetric(desc, UntypedValue, 1, copiedLabels...) + } else { + m = MustNewConstMetric(desc, UntypedValue, 0, copiedLabels...)
+ } + default: + return + } + ch <- m + return + } + vm, ok := v.(map[string]interface{}) + if !ok { + return + } + for lv, val := range vm { + labels[i] = lv + processValue(val, i+1) + } + } + processValue(v, 0) + } +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/fnv.go b/vendor/github.com/prometheus/client_golang/prometheus/fnv.go new file mode 100644 index 00000000..e3b67df8 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/fnv.go @@ -0,0 +1,29 @@ +package prometheus + +// Inline and byte-free variant of hash/fnv's fnv64a. + +const ( + offset64 = 14695981039346656037 + prime64 = 1099511628211 +) + +// hashNew initializes a new fnv64a hash value. +func hashNew() uint64 { + return offset64 +} + +// hashAdd adds a string to a fnv64a hash value, returning the updated hash. +func hashAdd(h uint64, s string) uint64 { + for i := 0; i < len(s); i++ { + h ^= uint64(s[i]) + h *= prime64 + } + return h +} + +// hashAddByte adds a byte to a fnv64a hash value, returning the updated hash. +func hashAddByte(h uint64, b byte) uint64 { + h ^= uint64(b) + h *= prime64 + return h +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/gauge.go b/vendor/github.com/prometheus/client_golang/prometheus/gauge.go new file mode 100644 index 00000000..8b70e514 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/gauge.go @@ -0,0 +1,140 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +// Gauge is a Metric that represents a single numerical value that can +// arbitrarily go up and down. +// +// A Gauge is typically used for measured values like temperatures or current +// memory usage, but also "counts" that can go up and down, like the number of +// running goroutines. +// +// To create Gauge instances, use NewGauge. +type Gauge interface { + Metric + Collector + + // Set sets the Gauge to an arbitrary value. + Set(float64) + // Inc increments the Gauge by 1. + Inc() + // Dec decrements the Gauge by 1. + Dec() + // Add adds the given value to the Gauge. (The value can be + // negative, resulting in a decrease of the Gauge.) + Add(float64) + // Sub subtracts the given value from the Gauge. (The value can be + // negative, resulting in an increase of the Gauge.) + Sub(float64) +} + +// GaugeOpts is an alias for Opts. See there for doc comments. +type GaugeOpts Opts + +// NewGauge creates a new Gauge based on the provided GaugeOpts. +func NewGauge(opts GaugeOpts) Gauge { + return newValue(NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), GaugeValue, 0) +} + +// GaugeVec is a Collector that bundles a set of Gauges that all share the same +// Desc, but have different values for their variable labels. This is used if +// you want to count the same thing partitioned by various dimensions +// (e.g. number of operations queued, partitioned by user and operation +// type). Create instances with NewGaugeVec.
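Editor's note: the exports map contract of NewExpvarCollector (documented above) is easy to get wrong, so here is a small sketch under the assumption that the application already publishes plain expvar values. The expvar keys and metric names are made up for illustration.

```go
package main

import (
	"expvar"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	// Plain expvar values, as an application might already publish them.
	pageViews   = expvar.NewInt("page_views")          // scalar -> no variable labels
	viewsByPath = expvar.NewMap("page_views_by_path")  // map -> one variable label
)

func main() {
	pageViews.Add(1)
	viewsByPath.Add("/index.html", 1)

	// Each expvar key gets a Desc describing the proxied Prometheus metric.
	// The metric type is always Untyped, as noted in the comment above.
	prometheus.MustRegister(prometheus.NewExpvarCollector(map[string]*prometheus.Desc{
		"page_views": prometheus.NewDesc(
			"example_expvar_page_views",
			"Page views proxied from expvar.",
			nil, nil,
		),
		"page_views_by_path": prometheus.NewDesc(
			"example_expvar_page_views_by_path",
			"Page views proxied from an expvar map.",
			[]string{"path"}, nil,
		),
	}))
}
```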
+type GaugeVec struct { + *MetricVec +} + +// NewGaugeVec creates a new GaugeVec based on the provided GaugeOpts and +// partitioned by the given label names. At least one label name must be +// provided. +func NewGaugeVec(opts GaugeOpts, labelNames []string) *GaugeVec { + desc := NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + labelNames, + opts.ConstLabels, + ) + return &GaugeVec{ + MetricVec: newMetricVec(desc, func(lvs ...string) Metric { + return newValue(desc, GaugeValue, 0, lvs...) + }), + } +} + +// GetMetricWithLabelValues replaces the method of the same name in +// MetricVec. The difference is that this method returns a Gauge and not a +// Metric so that no type conversion is required. +func (m *GaugeVec) GetMetricWithLabelValues(lvs ...string) (Gauge, error) { + metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...) + if metric != nil { + return metric.(Gauge), err + } + return nil, err +} + +// GetMetricWith replaces the method of the same name in MetricVec. The +// difference is that this method returns a Gauge and not a Metric so that no +// type conversion is required. +func (m *GaugeVec) GetMetricWith(labels Labels) (Gauge, error) { + metric, err := m.MetricVec.GetMetricWith(labels) + if metric != nil { + return metric.(Gauge), err + } + return nil, err +} + +// WithLabelValues works as GetMetricWithLabelValues, but panics where +// GetMetricWithLabelValues would have returned an error. By not returning an +// error, WithLabelValues allows shortcuts like +// myVec.WithLabelValues("404", "GET").Add(42) +func (m *GaugeVec) WithLabelValues(lvs ...string) Gauge { + return m.MetricVec.WithLabelValues(lvs...).(Gauge) +} + +// With works as GetMetricWith, but panics where GetMetricWithLabels would have +// returned an error. By not returning an error, With allows shortcuts like +// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42) +func (m *GaugeVec) With(labels Labels) Gauge { + return m.MetricVec.With(labels).(Gauge) +} + +// GaugeFunc is a Gauge whose value is determined at collect time by calling a +// provided function. +// +// To create GaugeFunc instances, use NewGaugeFunc. +type GaugeFunc interface { + Metric + Collector +} + +// NewGaugeFunc creates a new GaugeFunc based on the provided GaugeOpts. The +// value reported is determined by calling the given function from within the +// Write method. Take into account that metric collection may happen +// concurrently. If that results in concurrent calls to Write, like in the case +// where a GaugeFunc is directly registered with Prometheus, the provided +// function must be concurrency-safe. +func NewGaugeFunc(opts GaugeOpts, function func() float64) GaugeFunc { + return newValueFunc(NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), GaugeValue, function) +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go new file mode 100644 index 00000000..abc9d4ec --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go @@ -0,0 +1,263 @@ +package prometheus + +import ( + "fmt" + "runtime" + "runtime/debug" + "time" +) + +type goCollector struct { + goroutines Gauge + gcDesc *Desc + + // metrics to describe and collect + metrics memStatsMetrics +} + +// NewGoCollector returns a collector which exports metrics about the current +// go process. 
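Editor's note: to round out the Gauge documentation above, a brief usage sketch covering NewGauge, NewGaugeVec, and NewGaugeFunc. Metric names are hypothetical.

```go
package main

import (
	"runtime"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// A plain Gauge that the application sets explicitly.
	queueDepth := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_queue_depth",
		Help: "Current number of items waiting in the queue.",
	})
	queueDepth.Set(17)
	queueDepth.Inc()
	queueDepth.Sub(3)

	// A GaugeVec partitioned by a label, as described in the comment above.
	poolInUse := prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "example_pool_connections_in_use",
		Help: "Connections currently checked out, per pool.",
	}, []string{"pool"})
	poolInUse.WithLabelValues("primary").Set(5)

	// A GaugeFunc evaluates a callback at collect time; the callback must be
	// concurrency-safe because collection can run in parallel.
	goroutines := prometheus.NewGaugeFunc(prometheus.GaugeOpts{
		Name: "example_goroutines",
		Help: "Number of goroutines, read on every scrape.",
	}, func() float64 { return float64(runtime.NumGoroutine()) })

	prometheus.MustRegister(queueDepth, poolInUse, goroutines)
}
```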
+func NewGoCollector() Collector { + return &goCollector{ + goroutines: NewGauge(GaugeOpts{ + Namespace: "go", + Name: "goroutines", + Help: "Number of goroutines that currently exist.", + }), + gcDesc: NewDesc( + "go_gc_duration_seconds", + "A summary of the GC invocation durations.", + nil, nil), + metrics: memStatsMetrics{ + { + desc: NewDesc( + memstatNamespace("alloc_bytes"), + "Number of bytes allocated and still in use.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.Alloc) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("alloc_bytes_total"), + "Total number of bytes allocated, even if freed.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.TotalAlloc) }, + valType: CounterValue, + }, { + desc: NewDesc( + memstatNamespace("sys_bytes"), + "Number of bytes obtained by system. Sum of all system allocations.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.Sys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("lookups_total"), + "Total number of pointer lookups.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.Lookups) }, + valType: CounterValue, + }, { + desc: NewDesc( + memstatNamespace("mallocs_total"), + "Total number of mallocs.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.Mallocs) }, + valType: CounterValue, + }, { + desc: NewDesc( + memstatNamespace("frees_total"), + "Total number of frees.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.Frees) }, + valType: CounterValue, + }, { + desc: NewDesc( + memstatNamespace("heap_alloc_bytes"), + "Number of heap bytes allocated and still in use.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapAlloc) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("heap_sys_bytes"), + "Number of heap bytes obtained from system.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("heap_idle_bytes"), + "Number of heap bytes waiting to be used.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapIdle) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("heap_inuse_bytes"), + "Number of heap bytes that are in use.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapInuse) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("heap_released_bytes_total"), + "Total number of heap bytes released to OS.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapReleased) }, + valType: CounterValue, + }, { + desc: NewDesc( + memstatNamespace("heap_objects"), + "Number of allocated objects.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapObjects) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("stack_inuse_bytes"), + "Number of bytes in use by the stack allocator.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.StackInuse) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("stack_sys_bytes"), + "Number of bytes obtained from system for stack allocator.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.StackSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + 
memstatNamespace("mspan_inuse_bytes"), + "Number of bytes in use by mspan structures.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.MSpanInuse) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("mspan_sys_bytes"), + "Number of bytes used for mspan structures obtained from system.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.MSpanSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("mcache_inuse_bytes"), + "Number of bytes in use by mcache structures.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.MCacheInuse) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("mcache_sys_bytes"), + "Number of bytes used for mcache structures obtained from system.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.MCacheSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("buck_hash_sys_bytes"), + "Number of bytes used by the profiling bucket hash table.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.BuckHashSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("gc_sys_bytes"), + "Number of bytes used for garbage collection system metadata.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.GCSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("other_sys_bytes"), + "Number of bytes used for other system allocations.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.OtherSys) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("next_gc_bytes"), + "Number of heap bytes when next garbage collection will take place.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.NextGC) }, + valType: GaugeValue, + }, { + desc: NewDesc( + memstatNamespace("last_gc_time_seconds"), + "Number of seconds since 1970 of last garbage collection.", + nil, nil, + ), + eval: func(ms *runtime.MemStats) float64 { return float64(ms.LastGC) / 1e9 }, + valType: GaugeValue, + }, + }, + } +} + +func memstatNamespace(s string) string { + return fmt.Sprintf("go_memstats_%s", s) +} + +// Describe returns all descriptions of the collector. +func (c *goCollector) Describe(ch chan<- *Desc) { + ch <- c.goroutines.Desc() + ch <- c.gcDesc + + for _, i := range c.metrics { + ch <- i.desc + } +} + +// Collect returns the current state of all metrics of the collector. +func (c *goCollector) Collect(ch chan<- Metric) { + c.goroutines.Set(float64(runtime.NumGoroutine())) + ch <- c.goroutines + + var stats debug.GCStats + stats.PauseQuantiles = make([]time.Duration, 5) + debug.ReadGCStats(&stats) + + quantiles := make(map[float64]float64) + for idx, pq := range stats.PauseQuantiles[1:] { + quantiles[float64(idx+1)/float64(len(stats.PauseQuantiles)-1)] = pq.Seconds() + } + quantiles[0.0] = stats.PauseQuantiles[0].Seconds() + ch <- MustNewConstSummary(c.gcDesc, uint64(stats.NumGC), float64(stats.PauseTotal.Seconds()), quantiles) + + ms := &runtime.MemStats{} + runtime.ReadMemStats(ms) + for _, i := range c.metrics { + ch <- MustNewConstMetric(i.desc, i.valType, i.eval(ms)) + } +} + +// memStatsMetrics provide description, value, and value type for memstat metrics. 
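Editor's note: the go collector defined above is normally pre-registered on the default registry; with a custom registry you opt in explicitly. A minimal sketch, assuming only the registry APIs referenced in the package documentation:

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// A fresh registry starts empty; the go_* metrics implemented above
	// (goroutine count, GC pause summary, memstats) are added explicitly.
	reg := prometheus.NewRegistry()
	reg.MustRegister(prometheus.NewGoCollector())

	// Gather returns the collected metric families, e.g. go_goroutines,
	// go_gc_duration_seconds, go_memstats_alloc_bytes, ...
	families, err := reg.Gather()
	if err != nil {
		log.Fatal(err)
	}
	for _, mf := range families {
		fmt.Println(mf.GetName())
	}
}
```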
+type memStatsMetrics []struct { + desc *Desc + eval func(*runtime.MemStats) float64 + valType ValueType +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go new file mode 100644 index 00000000..9719e8fa --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go @@ -0,0 +1,444 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "fmt" + "math" + "sort" + "sync/atomic" + + "github.com/golang/protobuf/proto" + + dto "github.com/prometheus/client_model/go" +) + +// A Histogram counts individual observations from an event or sample stream in +// configurable buckets. Similar to a summary, it also provides a sum of +// observations and an observation count. +// +// On the Prometheus server, quantiles can be calculated from a Histogram using +// the histogram_quantile function in the query language. +// +// Note that Histograms, in contrast to Summaries, can be aggregated with the +// Prometheus query language (see the documentation for detailed +// procedures). However, Histograms require the user to pre-define suitable +// buckets, and they are in general less accurate. The Observe method of a +// Histogram has a very low performance overhead in comparison with the Observe +// method of a Summary. +// +// To create Histogram instances, use NewHistogram. +type Histogram interface { + Metric + Collector + + // Observe adds a single observation to the histogram. + Observe(float64) +} + +// bucketLabel is used for the label that defines the upper bound of a +// bucket of a histogram ("le" -> "less or equal"). +const bucketLabel = "le" + +// DefBuckets are the default Histogram buckets. The default buckets are +// tailored to broadly measure the response time (in seconds) of a network +// service. Most likely, however, you will be required to define buckets +// customized to your use case. +var ( + DefBuckets = []float64{.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10} + + errBucketLabelNotAllowed = fmt.Errorf( + "%q is not allowed as label name in histograms", bucketLabel, + ) +) + +// LinearBuckets creates 'count' buckets, each 'width' wide, where the lowest +// bucket has an upper bound of 'start'. The final +Inf bucket is not counted +// and not included in the returned slice. The returned slice is meant to be +// used for the Buckets field of HistogramOpts. +// +// The function panics if 'count' is zero or negative. +func LinearBuckets(start, width float64, count int) []float64 { + if count < 1 { + panic("LinearBuckets needs a positive count") + } + buckets := make([]float64, count) + for i := range buckets { + buckets[i] = start + start += width + } + return buckets +} + +// ExponentialBuckets creates 'count' buckets, where the lowest bucket has an +// upper bound of 'start' and each following bucket's upper bound is 'factor' +// times the previous bucket's upper bound. 
The final +Inf bucket is not counted +// and not included in the returned slice. The returned slice is meant to be +// used for the Buckets field of HistogramOpts. +// +// The function panics if 'count' is 0 or negative, if 'start' is 0 or negative, +// or if 'factor' is less than or equal 1. +func ExponentialBuckets(start, factor float64, count int) []float64 { + if count < 1 { + panic("ExponentialBuckets needs a positive count") + } + if start <= 0 { + panic("ExponentialBuckets needs a positive start value") + } + if factor <= 1 { + panic("ExponentialBuckets needs a factor greater than 1") + } + buckets := make([]float64, count) + for i := range buckets { + buckets[i] = start + start *= factor + } + return buckets +} + +// HistogramOpts bundles the options for creating a Histogram metric. It is +// mandatory to set Name and Help to a non-empty string. All other fields are +// optional and can safely be left at their zero value. +type HistogramOpts struct { + // Namespace, Subsystem, and Name are components of the fully-qualified + // name of the Histogram (created by joining these components with + // "_"). Only Name is mandatory, the others merely help structuring the + // name. Note that the fully-qualified name of the Histogram must be a + // valid Prometheus metric name. + Namespace string + Subsystem string + Name string + + // Help provides information about this Histogram. Mandatory! + // + // Metrics with the same fully-qualified name must have the same Help + // string. + Help string + + // ConstLabels are used to attach fixed labels to this + // Histogram. Histograms with the same fully-qualified name must have the + // same label names in their ConstLabels. + // + // Note that in most cases, labels have a value that varies during the + // lifetime of a process. Those labels are usually managed with a + // HistogramVec. ConstLabels serve only special purposes. One is for the + // special case where the value of a label does not change during the + // lifetime of a process, e.g. if the revision of the running binary is + // put into a label. Another, more advanced purpose is if more than one + // Collector needs to collect Histograms with the same fully-qualified + // name. In that case, those Summaries must differ in the values of + // their ConstLabels. See the Collector examples. + // + // If the value of a label never changes (not even between binaries), + // that label most likely should not be a label at all (but part of the + // metric name). + ConstLabels Labels + + // Buckets defines the buckets into which observations are counted. Each + // element in the slice is the upper inclusive bound of a bucket. The + // values must be sorted in strictly increasing order. There is no need + // to add a highest bucket with +Inf bound, it will be added + // implicitly. The default value is DefBuckets. + Buckets []float64 +} + +// NewHistogram creates a new Histogram based on the provided HistogramOpts. It +// panics if the buckets in HistogramOpts are not in strictly increasing order. 
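Editor's note: bucket layout is the part of HistogramOpts that users most often have to think about. The sketch below uses the LinearBuckets and ExponentialBuckets helpers defined above together with NewHistogram and NewHistogramVec; the metric and label names are hypothetical.

```go
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// The bucket helpers defined above:
	// LinearBuckets(0.05, 0.05, 10)    -> 0.05, 0.10, ..., 0.50
	// ExponentialBuckets(0.001, 2, 10) -> 0.001, 0.002, ..., 0.512
	fmt.Println(prometheus.LinearBuckets(0.05, 0.05, 10))
	fmt.Println(prometheus.ExponentialBuckets(0.001, 2, 10))

	// A histogram with explicit buckets; the +Inf bucket is added implicitly.
	backupDuration := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "example_backup_duration_seconds",
		Help:    "Duration of the nightly backup job.",
		Buckets: prometheus.ExponentialBuckets(1, 2, 12),
	})

	// A HistogramVec partitions the same distribution by labels and is
	// usually fed via WithLabelValues(...).Observe(...).
	requestDuration := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "example_request_duration_seconds",
		Help:    "Request latency, partitioned by handler and method.",
		Buckets: prometheus.DefBuckets,
	}, []string{"handler", "method"})

	prometheus.MustRegister(backupDuration, requestDuration)

	start := time.Now()
	// ... do the work being timed ...
	backupDuration.Observe(time.Since(start).Seconds())
	requestDuration.WithLabelValues("catalog", "GET").Observe(0.042)
}
```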
+func NewHistogram(opts HistogramOpts) Histogram { + return newHistogram( + NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), + opts, + ) +} + +func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogram { + if len(desc.variableLabels) != len(labelValues) { + panic(errInconsistentCardinality) + } + + for _, n := range desc.variableLabels { + if n == bucketLabel { + panic(errBucketLabelNotAllowed) + } + } + for _, lp := range desc.constLabelPairs { + if lp.GetName() == bucketLabel { + panic(errBucketLabelNotAllowed) + } + } + + if len(opts.Buckets) == 0 { + opts.Buckets = DefBuckets + } + + h := &histogram{ + desc: desc, + upperBounds: opts.Buckets, + labelPairs: makeLabelPairs(desc, labelValues), + } + for i, upperBound := range h.upperBounds { + if i < len(h.upperBounds)-1 { + if upperBound >= h.upperBounds[i+1] { + panic(fmt.Errorf( + "histogram buckets must be in increasing order: %f >= %f", + upperBound, h.upperBounds[i+1], + )) + } + } else { + if math.IsInf(upperBound, +1) { + // The +Inf bucket is implicit. Remove it here. + h.upperBounds = h.upperBounds[:i] + } + } + } + // Finally we know the final length of h.upperBounds and can make counts. + h.counts = make([]uint64, len(h.upperBounds)) + + h.init(h) // Init self-collection. + return h +} + +type histogram struct { + // sumBits contains the bits of the float64 representing the sum of all + // observations. sumBits and count have to go first in the struct to + // guarantee alignment for atomic operations. + // http://golang.org/pkg/sync/atomic/#pkg-note-BUG + sumBits uint64 + count uint64 + + selfCollector + // Note that there is no mutex required. + + desc *Desc + + upperBounds []float64 + counts []uint64 + + labelPairs []*dto.LabelPair +} + +func (h *histogram) Desc() *Desc { + return h.desc +} + +func (h *histogram) Observe(v float64) { + // TODO(beorn7): For small numbers of buckets (<30), a linear search is + // slightly faster than the binary search. If we really care, we could + // switch from one search strategy to the other depending on the number + // of buckets. + // + // Microbenchmarks (BenchmarkHistogramNoLabels): + // 11 buckets: 38.3 ns/op linear - binary 48.7 ns/op + // 100 buckets: 78.1 ns/op linear - binary 54.9 ns/op + // 300 buckets: 154 ns/op linear - binary 61.6 ns/op + i := sort.SearchFloat64s(h.upperBounds, v) + if i < len(h.counts) { + atomic.AddUint64(&h.counts[i], 1) + } + atomic.AddUint64(&h.count, 1) + for { + oldBits := atomic.LoadUint64(&h.sumBits) + newBits := math.Float64bits(math.Float64frombits(oldBits) + v) + if atomic.CompareAndSwapUint64(&h.sumBits, oldBits, newBits) { + break + } + } +} + +func (h *histogram) Write(out *dto.Metric) error { + his := &dto.Histogram{} + buckets := make([]*dto.Bucket, len(h.upperBounds)) + + his.SampleSum = proto.Float64(math.Float64frombits(atomic.LoadUint64(&h.sumBits))) + his.SampleCount = proto.Uint64(atomic.LoadUint64(&h.count)) + var count uint64 + for i, upperBound := range h.upperBounds { + count += atomic.LoadUint64(&h.counts[i]) + buckets[i] = &dto.Bucket{ + CumulativeCount: proto.Uint64(count), + UpperBound: proto.Float64(upperBound), + } + } + his.Bucket = buckets + out.Histogram = his + out.Label = h.labelPairs + return nil +} + +// HistogramVec is a Collector that bundles a set of Histograms that all share the +// same Desc, but have different values for their variable labels. 
This is used +// if you want to count the same thing partitioned by various dimensions +// (e.g. HTTP request latencies, partitioned by status code and method). Create +// instances with NewHistogramVec. +type HistogramVec struct { + *MetricVec +} + +// NewHistogramVec creates a new HistogramVec based on the provided HistogramOpts and +// partitioned by the given label names. At least one label name must be +// provided. +func NewHistogramVec(opts HistogramOpts, labelNames []string) *HistogramVec { + desc := NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + labelNames, + opts.ConstLabels, + ) + return &HistogramVec{ + MetricVec: newMetricVec(desc, func(lvs ...string) Metric { + return newHistogram(desc, opts, lvs...) + }), + } +} + +// GetMetricWithLabelValues replaces the method of the same name in +// MetricVec. The difference is that this method returns a Histogram and not a +// Metric so that no type conversion is required. +func (m *HistogramVec) GetMetricWithLabelValues(lvs ...string) (Histogram, error) { + metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...) + if metric != nil { + return metric.(Histogram), err + } + return nil, err +} + +// GetMetricWith replaces the method of the same name in MetricVec. The +// difference is that this method returns a Histogram and not a Metric so that no +// type conversion is required. +func (m *HistogramVec) GetMetricWith(labels Labels) (Histogram, error) { + metric, err := m.MetricVec.GetMetricWith(labels) + if metric != nil { + return metric.(Histogram), err + } + return nil, err +} + +// WithLabelValues works as GetMetricWithLabelValues, but panics where +// GetMetricWithLabelValues would have returned an error. By not returning an +// error, WithLabelValues allows shortcuts like +// myVec.WithLabelValues("404", "GET").Observe(42.21) +func (m *HistogramVec) WithLabelValues(lvs ...string) Histogram { + return m.MetricVec.WithLabelValues(lvs...).(Histogram) +} + +// With works as GetMetricWith, but panics where GetMetricWithLabels would have +// returned an error. By not returning an error, With allows shortcuts like +// myVec.With(Labels{"code": "404", "method": "GET"}).Observe(42.21) +func (m *HistogramVec) With(labels Labels) Histogram { + return m.MetricVec.With(labels).(Histogram) +} + +type constHistogram struct { + desc *Desc + count uint64 + sum float64 + buckets map[float64]uint64 + labelPairs []*dto.LabelPair +} + +func (h *constHistogram) Desc() *Desc { + return h.desc +} + +func (h *constHistogram) Write(out *dto.Metric) error { + his := &dto.Histogram{} + buckets := make([]*dto.Bucket, 0, len(h.buckets)) + + his.SampleCount = proto.Uint64(h.count) + his.SampleSum = proto.Float64(h.sum) + + for upperBound, count := range h.buckets { + buckets = append(buckets, &dto.Bucket{ + CumulativeCount: proto.Uint64(count), + UpperBound: proto.Float64(upperBound), + }) + } + + if len(buckets) > 0 { + sort.Sort(buckSort(buckets)) + } + his.Bucket = buckets + + out.Histogram = his + out.Label = h.labelPairs + + return nil +} + +// NewConstHistogram returns a metric representing a Prometheus histogram with +// fixed values for the count, sum, and bucket counts. As those parameters +// cannot be changed, the returned value does not implement the Histogram +// interface (but only the Metric interface). Users of this package will not +// have much use for it in regular operations. 
However, when implementing custom +// Collectors, it is useful as a throw-away metric that is generated on the fly +// to send it to Prometheus in the Collect method. +// +// buckets is a map of upper bounds to cumulative counts, excluding the +Inf +// bucket. +// +// NewConstHistogram returns an error if the length of labelValues is not +// consistent with the variable labels in Desc. +func NewConstHistogram( + desc *Desc, + count uint64, + sum float64, + buckets map[float64]uint64, + labelValues ...string, +) (Metric, error) { + if len(desc.variableLabels) != len(labelValues) { + return nil, errInconsistentCardinality + } + return &constHistogram{ + desc: desc, + count: count, + sum: sum, + buckets: buckets, + labelPairs: makeLabelPairs(desc, labelValues), + }, nil +} + +// MustNewConstHistogram is a version of NewConstHistogram that panics where +// NewConstMetric would have returned an error. +func MustNewConstHistogram( + desc *Desc, + count uint64, + sum float64, + buckets map[float64]uint64, + labelValues ...string, +) Metric { + m, err := NewConstHistogram(desc, count, sum, buckets, labelValues...) + if err != nil { + panic(err) + } + return m +} + +type buckSort []*dto.Bucket + +func (s buckSort) Len() int { + return len(s) +} + +func (s buckSort) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s buckSort) Less(i, j int) bool { + return s[i].GetUpperBound() < s[j].GetUpperBound() +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/http.go b/vendor/github.com/prometheus/client_golang/prometheus/http.go new file mode 100644 index 00000000..67ee5ac7 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/http.go @@ -0,0 +1,490 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "bufio" + "bytes" + "compress/gzip" + "fmt" + "io" + "net" + "net/http" + "strconv" + "strings" + "sync" + "time" + + "github.com/prometheus/common/expfmt" +) + +// TODO(beorn7): Remove this whole file. It is a partial mirror of +// promhttp/http.go (to avoid circular import chains) where everything HTTP +// related should live. The functions here are just for avoiding +// breakage. Everything is deprecated. + +const ( + contentTypeHeader = "Content-Type" + contentLengthHeader = "Content-Length" + contentEncodingHeader = "Content-Encoding" + acceptEncodingHeader = "Accept-Encoding" +) + +var bufPool sync.Pool + +func getBuf() *bytes.Buffer { + buf := bufPool.Get() + if buf == nil { + return &bytes.Buffer{} + } + return buf.(*bytes.Buffer) +} + +func giveBuf(buf *bytes.Buffer) { + buf.Reset() + bufPool.Put(buf) +} + +// Handler returns an HTTP handler for the DefaultGatherer. It is +// already instrumented with InstrumentHandler (using "prometheus" as handler +// name). +// +// Deprecated: Please note the issues described in the doc comment of +// InstrumentHandler. You might want to consider using promhttp.Handler instead +// (which is non instrumented). 
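Editor's note: because the Handler and UninstrumentedHandler functions that follow are deprecated in favor of the promhttp sub-package (as their own comments state), a short migration sketch may help readers of this vendored code. The counter name is hypothetical; the sketch assumes the promhttp API vendored alongside this file.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Preferred replacement for the deprecated prometheus.Handler() defined
	// below: expose the default gatherer via the promhttp sub-package.
	http.Handle("/metrics", promhttp.Handler())

	// Alternatively, expose a custom registry. Register (unlike MustRegister)
	// returns the registration error so it can be handled explicitly.
	reg := prometheus.NewRegistry()
	requests := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_requests_total",
		Help: "Requests handled (hypothetical metric).",
	})
	if err := reg.Register(requests); err != nil {
		log.Printf("registration failed: %v", err)
	}
	http.Handle("/custom-metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```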
+func Handler() http.Handler { + return InstrumentHandler("prometheus", UninstrumentedHandler()) +} + +// UninstrumentedHandler returns an HTTP handler for the DefaultGatherer. +// +// Deprecated: Use promhttp.Handler instead. See there for further documentation. +func UninstrumentedHandler() http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { + mfs, err := DefaultGatherer.Gather() + if err != nil { + http.Error(w, "An error has occurred during metrics collection:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + + contentType := expfmt.Negotiate(req.Header) + buf := getBuf() + defer giveBuf(buf) + writer, encoding := decorateWriter(req, buf) + enc := expfmt.NewEncoder(writer, contentType) + var lastErr error + for _, mf := range mfs { + if err := enc.Encode(mf); err != nil { + lastErr = err + http.Error(w, "An error has occurred during metrics encoding:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + } + if closer, ok := writer.(io.Closer); ok { + closer.Close() + } + if lastErr != nil && buf.Len() == 0 { + http.Error(w, "No metrics encoded, last error:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + header := w.Header() + header.Set(contentTypeHeader, string(contentType)) + header.Set(contentLengthHeader, fmt.Sprint(buf.Len())) + if encoding != "" { + header.Set(contentEncodingHeader, encoding) + } + w.Write(buf.Bytes()) + }) +} + +// decorateWriter wraps a writer to handle gzip compression if requested. It +// returns the decorated writer and the appropriate "Content-Encoding" header +// (which is empty if no compression is enabled). +func decorateWriter(request *http.Request, writer io.Writer) (io.Writer, string) { + header := request.Header.Get(acceptEncodingHeader) + parts := strings.Split(header, ",") + for _, part := range parts { + part := strings.TrimSpace(part) + if part == "gzip" || strings.HasPrefix(part, "gzip;") { + return gzip.NewWriter(writer), "gzip" + } + } + return writer, "" +} + +var instLabels = []string{"method", "code"} + +type nower interface { + Now() time.Time +} + +type nowFunc func() time.Time + +func (n nowFunc) Now() time.Time { + return n() +} + +var now nower = nowFunc(func() time.Time { + return time.Now() +}) + +func nowSeries(t ...time.Time) nower { + return nowFunc(func() time.Time { + defer func() { + t = t[1:] + }() + + return t[0] + }) +} + +// InstrumentHandler wraps the given HTTP handler for instrumentation. It +// registers four metric collectors (if not already done) and reports HTTP +// metrics to the (newly or already) registered collectors: http_requests_total +// (CounterVec), http_request_duration_microseconds (Summary), +// http_request_size_bytes (Summary), http_response_size_bytes (Summary). Each +// has a constant label named "handler" with the provided handlerName as +// value. http_requests_total is a metric vector partitioned by HTTP method +// (label name "method") and HTTP status code (label name "code"). +// +// Deprecated: InstrumentHandler has several issues: +// +// - It uses Summaries rather than Histograms. Summaries are not useful if +// aggregation across multiple instances is required. +// +// - It uses microseconds as unit, which is deprecated and should be replaced by +// seconds. +// +// - The size of the request is calculated in a separate goroutine. Since this +// calculator requires access to the request header, it creates a race with +// any writes to the header performed during request handling. 
+// httputil.ReverseProxy is a prominent example for a handler +// performing such writes. +// +// Upcoming versions of this package will provide ways of instrumenting HTTP +// handlers that are more flexible and have fewer issues. Please prefer direct +// instrumentation in the meantime. +func InstrumentHandler(handlerName string, handler http.Handler) http.HandlerFunc { + return InstrumentHandlerFunc(handlerName, handler.ServeHTTP) +} + +// InstrumentHandlerFunc wraps the given function for instrumentation. It +// otherwise works in the same way as InstrumentHandler (and shares the same +// issues). +// +// Deprecated: InstrumentHandlerFunc is deprecated for the same reasons as +// InstrumentHandler is. +func InstrumentHandlerFunc(handlerName string, handlerFunc func(http.ResponseWriter, *http.Request)) http.HandlerFunc { + return InstrumentHandlerFuncWithOpts( + SummaryOpts{ + Subsystem: "http", + ConstLabels: Labels{"handler": handlerName}, + }, + handlerFunc, + ) +} + +// InstrumentHandlerWithOpts works like InstrumentHandler (and shares the same +// issues) but provides more flexibility (at the cost of a more complex call +// syntax). As InstrumentHandler, this function registers four metric +// collectors, but it uses the provided SummaryOpts to create them. However, the +// fields "Name" and "Help" in the SummaryOpts are ignored. "Name" is replaced +// by "requests_total", "request_duration_microseconds", "request_size_bytes", +// and "response_size_bytes", respectively. "Help" is replaced by an appropriate +// help string. The names of the variable labels of the http_requests_total +// CounterVec are "method" (get, post, etc.), and "code" (HTTP status code). +// +// If InstrumentHandlerWithOpts is called as follows, it mimics exactly the +// behavior of InstrumentHandler: +// +// prometheus.InstrumentHandlerWithOpts( +// prometheus.SummaryOpts{ +// Subsystem: "http", +// ConstLabels: prometheus.Labels{"handler": handlerName}, +// }, +// handler, +// ) +// +// Technical detail: "requests_total" is a CounterVec, not a SummaryVec, so it +// cannot use SummaryOpts. Instead, a CounterOpts struct is created internally, +// and all its fields are set to the equally named fields in the provided +// SummaryOpts. +// +// Deprecated: InstrumentHandlerWithOpts is deprecated for the same reasons as +// InstrumentHandler is. +func InstrumentHandlerWithOpts(opts SummaryOpts, handler http.Handler) http.HandlerFunc { + return InstrumentHandlerFuncWithOpts(opts, handler.ServeHTTP) +} + +// InstrumentHandlerFuncWithOpts works like InstrumentHandlerFunc (and shares +// the same issues) but provides more flexibility (at the cost of a more complex +// call syntax). See InstrumentHandlerWithOpts for details how the provided +// SummaryOpts are used. +// +// Deprecated: InstrumentHandlerFuncWithOpts is deprecated for the same reasons +// as InstrumentHandler is. +func InstrumentHandlerFuncWithOpts(opts SummaryOpts, handlerFunc func(http.ResponseWriter, *http.Request)) http.HandlerFunc { + reqCnt := NewCounterVec( + CounterOpts{ + Namespace: opts.Namespace, + Subsystem: opts.Subsystem, + Name: "requests_total", + Help: "Total number of HTTP requests made.", + ConstLabels: opts.ConstLabels, + }, + instLabels, + ) + + opts.Name = "request_duration_microseconds" + opts.Help = "The HTTP request latencies in microseconds." + reqDur := NewSummary(opts) + + opts.Name = "request_size_bytes" + opts.Help = "The HTTP request sizes in bytes." 
+ reqSz := NewSummary(opts) + + opts.Name = "response_size_bytes" + opts.Help = "The HTTP response sizes in bytes." + resSz := NewSummary(opts) + + regReqCnt := MustRegisterOrGet(reqCnt).(*CounterVec) + regReqDur := MustRegisterOrGet(reqDur).(Summary) + regReqSz := MustRegisterOrGet(reqSz).(Summary) + regResSz := MustRegisterOrGet(resSz).(Summary) + + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + now := time.Now() + + delegate := &responseWriterDelegator{ResponseWriter: w} + out := make(chan int) + urlLen := 0 + if r.URL != nil { + urlLen = len(r.URL.String()) + } + go computeApproximateRequestSize(r, out, urlLen) + + _, cn := w.(http.CloseNotifier) + _, fl := w.(http.Flusher) + _, hj := w.(http.Hijacker) + _, rf := w.(io.ReaderFrom) + var rw http.ResponseWriter + if cn && fl && hj && rf { + rw = &fancyResponseWriterDelegator{delegate} + } else { + rw = delegate + } + handlerFunc(rw, r) + + elapsed := float64(time.Since(now)) / float64(time.Microsecond) + + method := sanitizeMethod(r.Method) + code := sanitizeCode(delegate.status) + regReqCnt.WithLabelValues(method, code).Inc() + regReqDur.Observe(elapsed) + regResSz.Observe(float64(delegate.written)) + regReqSz.Observe(float64(<-out)) + }) +} + +func computeApproximateRequestSize(r *http.Request, out chan int, s int) { + s += len(r.Method) + s += len(r.Proto) + for name, values := range r.Header { + s += len(name) + for _, value := range values { + s += len(value) + } + } + s += len(r.Host) + + // N.B. r.Form and r.MultipartForm are assumed to be included in r.URL. + + if r.ContentLength != -1 { + s += int(r.ContentLength) + } + out <- s +} + +type responseWriterDelegator struct { + http.ResponseWriter + + handler, method string + status int + written int64 + wroteHeader bool +} + +func (r *responseWriterDelegator) WriteHeader(code int) { + r.status = code + r.wroteHeader = true + r.ResponseWriter.WriteHeader(code) +} + +func (r *responseWriterDelegator) Write(b []byte) (int, error) { + if !r.wroteHeader { + r.WriteHeader(http.StatusOK) + } + n, err := r.ResponseWriter.Write(b) + r.written += int64(n) + return n, err +} + +type fancyResponseWriterDelegator struct { + *responseWriterDelegator +} + +func (f *fancyResponseWriterDelegator) CloseNotify() <-chan bool { + return f.ResponseWriter.(http.CloseNotifier).CloseNotify() +} + +func (f *fancyResponseWriterDelegator) Flush() { + f.ResponseWriter.(http.Flusher).Flush() +} + +func (f *fancyResponseWriterDelegator) Hijack() (net.Conn, *bufio.ReadWriter, error) { + return f.ResponseWriter.(http.Hijacker).Hijack() +} + +func (f *fancyResponseWriterDelegator) ReadFrom(r io.Reader) (int64, error) { + if !f.wroteHeader { + f.WriteHeader(http.StatusOK) + } + n, err := f.ResponseWriter.(io.ReaderFrom).ReadFrom(r) + f.written += n + return n, err +} + +func sanitizeMethod(m string) string { + switch m { + case "GET", "get": + return "get" + case "PUT", "put": + return "put" + case "HEAD", "head": + return "head" + case "POST", "post": + return "post" + case "DELETE", "delete": + return "delete" + case "CONNECT", "connect": + return "connect" + case "OPTIONS", "options": + return "options" + case "NOTIFY", "notify": + return "notify" + default: + return strings.ToLower(m) + } +} + +func sanitizeCode(s int) string { + switch s { + case 100: + return "100" + case 101: + return "101" + + case 200: + return "200" + case 201: + return "201" + case 202: + return "202" + case 203: + return "203" + case 204: + return "204" + case 205: + return "205" + case 206: + return "206" + + 
case 300: + return "300" + case 301: + return "301" + case 302: + return "302" + case 304: + return "304" + case 305: + return "305" + case 307: + return "307" + + case 400: + return "400" + case 401: + return "401" + case 402: + return "402" + case 403: + return "403" + case 404: + return "404" + case 405: + return "405" + case 406: + return "406" + case 407: + return "407" + case 408: + return "408" + case 409: + return "409" + case 410: + return "410" + case 411: + return "411" + case 412: + return "412" + case 413: + return "413" + case 414: + return "414" + case 415: + return "415" + case 416: + return "416" + case 417: + return "417" + case 418: + return "418" + + case 500: + return "500" + case 501: + return "501" + case 502: + return "502" + case 503: + return "503" + case 504: + return "504" + case 505: + return "505" + + case 428: + return "428" + case 429: + return "429" + case 431: + return "431" + case 511: + return "511" + + default: + return strconv.Itoa(s) + } +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/metric.go b/vendor/github.com/prometheus/client_golang/prometheus/metric.go new file mode 100644 index 00000000..d4063d98 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/metric.go @@ -0,0 +1,166 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "strings" + + dto "github.com/prometheus/client_model/go" +) + +const separatorByte byte = 255 + +// A Metric models a single sample value with its meta data being exported to +// Prometheus. Implementations of Metric in this package are Gauge, Counter, +// Histogram, Summary, and Untyped. +type Metric interface { + // Desc returns the descriptor for the Metric. This method idempotently + // returns the same descriptor throughout the lifetime of the + // Metric. The returned descriptor is immutable by contract. A Metric + // unable to describe itself must return an invalid descriptor (created + // with NewInvalidDesc). + Desc() *Desc + // Write encodes the Metric into a "Metric" Protocol Buffer data + // transmission object. + // + // Metric implementations must observe concurrency safety as reads of + // this metric may occur at any time, and any blocking occurs at the + // expense of total performance of rendering all registered + // metrics. Ideally, Metric implementations should support concurrent + // readers. + // + // While populating dto.Metric, it is the responsibility of the + // implementation to ensure validity of the Metric protobuf (like valid + // UTF-8 strings or syntactically valid metric and label names). It is + // recommended to sort labels lexicographically. (Implementers may find + // LabelPairSorter useful for that.) Callers of Write should still make + // sure of sorting if they depend on it. + Write(*dto.Metric) error + // TODO(beorn7): The original rationale of passing in a pre-allocated + // dto.Metric protobuf to save allocations has disappeared. 
The + // signature of this method should be changed to "Write() (*dto.Metric, + // error)". +} + +// Opts bundles the options for creating most Metric types. Each metric +// implementation XXX has its own XXXOpts type, but in most cases, it is just be +// an alias of this type (which might change when the requirement arises.) +// +// It is mandatory to set Name and Help to a non-empty string. All other fields +// are optional and can safely be left at their zero value. +type Opts struct { + // Namespace, Subsystem, and Name are components of the fully-qualified + // name of the Metric (created by joining these components with + // "_"). Only Name is mandatory, the others merely help structuring the + // name. Note that the fully-qualified name of the metric must be a + // valid Prometheus metric name. + Namespace string + Subsystem string + Name string + + // Help provides information about this metric. Mandatory! + // + // Metrics with the same fully-qualified name must have the same Help + // string. + Help string + + // ConstLabels are used to attach fixed labels to this metric. Metrics + // with the same fully-qualified name must have the same label names in + // their ConstLabels. + // + // Note that in most cases, labels have a value that varies during the + // lifetime of a process. Those labels are usually managed with a metric + // vector collector (like CounterVec, GaugeVec, UntypedVec). ConstLabels + // serve only special purposes. One is for the special case where the + // value of a label does not change during the lifetime of a process, + // e.g. if the revision of the running binary is put into a + // label. Another, more advanced purpose is if more than one Collector + // needs to collect Metrics with the same fully-qualified name. In that + // case, those Metrics must differ in the values of their + // ConstLabels. See the Collector examples. + // + // If the value of a label never changes (not even between binaries), + // that label most likely should not be a label at all (but part of the + // metric name). + ConstLabels Labels +} + +// BuildFQName joins the given three name components by "_". Empty name +// components are ignored. If the name parameter itself is empty, an empty +// string is returned, no matter what. Metric implementations included in this +// library use this function internally to generate the fully-qualified metric +// name from the name component in their Opts. Users of the library will only +// need this function if they implement their own Metric or instantiate a Desc +// (with NewDesc) directly. +func BuildFQName(namespace, subsystem, name string) string { + if name == "" { + return "" + } + switch { + case namespace != "" && subsystem != "": + return strings.Join([]string{namespace, subsystem, name}, "_") + case namespace != "": + return strings.Join([]string{namespace, name}, "_") + case subsystem != "": + return strings.Join([]string{subsystem, name}, "_") + } + return name +} + +// LabelPairSorter implements sort.Interface. It is used to sort a slice of +// dto.LabelPair pointers. This is useful for implementing the Write method of +// custom metrics. 
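+//
+// A minimal, hypothetical sketch of using it inside a custom Write method
+// (the variable dtoMetric is illustrative and not part of this package):
+//
+//    sort.Sort(prometheus.LabelPairSorter(dtoMetric.Label))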
+type LabelPairSorter []*dto.LabelPair + +func (s LabelPairSorter) Len() int { + return len(s) +} + +func (s LabelPairSorter) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s LabelPairSorter) Less(i, j int) bool { + return s[i].GetName() < s[j].GetName() +} + +type hashSorter []uint64 + +func (s hashSorter) Len() int { + return len(s) +} + +func (s hashSorter) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s hashSorter) Less(i, j int) bool { + return s[i] < s[j] +} + +type invalidMetric struct { + desc *Desc + err error +} + +// NewInvalidMetric returns a metric whose Write method always returns the +// provided error. It is useful if a Collector finds itself unable to collect +// a metric and wishes to report an error to the registry. +func NewInvalidMetric(desc *Desc, err error) Metric { + return &invalidMetric{desc, err} +} + +func (m *invalidMetric) Desc() *Desc { return m.desc } + +func (m *invalidMetric) Write(*dto.Metric) error { return m.err } diff --git a/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go new file mode 100644 index 00000000..e31e62e7 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go @@ -0,0 +1,142 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import "github.com/prometheus/procfs" + +type processCollector struct { + pid int + collectFn func(chan<- Metric) + pidFn func() (int, error) + cpuTotal Counter + openFDs, maxFDs Gauge + vsize, rss Gauge + startTime Gauge +} + +// NewProcessCollector returns a collector which exports the current state of +// process metrics including cpu, memory and file descriptor usage as well as +// the process start time for the given process id under the given namespace. +func NewProcessCollector(pid int, namespace string) Collector { + return NewProcessCollectorPIDFn( + func() (int, error) { return pid, nil }, + namespace, + ) +} + +// NewProcessCollectorPIDFn returns a collector which exports the current state +// of process metrics including cpu, memory and file descriptor usage as well +// as the process start time under the given namespace. The given pidFn is +// called on each collect and is used to determine the process to export +// metrics for. 
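+//
+// A usage sketch (the pid function and the "myapp" namespace shown here are
+// illustrative only):
+//
+//    prometheus.MustRegister(prometheus.NewProcessCollectorPIDFn(
+//        func() (int, error) { return os.Getpid(), nil },
+//        "myapp",
+//    ))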
+func NewProcessCollectorPIDFn( + pidFn func() (int, error), + namespace string, +) Collector { + c := processCollector{ + pidFn: pidFn, + collectFn: func(chan<- Metric) {}, + + cpuTotal: NewCounter(CounterOpts{ + Namespace: namespace, + Name: "process_cpu_seconds_total", + Help: "Total user and system CPU time spent in seconds.", + }), + openFDs: NewGauge(GaugeOpts{ + Namespace: namespace, + Name: "process_open_fds", + Help: "Number of open file descriptors.", + }), + maxFDs: NewGauge(GaugeOpts{ + Namespace: namespace, + Name: "process_max_fds", + Help: "Maximum number of open file descriptors.", + }), + vsize: NewGauge(GaugeOpts{ + Namespace: namespace, + Name: "process_virtual_memory_bytes", + Help: "Virtual memory size in bytes.", + }), + rss: NewGauge(GaugeOpts{ + Namespace: namespace, + Name: "process_resident_memory_bytes", + Help: "Resident memory size in bytes.", + }), + startTime: NewGauge(GaugeOpts{ + Namespace: namespace, + Name: "process_start_time_seconds", + Help: "Start time of the process since unix epoch in seconds.", + }), + } + + // Set up process metric collection if supported by the runtime. + if _, err := procfs.NewStat(); err == nil { + c.collectFn = c.processCollect + } + + return &c +} + +// Describe returns all descriptions of the collector. +func (c *processCollector) Describe(ch chan<- *Desc) { + ch <- c.cpuTotal.Desc() + ch <- c.openFDs.Desc() + ch <- c.maxFDs.Desc() + ch <- c.vsize.Desc() + ch <- c.rss.Desc() + ch <- c.startTime.Desc() +} + +// Collect returns the current state of all metrics of the collector. +func (c *processCollector) Collect(ch chan<- Metric) { + c.collectFn(ch) +} + +// TODO(ts): Bring back error reporting by reverting 7faf9e7 as soon as the +// client allows users to configure the error behavior. +func (c *processCollector) processCollect(ch chan<- Metric) { + pid, err := c.pidFn() + if err != nil { + return + } + + p, err := procfs.NewProc(pid) + if err != nil { + return + } + + if stat, err := p.NewStat(); err == nil { + c.cpuTotal.Set(stat.CPUTime()) + ch <- c.cpuTotal + c.vsize.Set(float64(stat.VirtualMemory())) + ch <- c.vsize + c.rss.Set(float64(stat.ResidentMemory())) + ch <- c.rss + + if startTime, err := stat.StartTime(); err == nil { + c.startTime.Set(startTime) + ch <- c.startTime + } + } + + if fds, err := p.FileDescriptorsLen(); err == nil { + c.openFDs.Set(float64(fds)) + ch <- c.openFDs + } + + if limits, err := p.NewLimits(); err == nil { + c.maxFDs.Set(float64(limits.OpenFiles)) + ch <- c.maxFDs + } +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go new file mode 100644 index 00000000..b6dd5a26 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go @@ -0,0 +1,201 @@ +// Copyright 2016 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Copyright (c) 2013, The Prometheus Authors +// All rights reserved. 
+// +// Use of this source code is governed by a BSD-style license that can be found +// in the LICENSE file. + +// Package promhttp contains functions to create http.Handler instances to +// expose Prometheus metrics via HTTP. In later versions of this package, it +// will also contain tooling to instrument instances of http.Handler and +// http.RoundTripper. +// +// promhttp.Handler acts on the prometheus.DefaultGatherer. With HandlerFor, +// you can create a handler for a custom registry or anything that implements +// the Gatherer interface. It also allows to create handlers that act +// differently on errors or allow to log errors. +package promhttp + +import ( + "bytes" + "compress/gzip" + "fmt" + "io" + "net/http" + "strings" + "sync" + + "github.com/prometheus/common/expfmt" + + "github.com/prometheus/client_golang/prometheus" +) + +const ( + contentTypeHeader = "Content-Type" + contentLengthHeader = "Content-Length" + contentEncodingHeader = "Content-Encoding" + acceptEncodingHeader = "Accept-Encoding" +) + +var bufPool sync.Pool + +func getBuf() *bytes.Buffer { + buf := bufPool.Get() + if buf == nil { + return &bytes.Buffer{} + } + return buf.(*bytes.Buffer) +} + +func giveBuf(buf *bytes.Buffer) { + buf.Reset() + bufPool.Put(buf) +} + +// Handler returns an HTTP handler for the prometheus.DefaultGatherer. The +// Handler uses the default HandlerOpts, i.e. report the first error as an HTTP +// error, no error logging, and compression if requested by the client. +// +// If you want to create a Handler for the DefaultGatherer with different +// HandlerOpts, create it with HandlerFor with prometheus.DefaultGatherer and +// your desired HandlerOpts. +func Handler() http.Handler { + return HandlerFor(prometheus.DefaultGatherer, HandlerOpts{}) +} + +// HandlerFor returns an http.Handler for the provided Gatherer. The behavior +// of the Handler is defined by the provided HandlerOpts. +func HandlerFor(reg prometheus.Gatherer, opts HandlerOpts) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { + mfs, err := reg.Gather() + if err != nil { + if opts.ErrorLog != nil { + opts.ErrorLog.Println("error gathering metrics:", err) + } + switch opts.ErrorHandling { + case PanicOnError: + panic(err) + case ContinueOnError: + if len(mfs) == 0 { + http.Error(w, "No metrics gathered, last error:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + case HTTPErrorOnError: + http.Error(w, "An error has occurred during metrics gathering:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + } + + contentType := expfmt.Negotiate(req.Header) + buf := getBuf() + defer giveBuf(buf) + writer, encoding := decorateWriter(req, buf, opts.DisableCompression) + enc := expfmt.NewEncoder(writer, contentType) + var lastErr error + for _, mf := range mfs { + if err := enc.Encode(mf); err != nil { + lastErr = err + if opts.ErrorLog != nil { + opts.ErrorLog.Println("error encoding metric family:", err) + } + switch opts.ErrorHandling { + case PanicOnError: + panic(err) + case ContinueOnError: + // Handled later. 
+ case HTTPErrorOnError: + http.Error(w, "An error has occurred during metrics encoding:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + } + } + if closer, ok := writer.(io.Closer); ok { + closer.Close() + } + if lastErr != nil && buf.Len() == 0 { + http.Error(w, "No metrics encoded, last error:\n\n"+err.Error(), http.StatusInternalServerError) + return + } + header := w.Header() + header.Set(contentTypeHeader, string(contentType)) + header.Set(contentLengthHeader, fmt.Sprint(buf.Len())) + if encoding != "" { + header.Set(contentEncodingHeader, encoding) + } + w.Write(buf.Bytes()) + // TODO(beorn7): Consider streaming serving of metrics. + }) +} + +// HandlerErrorHandling defines how a Handler serving metrics will handle +// errors. +type HandlerErrorHandling int + +// These constants cause handlers serving metrics to behave as described if +// errors are encountered. +const ( + // Serve an HTTP status code 500 upon the first error + // encountered. Report the error message in the body. + HTTPErrorOnError HandlerErrorHandling = iota + // Ignore errors and try to serve as many metrics as possible. However, + // if no metrics can be served, serve an HTTP status code 500 and the + // last error message in the body. Only use this in deliberate "best + // effort" metrics collection scenarios. It is recommended to at least + // log errors (by providing an ErrorLog in HandlerOpts) to not mask + // errors completely. + ContinueOnError + // Panic upon the first error encountered (useful for "crash only" apps). + PanicOnError +) + +// Logger is the minimal interface HandlerOpts needs for logging. Note that +// log.Logger from the standard library implements this interface, and it is +// easy to implement by custom loggers, if they don't do so already anyway. +type Logger interface { + Println(v ...interface{}) +} + +// HandlerOpts specifies options how to serve metrics via an http.Handler. The +// zero value of HandlerOpts is a reasonable default. +type HandlerOpts struct { + // ErrorLog specifies an optional logger for errors collecting and + // serving metrics. If nil, errors are not logged at all. + ErrorLog Logger + // ErrorHandling defines how errors are handled. Note that errors are + // logged regardless of the configured ErrorHandling provided ErrorLog + // is not nil. + ErrorHandling HandlerErrorHandling + // If DisableCompression is true, the handler will never compress the + // response, even if requested by the client. + DisableCompression bool +} + +// decorateWriter wraps a writer to handle gzip compression if requested. It +// returns the decorated writer and the appropriate "Content-Encoding" header +// (which is empty if no compression is enabled). 
+func decorateWriter(request *http.Request, writer io.Writer, compressionDisabled bool) (io.Writer, string) { + if compressionDisabled { + return writer, "" + } + header := request.Header.Get(acceptEncodingHeader) + parts := strings.Split(header, ",") + for _, part := range parts { + part := strings.TrimSpace(part) + if part == "gzip" || strings.HasPrefix(part, "gzip;") { + return gzip.NewWriter(writer), "gzip" + } + } + return writer, "" +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/registry.go b/vendor/github.com/prometheus/client_golang/prometheus/registry.go new file mode 100644 index 00000000..32a3986b --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/registry.go @@ -0,0 +1,806 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "bytes" + "errors" + "fmt" + "os" + "sort" + "sync" + + "github.com/golang/protobuf/proto" + + dto "github.com/prometheus/client_model/go" +) + +const ( + // Capacity for the channel to collect metrics and descriptors. + capMetricChan = 1000 + capDescChan = 10 +) + +// DefaultRegisterer and DefaultGatherer are the implementations of the +// Registerer and Gatherer interface a number of convenience functions in this +// package act on. Initially, both variables point to the same Registry, which +// has a process collector (see NewProcessCollector) and a Go collector (see +// NewGoCollector) already registered. This approach to keep default instances +// as global state mirrors the approach of other packages in the Go standard +// library. Note that there are caveats. Change the variables with caution and +// only if you understand the consequences. Users who want to avoid global state +// altogether should not use the convenience function and act on custom +// instances instead. +var ( + defaultRegistry = NewRegistry() + DefaultRegisterer Registerer = defaultRegistry + DefaultGatherer Gatherer = defaultRegistry +) + +func init() { + MustRegister(NewProcessCollector(os.Getpid(), "")) + MustRegister(NewGoCollector()) +} + +// NewRegistry creates a new vanilla Registry without any Collectors +// pre-registered. +func NewRegistry() *Registry { + return &Registry{ + collectorsByID: map[uint64]Collector{}, + descIDs: map[uint64]struct{}{}, + dimHashesByName: map[string]uint64{}, + } +} + +// NewPedanticRegistry returns a registry that checks during collection if each +// collected Metric is consistent with its reported Desc, and if the Desc has +// actually been registered with the registry. +// +// Usually, a Registry will be happy as long as the union of all collected +// Metrics is consistent and valid even if some metrics are not consistent with +// their own Desc or a Desc provided by their registered Collector. Well-behaved +// Collectors and Metrics will only provide consistent Descs. This Registry is +// useful to test the implementation of Collectors and Metrics. 
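+//
+// For example, a test might do something along these lines (myCollector is a
+// placeholder for the Collector under test):
+//
+//    reg := prometheus.NewPedanticRegistry()
+//    if err := reg.Register(myCollector); err != nil {
+//        // Registration itself failed.
+//    }
+//    if _, err := reg.Gather(); err != nil {
+//        // The collected metrics are inconsistent with their Descs.
+//    }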
+func NewPedanticRegistry() *Registry { + r := NewRegistry() + r.pedanticChecksEnabled = true + return r +} + +// Registerer is the interface for the part of a registry in charge of +// registering and unregistering. Users of custom registries should use +// Registerer as type for registration purposes (rather then the Registry type +// directly). In that way, they are free to use custom Registerer implementation +// (e.g. for testing purposes). +type Registerer interface { + // Register registers a new Collector to be included in metrics + // collection. It returns an error if the descriptors provided by the + // Collector are invalid or if they — in combination with descriptors of + // already registered Collectors — do not fulfill the consistency and + // uniqueness criteria described in the documentation of metric.Desc. + // + // If the provided Collector is equal to a Collector already registered + // (which includes the case of re-registering the same Collector), the + // returned error is an instance of AlreadyRegisteredError, which + // contains the previously registered Collector. + // + // It is in general not safe to register the same Collector multiple + // times concurrently. + Register(Collector) error + // MustRegister works like Register but registers any number of + // Collectors and panics upon the first registration that causes an + // error. + MustRegister(...Collector) + // Unregister unregisters the Collector that equals the Collector passed + // in as an argument. (Two Collectors are considered equal if their + // Describe method yields the same set of descriptors.) The function + // returns whether a Collector was unregistered. + // + // Note that even after unregistering, it will not be possible to + // register a new Collector that is inconsistent with the unregistered + // Collector, e.g. a Collector collecting metrics with the same name but + // a different help string. The rationale here is that the same registry + // instance must only collect consistent metrics throughout its + // lifetime. + Unregister(Collector) bool +} + +// Gatherer is the interface for the part of a registry in charge of gathering +// the collected metrics into a number of MetricFamilies. The Gatherer interface +// comes with the same general implication as described for the Registerer +// interface. +type Gatherer interface { + // Gather calls the Collect method of the registered Collectors and then + // gathers the collected metrics into a lexicographically sorted slice + // of MetricFamily protobufs. Even if an error occurs, Gather attempts + // to gather as many metrics as possible. Hence, if a non-nil error is + // returned, the returned MetricFamily slice could be nil (in case of a + // fatal error that prevented any meaningful metric collection) or + // contain a number of MetricFamily protobufs, some of which might be + // incomplete, and some might be missing altogether. The returned error + // (which might be a MultiError) explains the details. In scenarios + // where complete collection is critical, the returned MetricFamily + // protobufs should be disregarded if the returned error is non-nil. + Gather() ([]*dto.MetricFamily, error) +} + +// Register registers the provided Collector with the DefaultRegisterer. +// +// Register is a shortcut for DefaultRegisterer.Register(c). See there for more +// details. 
+func Register(c Collector) error { + return DefaultRegisterer.Register(c) +} + +// MustRegister registers the provided Collectors with the DefaultRegisterer and +// panics if any error occurs. +// +// MustRegister is a shortcut for DefaultRegisterer.MustRegister(cs...). See +// there for more details. +func MustRegister(cs ...Collector) { + DefaultRegisterer.MustRegister(cs...) +} + +// RegisterOrGet registers the provided Collector with the DefaultRegisterer and +// returns the Collector, unless an equal Collector was registered before, in +// which case that Collector is returned. +// +// Deprecated: RegisterOrGet is merely a convenience function for the +// implementation as described in the documentation for +// AlreadyRegisteredError. As the use case is relatively rare, this function +// will be removed in a future version of this package to clean up the +// namespace. +func RegisterOrGet(c Collector) (Collector, error) { + if err := Register(c); err != nil { + if are, ok := err.(AlreadyRegisteredError); ok { + return are.ExistingCollector, nil + } + return nil, err + } + return c, nil +} + +// MustRegisterOrGet behaves like RegisterOrGet but panics instead of returning +// an error. +// +// Deprecated: This is deprecated for the same reason RegisterOrGet is. See +// there for details. +func MustRegisterOrGet(c Collector) Collector { + c, err := RegisterOrGet(c) + if err != nil { + panic(err) + } + return c +} + +// Unregister removes the registration of the provided Collector from the +// DefaultRegisterer. +// +// Unregister is a shortcut for DefaultRegisterer.Unregister(c). See there for +// more details. +func Unregister(c Collector) bool { + return DefaultRegisterer.Unregister(c) +} + +// GathererFunc turns a function into a Gatherer. +type GathererFunc func() ([]*dto.MetricFamily, error) + +// Gather implements Gatherer. +func (gf GathererFunc) Gather() ([]*dto.MetricFamily, error) { + return gf() +} + +// SetMetricFamilyInjectionHook replaces the DefaultGatherer with one that +// gathers from the previous DefaultGatherers but then merges the MetricFamily +// protobufs returned from the provided hook function with the MetricFamily +// protobufs returned from the original DefaultGatherer. +// +// Deprecated: This function manipulates the DefaultGatherer variable. Consider +// the implications, i.e. don't do this concurrently with any uses of the +// DefaultGatherer. In the rare cases where you need to inject MetricFamily +// protobufs directly, it is recommended to use a custom Registry and combine it +// with a custom Gatherer using the Gatherers type (see +// there). SetMetricFamilyInjectionHook only exists for compatibility reasons +// with previous versions of this package. +func SetMetricFamilyInjectionHook(hook func() []*dto.MetricFamily) { + DefaultGatherer = Gatherers{ + DefaultGatherer, + GathererFunc(func() ([]*dto.MetricFamily, error) { return hook(), nil }), + } +} + +// AlreadyRegisteredError is returned by the Register method if the Collector to +// be registered has already been registered before, or a different Collector +// that collects the same metrics has been registered before. Registration fails +// in that case, but you can detect from the kind of error what has +// happened. The error contains fields for the existing Collector and the +// (rejected) new Collector that equals the existing one. This can be used to +// find out if an equal Collector has been registered before and switch over to +// using the old one, as demonstrated in the example. 
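+//
+// A sketch of that pattern (the metric name and options are illustrative):
+//
+//    reqCount := prometheus.NewCounter(prometheus.CounterOpts{
+//        Name: "requests_total",
+//        Help: "Total number of requests.",
+//    })
+//    if err := prometheus.Register(reqCount); err != nil {
+//        if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
+//            // A Counter with an equal Desc was registered before; use that one.
+//            reqCount = are.ExistingCollector.(prometheus.Counter)
+//        } else {
+//            panic(err)
+//        }
+//    }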
+type AlreadyRegisteredError struct { + ExistingCollector, NewCollector Collector +} + +func (err AlreadyRegisteredError) Error() string { + return "duplicate metrics collector registration attempted" +} + +// MultiError is a slice of errors implementing the error interface. It is used +// by a Gatherer to report multiple errors during MetricFamily gathering. +type MultiError []error + +func (errs MultiError) Error() string { + if len(errs) == 0 { + return "" + } + buf := &bytes.Buffer{} + fmt.Fprintf(buf, "%d error(s) occurred:", len(errs)) + for _, err := range errs { + fmt.Fprintf(buf, "\n* %s", err) + } + return buf.String() +} + +// MaybeUnwrap returns nil if len(errs) is 0. It returns the first and only +// contained error as error if len(errs is 1). In all other cases, it returns +// the MultiError directly. This is helpful for returning a MultiError in a way +// that only uses the MultiError if needed. +func (errs MultiError) MaybeUnwrap() error { + switch len(errs) { + case 0: + return nil + case 1: + return errs[0] + default: + return errs + } +} + +// Registry registers Prometheus collectors, collects their metrics, and gathers +// them into MetricFamilies for exposition. It implements both Registerer and +// Gatherer. The zero value is not usable. Create instances with NewRegistry or +// NewPedanticRegistry. +type Registry struct { + mtx sync.RWMutex + collectorsByID map[uint64]Collector // ID is a hash of the descIDs. + descIDs map[uint64]struct{} + dimHashesByName map[string]uint64 + pedanticChecksEnabled bool +} + +// Register implements Registerer. +func (r *Registry) Register(c Collector) error { + var ( + descChan = make(chan *Desc, capDescChan) + newDescIDs = map[uint64]struct{}{} + newDimHashesByName = map[string]uint64{} + collectorID uint64 // Just a sum of all desc IDs. + duplicateDescErr error + ) + go func() { + c.Describe(descChan) + close(descChan) + }() + r.mtx.Lock() + defer r.mtx.Unlock() + // Coduct various tests... + for desc := range descChan { + + // Is the descriptor valid at all? + if desc.err != nil { + return fmt.Errorf("descriptor %s is invalid: %s", desc, desc.err) + } + + // Is the descID unique? + // (In other words: Is the fqName + constLabel combination unique?) + if _, exists := r.descIDs[desc.id]; exists { + duplicateDescErr = fmt.Errorf("descriptor %s already exists with the same fully-qualified name and const label values", desc) + } + // If it is not a duplicate desc in this collector, add it to + // the collectorID. (We allow duplicate descs within the same + // collector, but their existence must be a no-op.) + if _, exists := newDescIDs[desc.id]; !exists { + newDescIDs[desc.id] = struct{}{} + collectorID += desc.id + } + + // Are all the label names and the help string consistent with + // previous descriptors of the same name? + // First check existing descriptors... + if dimHash, exists := r.dimHashesByName[desc.fqName]; exists { + if dimHash != desc.dimHash { + return fmt.Errorf("a previously registered descriptor with the same fully-qualified name as %s has different label names or a different help string", desc) + } + } else { + // ...then check the new descriptors already seen. 
+ if dimHash, exists := newDimHashesByName[desc.fqName]; exists { + if dimHash != desc.dimHash { + return fmt.Errorf("descriptors reported by collector have inconsistent label names or help strings for the same fully-qualified name, offender is %s", desc) + } + } else { + newDimHashesByName[desc.fqName] = desc.dimHash + } + } + } + // Did anything happen at all? + if len(newDescIDs) == 0 { + return errors.New("collector has no descriptors") + } + if existing, exists := r.collectorsByID[collectorID]; exists { + return AlreadyRegisteredError{ + ExistingCollector: existing, + NewCollector: c, + } + } + // If the collectorID is new, but at least one of the descs existed + // before, we are in trouble. + if duplicateDescErr != nil { + return duplicateDescErr + } + + // Only after all tests have passed, actually register. + r.collectorsByID[collectorID] = c + for hash := range newDescIDs { + r.descIDs[hash] = struct{}{} + } + for name, dimHash := range newDimHashesByName { + r.dimHashesByName[name] = dimHash + } + return nil +} + +// Unregister implements Registerer. +func (r *Registry) Unregister(c Collector) bool { + var ( + descChan = make(chan *Desc, capDescChan) + descIDs = map[uint64]struct{}{} + collectorID uint64 // Just a sum of the desc IDs. + ) + go func() { + c.Describe(descChan) + close(descChan) + }() + for desc := range descChan { + if _, exists := descIDs[desc.id]; !exists { + collectorID += desc.id + descIDs[desc.id] = struct{}{} + } + } + + r.mtx.RLock() + if _, exists := r.collectorsByID[collectorID]; !exists { + r.mtx.RUnlock() + return false + } + r.mtx.RUnlock() + + r.mtx.Lock() + defer r.mtx.Unlock() + + delete(r.collectorsByID, collectorID) + for id := range descIDs { + delete(r.descIDs, id) + } + // dimHashesByName is left untouched as those must be consistent + // throughout the lifetime of a program. + return true +} + +// MustRegister implements Registerer. +func (r *Registry) MustRegister(cs ...Collector) { + for _, c := range cs { + if err := r.Register(c); err != nil { + panic(err) + } + } +} + +// Gather implements Gatherer. +func (r *Registry) Gather() ([]*dto.MetricFamily, error) { + var ( + metricChan = make(chan Metric, capMetricChan) + metricHashes = map[uint64]struct{}{} + dimHashes = map[string]uint64{} + wg sync.WaitGroup + errs MultiError // The collected errors to return in the end. + registeredDescIDs map[uint64]struct{} // Only used for pedantic checks + ) + + r.mtx.RLock() + metricFamiliesByName := make(map[string]*dto.MetricFamily, len(r.dimHashesByName)) + + // Scatter. + // (Collectors could be complex and slow, so we call them all at once.) + wg.Add(len(r.collectorsByID)) + go func() { + wg.Wait() + close(metricChan) + }() + for _, collector := range r.collectorsByID { + go func(collector Collector) { + defer wg.Done() + collector.Collect(metricChan) + }(collector) + } + + // In case pedantic checks are enabled, we have to copy the map before + // giving up the RLock. + if r.pedanticChecksEnabled { + registeredDescIDs = make(map[uint64]struct{}, len(r.descIDs)) + for id := range r.descIDs { + registeredDescIDs[id] = struct{}{} + } + } + + r.mtx.RUnlock() + + // Drain metricChan in case of premature return. + defer func() { + for _ = range metricChan { + } + }() + + // Gather. + for metric := range metricChan { + // This could be done concurrently, too, but it required locking + // of metricFamiliesByName (and of metricHashes if checks are + // enabled). Most likely not worth it. 
+ desc := metric.Desc() + dtoMetric := &dto.Metric{} + if err := metric.Write(dtoMetric); err != nil { + errs = append(errs, fmt.Errorf( + "error collecting metric %v: %s", desc, err, + )) + continue + } + metricFamily, ok := metricFamiliesByName[desc.fqName] + if ok { + if metricFamily.GetHelp() != desc.help { + errs = append(errs, fmt.Errorf( + "collected metric %s %s has help %q but should have %q", + desc.fqName, dtoMetric, desc.help, metricFamily.GetHelp(), + )) + continue + } + // TODO(beorn7): Simplify switch once Desc has type. + switch metricFamily.GetType() { + case dto.MetricType_COUNTER: + if dtoMetric.Counter == nil { + errs = append(errs, fmt.Errorf( + "collected metric %s %s should be a Counter", + desc.fqName, dtoMetric, + )) + continue + } + case dto.MetricType_GAUGE: + if dtoMetric.Gauge == nil { + errs = append(errs, fmt.Errorf( + "collected metric %s %s should be a Gauge", + desc.fqName, dtoMetric, + )) + continue + } + case dto.MetricType_SUMMARY: + if dtoMetric.Summary == nil { + errs = append(errs, fmt.Errorf( + "collected metric %s %s should be a Summary", + desc.fqName, dtoMetric, + )) + continue + } + case dto.MetricType_UNTYPED: + if dtoMetric.Untyped == nil { + errs = append(errs, fmt.Errorf( + "collected metric %s %s should be Untyped", + desc.fqName, dtoMetric, + )) + continue + } + case dto.MetricType_HISTOGRAM: + if dtoMetric.Histogram == nil { + errs = append(errs, fmt.Errorf( + "collected metric %s %s should be a Histogram", + desc.fqName, dtoMetric, + )) + continue + } + default: + panic("encountered MetricFamily with invalid type") + } + } else { + metricFamily = &dto.MetricFamily{} + metricFamily.Name = proto.String(desc.fqName) + metricFamily.Help = proto.String(desc.help) + // TODO(beorn7): Simplify switch once Desc has type. + switch { + case dtoMetric.Gauge != nil: + metricFamily.Type = dto.MetricType_GAUGE.Enum() + case dtoMetric.Counter != nil: + metricFamily.Type = dto.MetricType_COUNTER.Enum() + case dtoMetric.Summary != nil: + metricFamily.Type = dto.MetricType_SUMMARY.Enum() + case dtoMetric.Untyped != nil: + metricFamily.Type = dto.MetricType_UNTYPED.Enum() + case dtoMetric.Histogram != nil: + metricFamily.Type = dto.MetricType_HISTOGRAM.Enum() + default: + errs = append(errs, fmt.Errorf( + "empty metric collected: %s", dtoMetric, + )) + continue + } + metricFamiliesByName[desc.fqName] = metricFamily + } + if err := checkMetricConsistency(metricFamily, dtoMetric, metricHashes, dimHashes); err != nil { + errs = append(errs, err) + continue + } + if r.pedanticChecksEnabled { + // Is the desc registered at all? + if _, exist := registeredDescIDs[desc.id]; !exist { + errs = append(errs, fmt.Errorf( + "collected metric %s %s with unregistered descriptor %s", + metricFamily.GetName(), dtoMetric, desc, + )) + continue + } + if err := checkDescConsistency(metricFamily, dtoMetric, desc); err != nil { + errs = append(errs, err) + continue + } + } + metricFamily.Metric = append(metricFamily.Metric, dtoMetric) + } + return normalizeMetricFamilies(metricFamiliesByName), errs.MaybeUnwrap() +} + +// Gatherers is a slice of Gatherer instances that implements the Gatherer +// interface itself. Its Gather method calls Gather on all Gatherers in the +// slice in order and returns the merged results. Errors returned from the +// Gather calles are all returned in a flattened MultiError. Duplicate and +// inconsistent Metrics are skipped (first occurrence in slice order wins) and +// reported in the returned error. 
+// +// Gatherers can be used to merge the Gather results from multiple +// Registries. It also provides a way to directly inject existing MetricFamily +// protobufs into the gathering by creating a custom Gatherer with a Gather +// method that simply returns the existing MetricFamily protobufs. Note that no +// registration is involved (in contrast to Collector registration), so +// obviously registration-time checks cannot happen. Any inconsistencies between +// the gathered MetricFamilies are reported as errors by the Gather method, and +// inconsistent Metrics are dropped. Invalid parts of the MetricFamilies +// (e.g. syntactically invalid metric or label names) will go undetected. +type Gatherers []Gatherer + +// Gather implements Gatherer. +func (gs Gatherers) Gather() ([]*dto.MetricFamily, error) { + var ( + metricFamiliesByName = map[string]*dto.MetricFamily{} + metricHashes = map[uint64]struct{}{} + dimHashes = map[string]uint64{} + errs MultiError // The collected errors to return in the end. + ) + + for i, g := range gs { + mfs, err := g.Gather() + if err != nil { + if multiErr, ok := err.(MultiError); ok { + for _, err := range multiErr { + errs = append(errs, fmt.Errorf("[from Gatherer #%d] %s", i+1, err)) + } + } else { + errs = append(errs, fmt.Errorf("[from Gatherer #%d] %s", i+1, err)) + } + } + for _, mf := range mfs { + existingMF, exists := metricFamiliesByName[mf.GetName()] + if exists { + if existingMF.GetHelp() != mf.GetHelp() { + errs = append(errs, fmt.Errorf( + "gathered metric family %s has help %q but should have %q", + mf.GetName(), mf.GetHelp(), existingMF.GetHelp(), + )) + continue + } + if existingMF.GetType() != mf.GetType() { + errs = append(errs, fmt.Errorf( + "gathered metric family %s has type %s but should have %s", + mf.GetName(), mf.GetType(), existingMF.GetType(), + )) + continue + } + } else { + existingMF = &dto.MetricFamily{} + existingMF.Name = mf.Name + existingMF.Help = mf.Help + existingMF.Type = mf.Type + metricFamiliesByName[mf.GetName()] = existingMF + } + for _, m := range mf.Metric { + if err := checkMetricConsistency(existingMF, m, metricHashes, dimHashes); err != nil { + errs = append(errs, err) + continue + } + existingMF.Metric = append(existingMF.Metric, m) + } + } + } + return normalizeMetricFamilies(metricFamiliesByName), errs.MaybeUnwrap() +} + +// metricSorter is a sortable slice of *dto.Metric. +type metricSorter []*dto.Metric + +func (s metricSorter) Len() int { + return len(s) +} + +func (s metricSorter) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s metricSorter) Less(i, j int) bool { + if len(s[i].Label) != len(s[j].Label) { + // This should not happen. The metrics are + // inconsistent. However, we have to deal with the fact, as + // people might use custom collectors or metric family injection + // to create inconsistent metrics. So let's simply compare the + // number of labels in this case. That will still yield + // reproducible sorting. + return len(s[i].Label) < len(s[j].Label) + } + for n, lp := range s[i].Label { + vi := lp.GetValue() + vj := s[j].Label[n].GetValue() + if vi != vj { + return vi < vj + } + } + + // We should never arrive here. Multiple metrics with the same + // label set in the same scrape will lead to undefined ingestion + // behavior. However, as above, we have to provide stable sorting + // here, even for inconsistent metrics. So sort equal metrics + // by their timestamp, with missing timestamps (implying "now") + // coming last. 
+ if s[i].TimestampMs == nil { + return false + } + if s[j].TimestampMs == nil { + return true + } + return s[i].GetTimestampMs() < s[j].GetTimestampMs() +} + +// normalizeMetricFamilies returns a MetricFamily slice whith empty +// MetricFamilies pruned and the remaining MetricFamilies sorted by name within +// the slice, with the contained Metrics sorted within each MetricFamily. +func normalizeMetricFamilies(metricFamiliesByName map[string]*dto.MetricFamily) []*dto.MetricFamily { + for _, mf := range metricFamiliesByName { + sort.Sort(metricSorter(mf.Metric)) + } + names := make([]string, 0, len(metricFamiliesByName)) + for name, mf := range metricFamiliesByName { + if len(mf.Metric) > 0 { + names = append(names, name) + } + } + sort.Strings(names) + result := make([]*dto.MetricFamily, 0, len(names)) + for _, name := range names { + result = append(result, metricFamiliesByName[name]) + } + return result +} + +// checkMetricConsistency checks if the provided Metric is consistent with the +// provided MetricFamily. It also hashed the Metric labels and the MetricFamily +// name. If the resulting hash is alread in the provided metricHashes, an error +// is returned. If not, it is added to metricHashes. The provided dimHashes maps +// MetricFamily names to their dimHash (hashed sorted label names). If dimHashes +// doesn't yet contain a hash for the provided MetricFamily, it is +// added. Otherwise, an error is returned if the existing dimHashes in not equal +// the calculated dimHash. +func checkMetricConsistency( + metricFamily *dto.MetricFamily, + dtoMetric *dto.Metric, + metricHashes map[uint64]struct{}, + dimHashes map[string]uint64, +) error { + // Type consistency with metric family. + if metricFamily.GetType() == dto.MetricType_GAUGE && dtoMetric.Gauge == nil || + metricFamily.GetType() == dto.MetricType_COUNTER && dtoMetric.Counter == nil || + metricFamily.GetType() == dto.MetricType_SUMMARY && dtoMetric.Summary == nil || + metricFamily.GetType() == dto.MetricType_HISTOGRAM && dtoMetric.Histogram == nil || + metricFamily.GetType() == dto.MetricType_UNTYPED && dtoMetric.Untyped == nil { + return fmt.Errorf( + "collected metric %s %s is not a %s", + metricFamily.GetName(), dtoMetric, metricFamily.GetType(), + ) + } + + // Is the metric unique (i.e. no other metric with the same name and the same label values)? + h := hashNew() + h = hashAdd(h, metricFamily.GetName()) + h = hashAddByte(h, separatorByte) + dh := hashNew() + // Make sure label pairs are sorted. We depend on it for the consistency + // check. + sort.Sort(LabelPairSorter(dtoMetric.Label)) + for _, lp := range dtoMetric.Label { + h = hashAdd(h, lp.GetValue()) + h = hashAddByte(h, separatorByte) + dh = hashAdd(dh, lp.GetName()) + dh = hashAddByte(dh, separatorByte) + } + if _, exists := metricHashes[h]; exists { + return fmt.Errorf( + "collected metric %s %s was collected before with the same name and label values", + metricFamily.GetName(), dtoMetric, + ) + } + if dimHash, ok := dimHashes[metricFamily.GetName()]; ok { + if dimHash != dh { + return fmt.Errorf( + "collected metric %s %s has label dimensions inconsistent with previously collected metrics in the same metric family", + metricFamily.GetName(), dtoMetric, + ) + } + } else { + dimHashes[metricFamily.GetName()] = dh + } + metricHashes[h] = struct{}{} + return nil +} + +func checkDescConsistency( + metricFamily *dto.MetricFamily, + dtoMetric *dto.Metric, + desc *Desc, +) error { + // Desc help consistency with metric family help. 
+ if metricFamily.GetHelp() != desc.help { + return fmt.Errorf( + "collected metric %s %s has help %q but should have %q", + metricFamily.GetName(), dtoMetric, metricFamily.GetHelp(), desc.help, + ) + } + + // Is the desc consistent with the content of the metric? + lpsFromDesc := make([]*dto.LabelPair, 0, len(dtoMetric.Label)) + lpsFromDesc = append(lpsFromDesc, desc.constLabelPairs...) + for _, l := range desc.variableLabels { + lpsFromDesc = append(lpsFromDesc, &dto.LabelPair{ + Name: proto.String(l), + }) + } + if len(lpsFromDesc) != len(dtoMetric.Label) { + return fmt.Errorf( + "labels in collected metric %s %s are inconsistent with descriptor %s", + metricFamily.GetName(), dtoMetric, desc, + ) + } + sort.Sort(LabelPairSorter(lpsFromDesc)) + for i, lpFromDesc := range lpsFromDesc { + lpFromMetric := dtoMetric.Label[i] + if lpFromDesc.GetName() != lpFromMetric.GetName() || + lpFromDesc.Value != nil && lpFromDesc.GetValue() != lpFromMetric.GetValue() { + return fmt.Errorf( + "labels in collected metric %s %s are inconsistent with descriptor %s", + metricFamily.GetName(), dtoMetric, desc, + ) + } + } + return nil +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/summary.go b/vendor/github.com/prometheus/client_golang/prometheus/summary.go new file mode 100644 index 00000000..bce05bf9 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/summary.go @@ -0,0 +1,534 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "fmt" + "math" + "sort" + "sync" + "time" + + "github.com/beorn7/perks/quantile" + "github.com/golang/protobuf/proto" + + dto "github.com/prometheus/client_model/go" +) + +// quantileLabel is used for the label that defines the quantile in a +// summary. +const quantileLabel = "quantile" + +// A Summary captures individual observations from an event or sample stream and +// summarizes them in a manner similar to traditional summary statistics: 1. sum +// of observations, 2. observation count, 3. rank estimations. +// +// A typical use-case is the observation of request latencies. By default, a +// Summary provides the median, the 90th and the 99th percentile of the latency +// as rank estimations. +// +// Note that the rank estimations cannot be aggregated in a meaningful way with +// the Prometheus query language (i.e. you cannot average or add them). If you +// need aggregatable quantiles (e.g. you want the 99th percentile latency of all +// queries served across all instances of a service), consider the Histogram +// metric type. See the Prometheus documentation for more details. +// +// To create Summary instances, use NewSummary. +type Summary interface { + Metric + Collector + + // Observe adds a single observation to the summary. + Observe(float64) +} + +// DefObjectives are the default Summary quantile values. 
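+// For example, with DefObjectives the value reported for the 0.99 quantile is
+// the φ-quantile for some φ between 0.989 and 0.991 of the observations (see
+// the Objectives field of SummaryOpts for the exact semantics).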
+var ( + DefObjectives = map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001} + + errQuantileLabelNotAllowed = fmt.Errorf( + "%q is not allowed as label name in summaries", quantileLabel, + ) +) + +// Default values for SummaryOpts. +const ( + // DefMaxAge is the default duration for which observations stay + // relevant. + DefMaxAge time.Duration = 10 * time.Minute + // DefAgeBuckets is the default number of buckets used to calculate the + // age of observations. + DefAgeBuckets = 5 + // DefBufCap is the standard buffer size for collecting Summary observations. + DefBufCap = 500 +) + +// SummaryOpts bundles the options for creating a Summary metric. It is +// mandatory to set Name and Help to a non-empty string. All other fields are +// optional and can safely be left at their zero value. +type SummaryOpts struct { + // Namespace, Subsystem, and Name are components of the fully-qualified + // name of the Summary (created by joining these components with + // "_"). Only Name is mandatory, the others merely help structuring the + // name. Note that the fully-qualified name of the Summary must be a + // valid Prometheus metric name. + Namespace string + Subsystem string + Name string + + // Help provides information about this Summary. Mandatory! + // + // Metrics with the same fully-qualified name must have the same Help + // string. + Help string + + // ConstLabels are used to attach fixed labels to this + // Summary. Summaries with the same fully-qualified name must have the + // same label names in their ConstLabels. + // + // Note that in most cases, labels have a value that varies during the + // lifetime of a process. Those labels are usually managed with a + // SummaryVec. ConstLabels serve only special purposes. One is for the + // special case where the value of a label does not change during the + // lifetime of a process, e.g. if the revision of the running binary is + // put into a label. Another, more advanced purpose is if more than one + // Collector needs to collect Summaries with the same fully-qualified + // name. In that case, those Summaries must differ in the values of + // their ConstLabels. See the Collector examples. + // + // If the value of a label never changes (not even between binaries), + // that label most likely should not be a label at all (but part of the + // metric name). + ConstLabels Labels + + // Objectives defines the quantile rank estimates with their respective + // absolute error. If Objectives[q] = e, then the value reported + // for q will be the φ-quantile value for some φ between q-e and q+e. + // The default value is DefObjectives. + Objectives map[float64]float64 + + // MaxAge defines the duration for which an observation stays relevant + // for the summary. Must be positive. The default value is DefMaxAge. + MaxAge time.Duration + + // AgeBuckets is the number of buckets used to exclude observations that + // are older than MaxAge from the summary. A higher number has a + // resource penalty, so only increase it if the higher resolution is + // really required. For very high observation rates, you might want to + // reduce the number of age buckets. With only one age bucket, you will + // effectively see a complete reset of the summary each time MaxAge has + // passed. The default value is DefAgeBuckets. + AgeBuckets uint32 + + // BufCap defines the default sample stream buffer size. The default + // value of DefBufCap should suffice for most uses. 
If there is a need + // to increase the value, a multiple of 500 is recommended (because that + // is the internal buffer size of the underlying package + // "github.com/bmizerany/perks/quantile"). + BufCap uint32 +} + +// Great fuck-up with the sliding-window decay algorithm... The Merge method of +// perk/quantile is actually not working as advertised - and it might be +// unfixable, as the underlying algorithm is apparently not capable of merging +// summaries in the first place. To avoid using Merge, we are currently adding +// observations to _each_ age bucket, i.e. the effort to add a sample is +// essentially multiplied by the number of age buckets. When rotating age +// buckets, we empty the previous head stream. On scrape time, we simply take +// the quantiles from the head stream (no merging required). Result: More effort +// on observation time, less effort on scrape time, which is exactly the +// opposite of what we try to accomplish, but at least the results are correct. +// +// The quite elegant previous contraption to merge the age buckets efficiently +// on scrape time (see code up commit 6b9530d72ea715f0ba612c0120e6e09fbf1d49d0) +// can't be used anymore. + +// NewSummary creates a new Summary based on the provided SummaryOpts. +func NewSummary(opts SummaryOpts) Summary { + return newSummary( + NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), + opts, + ) +} + +func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary { + if len(desc.variableLabels) != len(labelValues) { + panic(errInconsistentCardinality) + } + + for _, n := range desc.variableLabels { + if n == quantileLabel { + panic(errQuantileLabelNotAllowed) + } + } + for _, lp := range desc.constLabelPairs { + if lp.GetName() == quantileLabel { + panic(errQuantileLabelNotAllowed) + } + } + + if len(opts.Objectives) == 0 { + opts.Objectives = DefObjectives + } + + if opts.MaxAge < 0 { + panic(fmt.Errorf("illegal max age MaxAge=%v", opts.MaxAge)) + } + if opts.MaxAge == 0 { + opts.MaxAge = DefMaxAge + } + + if opts.AgeBuckets == 0 { + opts.AgeBuckets = DefAgeBuckets + } + + if opts.BufCap == 0 { + opts.BufCap = DefBufCap + } + + s := &summary{ + desc: desc, + + objectives: opts.Objectives, + sortedObjectives: make([]float64, 0, len(opts.Objectives)), + + labelPairs: makeLabelPairs(desc, labelValues), + + hotBuf: make([]float64, 0, opts.BufCap), + coldBuf: make([]float64, 0, opts.BufCap), + streamDuration: opts.MaxAge / time.Duration(opts.AgeBuckets), + } + s.headStreamExpTime = time.Now().Add(s.streamDuration) + s.hotBufExpTime = s.headStreamExpTime + + for i := uint32(0); i < opts.AgeBuckets; i++ { + s.streams = append(s.streams, s.newStream()) + } + s.headStream = s.streams[0] + + for qu := range s.objectives { + s.sortedObjectives = append(s.sortedObjectives, qu) + } + sort.Float64s(s.sortedObjectives) + + s.init(s) // Init self-collection. + return s +} + +type summary struct { + selfCollector + + bufMtx sync.Mutex // Protects hotBuf and hotBufExpTime. + mtx sync.Mutex // Protects every other moving part. + // Lock bufMtx before mtx if both are needed. 
+ + desc *Desc + + objectives map[float64]float64 + sortedObjectives []float64 + + labelPairs []*dto.LabelPair + + sum float64 + cnt uint64 + + hotBuf, coldBuf []float64 + + streams []*quantile.Stream + streamDuration time.Duration + headStream *quantile.Stream + headStreamIdx int + headStreamExpTime, hotBufExpTime time.Time +} + +func (s *summary) Desc() *Desc { + return s.desc +} + +func (s *summary) Observe(v float64) { + s.bufMtx.Lock() + defer s.bufMtx.Unlock() + + now := time.Now() + if now.After(s.hotBufExpTime) { + s.asyncFlush(now) + } + s.hotBuf = append(s.hotBuf, v) + if len(s.hotBuf) == cap(s.hotBuf) { + s.asyncFlush(now) + } +} + +func (s *summary) Write(out *dto.Metric) error { + sum := &dto.Summary{} + qs := make([]*dto.Quantile, 0, len(s.objectives)) + + s.bufMtx.Lock() + s.mtx.Lock() + // Swap bufs even if hotBuf is empty to set new hotBufExpTime. + s.swapBufs(time.Now()) + s.bufMtx.Unlock() + + s.flushColdBuf() + sum.SampleCount = proto.Uint64(s.cnt) + sum.SampleSum = proto.Float64(s.sum) + + for _, rank := range s.sortedObjectives { + var q float64 + if s.headStream.Count() == 0 { + q = math.NaN() + } else { + q = s.headStream.Query(rank) + } + qs = append(qs, &dto.Quantile{ + Quantile: proto.Float64(rank), + Value: proto.Float64(q), + }) + } + + s.mtx.Unlock() + + if len(qs) > 0 { + sort.Sort(quantSort(qs)) + } + sum.Quantile = qs + + out.Summary = sum + out.Label = s.labelPairs + return nil +} + +func (s *summary) newStream() *quantile.Stream { + return quantile.NewTargeted(s.objectives) +} + +// asyncFlush needs bufMtx locked. +func (s *summary) asyncFlush(now time.Time) { + s.mtx.Lock() + s.swapBufs(now) + + // Unblock the original goroutine that was responsible for the mutation + // that triggered the compaction. But hold onto the global non-buffer + // state mutex until the operation finishes. + go func() { + s.flushColdBuf() + s.mtx.Unlock() + }() +} + +// rotateStreams needs mtx AND bufMtx locked. +func (s *summary) maybeRotateStreams() { + for !s.hotBufExpTime.Equal(s.headStreamExpTime) { + s.headStream.Reset() + s.headStreamIdx++ + if s.headStreamIdx >= len(s.streams) { + s.headStreamIdx = 0 + } + s.headStream = s.streams[s.headStreamIdx] + s.headStreamExpTime = s.headStreamExpTime.Add(s.streamDuration) + } +} + +// flushColdBuf needs mtx locked. +func (s *summary) flushColdBuf() { + for _, v := range s.coldBuf { + for _, stream := range s.streams { + stream.Insert(v) + } + s.cnt++ + s.sum += v + } + s.coldBuf = s.coldBuf[0:0] + s.maybeRotateStreams() +} + +// swapBufs needs mtx AND bufMtx locked, coldBuf must be empty. +func (s *summary) swapBufs(now time.Time) { + if len(s.coldBuf) != 0 { + panic("coldBuf is not empty") + } + s.hotBuf, s.coldBuf = s.coldBuf, s.hotBuf + // hotBuf is now empty and gets new expiration set. + for now.After(s.hotBufExpTime) { + s.hotBufExpTime = s.hotBufExpTime.Add(s.streamDuration) + } +} + +type quantSort []*dto.Quantile + +func (s quantSort) Len() int { + return len(s) +} + +func (s quantSort) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s quantSort) Less(i, j int) bool { + return s[i].GetQuantile() < s[j].GetQuantile() +} + +// SummaryVec is a Collector that bundles a set of Summaries that all share the +// same Desc, but have different values for their variable labels. This is used +// if you want to count the same thing partitioned by various dimensions +// (e.g. HTTP request latencies, partitioned by status code and method). Create +// instances with NewSummaryVec. 
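+//
+// A short, hypothetical sketch (metric name, labels, and values are
+// illustrative):
+//
+//    reqDur := prometheus.NewSummaryVec(
+//        prometheus.SummaryOpts{
+//            Name: "http_request_duration_seconds",
+//            Help: "HTTP request latencies in seconds.",
+//        },
+//        []string{"method", "code"},
+//    )
+//    prometheus.MustRegister(reqDur)
+//    reqDur.WithLabelValues("get", "200").Observe(0.21)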
+type SummaryVec struct { + *MetricVec +} + +// NewSummaryVec creates a new SummaryVec based on the provided SummaryOpts and +// partitioned by the given label names. At least one label name must be +// provided. +func NewSummaryVec(opts SummaryOpts, labelNames []string) *SummaryVec { + desc := NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + labelNames, + opts.ConstLabels, + ) + return &SummaryVec{ + MetricVec: newMetricVec(desc, func(lvs ...string) Metric { + return newSummary(desc, opts, lvs...) + }), + } +} + +// GetMetricWithLabelValues replaces the method of the same name in +// MetricVec. The difference is that this method returns a Summary and not a +// Metric so that no type conversion is required. +func (m *SummaryVec) GetMetricWithLabelValues(lvs ...string) (Summary, error) { + metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...) + if metric != nil { + return metric.(Summary), err + } + return nil, err +} + +// GetMetricWith replaces the method of the same name in MetricVec. The +// difference is that this method returns a Summary and not a Metric so that no +// type conversion is required. +func (m *SummaryVec) GetMetricWith(labels Labels) (Summary, error) { + metric, err := m.MetricVec.GetMetricWith(labels) + if metric != nil { + return metric.(Summary), err + } + return nil, err +} + +// WithLabelValues works as GetMetricWithLabelValues, but panics where +// GetMetricWithLabelValues would have returned an error. By not returning an +// error, WithLabelValues allows shortcuts like +// myVec.WithLabelValues("404", "GET").Observe(42.21) +func (m *SummaryVec) WithLabelValues(lvs ...string) Summary { + return m.MetricVec.WithLabelValues(lvs...).(Summary) +} + +// With works as GetMetricWith, but panics where GetMetricWithLabels would have +// returned an error. By not returning an error, With allows shortcuts like +// myVec.With(Labels{"code": "404", "method": "GET"}).Observe(42.21) +func (m *SummaryVec) With(labels Labels) Summary { + return m.MetricVec.With(labels).(Summary) +} + +type constSummary struct { + desc *Desc + count uint64 + sum float64 + quantiles map[float64]float64 + labelPairs []*dto.LabelPair +} + +func (s *constSummary) Desc() *Desc { + return s.desc +} + +func (s *constSummary) Write(out *dto.Metric) error { + sum := &dto.Summary{} + qs := make([]*dto.Quantile, 0, len(s.quantiles)) + + sum.SampleCount = proto.Uint64(s.count) + sum.SampleSum = proto.Float64(s.sum) + + for rank, q := range s.quantiles { + qs = append(qs, &dto.Quantile{ + Quantile: proto.Float64(rank), + Value: proto.Float64(q), + }) + } + + if len(qs) > 0 { + sort.Sort(quantSort(qs)) + } + sum.Quantile = qs + + out.Summary = sum + out.Label = s.labelPairs + + return nil +} + +// NewConstSummary returns a metric representing a Prometheus summary with fixed +// values for the count, sum, and quantiles. As those parameters cannot be +// changed, the returned value does not implement the Summary interface (but +// only the Metric interface). Users of this package will not have much use for +// it in regular operations. However, when implementing custom Collectors, it is +// useful as a throw-away metric that is generated on the fly to send it to +// Prometheus in the Collect method. +// +// quantiles maps ranks to quantile values. 
For example, a median latency of +// 0.23s and a 99th percentile latency of 0.56s would be expressed as: +// map[float64]float64{0.5: 0.23, 0.99: 0.56} +// +// NewConstSummary returns an error if the length of labelValues is not +// consistent with the variable labels in Desc. +func NewConstSummary( + desc *Desc, + count uint64, + sum float64, + quantiles map[float64]float64, + labelValues ...string, +) (Metric, error) { + if len(desc.variableLabels) != len(labelValues) { + return nil, errInconsistentCardinality + } + return &constSummary{ + desc: desc, + count: count, + sum: sum, + quantiles: quantiles, + labelPairs: makeLabelPairs(desc, labelValues), + }, nil +} + +// MustNewConstSummary is a version of NewConstSummary that panics where +// NewConstMetric would have returned an error. +func MustNewConstSummary( + desc *Desc, + count uint64, + sum float64, + quantiles map[float64]float64, + labelValues ...string, +) Metric { + m, err := NewConstSummary(desc, count, sum, quantiles, labelValues...) + if err != nil { + panic(err) + } + return m +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/untyped.go b/vendor/github.com/prometheus/client_golang/prometheus/untyped.go new file mode 100644 index 00000000..5faf7e6e --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/untyped.go @@ -0,0 +1,138 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +// Untyped is a Metric that represents a single numerical value that can +// arbitrarily go up and down. +// +// An Untyped metric works the same as a Gauge. The only difference is that to +// no type information is implied. +// +// To create Untyped instances, use NewUntyped. +type Untyped interface { + Metric + Collector + + // Set sets the Untyped metric to an arbitrary value. + Set(float64) + // Inc increments the Untyped metric by 1. + Inc() + // Dec decrements the Untyped metric by 1. + Dec() + // Add adds the given value to the Untyped metric. (The value can be + // negative, resulting in a decrease.) + Add(float64) + // Sub subtracts the given value from the Untyped metric. (The value can + // be negative, resulting in an increase.) + Sub(float64) +} + +// UntypedOpts is an alias for Opts. See there for doc comments. +type UntypedOpts Opts + +// NewUntyped creates a new Untyped metric from the provided UntypedOpts. +func NewUntyped(opts UntypedOpts) Untyped { + return newValue(NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), UntypedValue, 0) +} + +// UntypedVec is a Collector that bundles a set of Untyped metrics that all +// share the same Desc, but have different values for their variable +// labels. This is used if you want to count the same thing partitioned by +// various dimensions. Create instances with NewUntypedVec. 
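For illustration (a sketch, not part of the vendored source; the metric names are assumed), an Untyped metric and an UntypedVec as documented here might be used as follows:

package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	// A plain Untyped metric behaves like a Gauge but implies no type.
	queueDepth := prometheus.NewUntyped(prometheus.UntypedOpts{
		Name: "worker_queue_depth",
		Help: "Current depth of the worker queue.",
	})
	prometheus.MustRegister(queueDepth)
	queueDepth.Set(42)
	queueDepth.Inc()

	// An UntypedVec adds variable labels, as described above.
	perQueue := prometheus.NewUntypedVec(
		prometheus.UntypedOpts{
			Name: "queue_depth_by_name",
			Help: "Queue depth partitioned by queue name.",
		},
		[]string{"queue"},
	)
	prometheus.MustRegister(perQueue)
	perQueue.WithLabelValues("email").Add(3)
}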
+type UntypedVec struct { + *MetricVec +} + +// NewUntypedVec creates a new UntypedVec based on the provided UntypedOpts and +// partitioned by the given label names. At least one label name must be +// provided. +func NewUntypedVec(opts UntypedOpts, labelNames []string) *UntypedVec { + desc := NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + labelNames, + opts.ConstLabels, + ) + return &UntypedVec{ + MetricVec: newMetricVec(desc, func(lvs ...string) Metric { + return newValue(desc, UntypedValue, 0, lvs...) + }), + } +} + +// GetMetricWithLabelValues replaces the method of the same name in +// MetricVec. The difference is that this method returns an Untyped and not a +// Metric so that no type conversion is required. +func (m *UntypedVec) GetMetricWithLabelValues(lvs ...string) (Untyped, error) { + metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...) + if metric != nil { + return metric.(Untyped), err + } + return nil, err +} + +// GetMetricWith replaces the method of the same name in MetricVec. The +// difference is that this method returns an Untyped and not a Metric so that no +// type conversion is required. +func (m *UntypedVec) GetMetricWith(labels Labels) (Untyped, error) { + metric, err := m.MetricVec.GetMetricWith(labels) + if metric != nil { + return metric.(Untyped), err + } + return nil, err +} + +// WithLabelValues works as GetMetricWithLabelValues, but panics where +// GetMetricWithLabelValues would have returned an error. By not returning an +// error, WithLabelValues allows shortcuts like +// myVec.WithLabelValues("404", "GET").Add(42) +func (m *UntypedVec) WithLabelValues(lvs ...string) Untyped { + return m.MetricVec.WithLabelValues(lvs...).(Untyped) +} + +// With works as GetMetricWith, but panics where GetMetricWithLabels would have +// returned an error. By not returning an error, With allows shortcuts like +// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42) +func (m *UntypedVec) With(labels Labels) Untyped { + return m.MetricVec.With(labels).(Untyped) +} + +// UntypedFunc is an Untyped whose value is determined at collect time by +// calling a provided function. +// +// To create UntypedFunc instances, use NewUntypedFunc. +type UntypedFunc interface { + Metric + Collector +} + +// NewUntypedFunc creates a new UntypedFunc based on the provided +// UntypedOpts. The value reported is determined by calling the given function +// from within the Write method. Take into account that metric collection may +// happen concurrently. If that results in concurrent calls to Write, like in +// the case where an UntypedFunc is directly registered with Prometheus, the +// provided function must be concurrency-safe. +func NewUntypedFunc(opts UntypedOpts, function func() float64) UntypedFunc { + return newValueFunc(NewDesc( + BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), + opts.Help, + nil, + opts.ConstLabels, + ), UntypedValue, function) +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/value.go b/vendor/github.com/prometheus/client_golang/prometheus/value.go new file mode 100644 index 00000000..a944c377 --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/value.go @@ -0,0 +1,234 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "errors" + "fmt" + "math" + "sort" + "sync/atomic" + + dto "github.com/prometheus/client_model/go" + + "github.com/golang/protobuf/proto" +) + +// ValueType is an enumeration of metric types that represent a simple value. +type ValueType int + +// Possible values for the ValueType enum. +const ( + _ ValueType = iota + CounterValue + GaugeValue + UntypedValue +) + +var errInconsistentCardinality = errors.New("inconsistent label cardinality") + +// value is a generic metric for simple values. It implements Metric, Collector, +// Counter, Gauge, and Untyped. Its effective type is determined by +// ValueType. This is a low-level building block used by the library to back the +// implementations of Counter, Gauge, and Untyped. +type value struct { + // valBits containst the bits of the represented float64 value. It has + // to go first in the struct to guarantee alignment for atomic + // operations. http://golang.org/pkg/sync/atomic/#pkg-note-BUG + valBits uint64 + + selfCollector + + desc *Desc + valType ValueType + labelPairs []*dto.LabelPair +} + +// newValue returns a newly allocated value with the given Desc, ValueType, +// sample value and label values. It panics if the number of label +// values is different from the number of variable labels in Desc. +func newValue(desc *Desc, valueType ValueType, val float64, labelValues ...string) *value { + if len(labelValues) != len(desc.variableLabels) { + panic(errInconsistentCardinality) + } + result := &value{ + desc: desc, + valType: valueType, + valBits: math.Float64bits(val), + labelPairs: makeLabelPairs(desc, labelValues), + } + result.init(result) + return result +} + +func (v *value) Desc() *Desc { + return v.desc +} + +func (v *value) Set(val float64) { + atomic.StoreUint64(&v.valBits, math.Float64bits(val)) +} + +func (v *value) Inc() { + v.Add(1) +} + +func (v *value) Dec() { + v.Add(-1) +} + +func (v *value) Add(val float64) { + for { + oldBits := atomic.LoadUint64(&v.valBits) + newBits := math.Float64bits(math.Float64frombits(oldBits) + val) + if atomic.CompareAndSwapUint64(&v.valBits, oldBits, newBits) { + return + } + } +} + +func (v *value) Sub(val float64) { + v.Add(val * -1) +} + +func (v *value) Write(out *dto.Metric) error { + val := math.Float64frombits(atomic.LoadUint64(&v.valBits)) + return populateMetric(v.valType, val, v.labelPairs, out) +} + +// valueFunc is a generic metric for simple values retrieved on collect time +// from a function. It implements Metric and Collector. Its effective type is +// determined by ValueType. This is a low-level building block used by the +// library to back the implementations of CounterFunc, GaugeFunc, and +// UntypedFunc. +type valueFunc struct { + selfCollector + + desc *Desc + valType ValueType + function func() float64 + labelPairs []*dto.LabelPair +} + +// newValueFunc returns a newly allocated valueFunc with the given Desc and +// ValueType. The value reported is determined by calling the given function +// from within the Write method. Take into account that metric collection may +// happen concurrently. 
If that results in concurrent calls to Write, like in +// the case where a valueFunc is directly registered with Prometheus, the +// provided function must be concurrency-safe. +func newValueFunc(desc *Desc, valueType ValueType, function func() float64) *valueFunc { + result := &valueFunc{ + desc: desc, + valType: valueType, + function: function, + labelPairs: makeLabelPairs(desc, nil), + } + result.init(result) + return result +} + +func (v *valueFunc) Desc() *Desc { + return v.desc +} + +func (v *valueFunc) Write(out *dto.Metric) error { + return populateMetric(v.valType, v.function(), v.labelPairs, out) +} + +// NewConstMetric returns a metric with one fixed value that cannot be +// changed. Users of this package will not have much use for it in regular +// operations. However, when implementing custom Collectors, it is useful as a +// throw-away metric that is generated on the fly to send it to Prometheus in +// the Collect method. NewConstMetric returns an error if the length of +// labelValues is not consistent with the variable labels in Desc. +func NewConstMetric(desc *Desc, valueType ValueType, value float64, labelValues ...string) (Metric, error) { + if len(desc.variableLabels) != len(labelValues) { + return nil, errInconsistentCardinality + } + return &constMetric{ + desc: desc, + valType: valueType, + val: value, + labelPairs: makeLabelPairs(desc, labelValues), + }, nil +} + +// MustNewConstMetric is a version of NewConstMetric that panics where +// NewConstMetric would have returned an error. +func MustNewConstMetric(desc *Desc, valueType ValueType, value float64, labelValues ...string) Metric { + m, err := NewConstMetric(desc, valueType, value, labelValues...) + if err != nil { + panic(err) + } + return m +} + +type constMetric struct { + desc *Desc + valType ValueType + val float64 + labelPairs []*dto.LabelPair +} + +func (m *constMetric) Desc() *Desc { + return m.desc +} + +func (m *constMetric) Write(out *dto.Metric) error { + return populateMetric(m.valType, m.val, m.labelPairs, out) +} + +func populateMetric( + t ValueType, + v float64, + labelPairs []*dto.LabelPair, + m *dto.Metric, +) error { + m.Label = labelPairs + switch t { + case CounterValue: + m.Counter = &dto.Counter{Value: proto.Float64(v)} + case GaugeValue: + m.Gauge = &dto.Gauge{Value: proto.Float64(v)} + case UntypedValue: + m.Untyped = &dto.Untyped{Value: proto.Float64(v)} + default: + return fmt.Errorf("encountered unknown type %v", t) + } + return nil +} + +func makeLabelPairs(desc *Desc, labelValues []string) []*dto.LabelPair { + totalLen := len(desc.variableLabels) + len(desc.constLabelPairs) + if totalLen == 0 { + // Super fast path. + return nil + } + if len(desc.variableLabels) == 0 { + // Moderately fast path. 
+ return desc.constLabelPairs + } + labelPairs := make([]*dto.LabelPair, 0, totalLen) + for i, n := range desc.variableLabels { + labelPairs = append(labelPairs, &dto.LabelPair{ + Name: proto.String(n), + Value: proto.String(labelValues[i]), + }) + } + for _, lp := range desc.constLabelPairs { + labelPairs = append(labelPairs, lp) + } + sort.Sort(LabelPairSorter(labelPairs)) + return labelPairs +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/vec.go b/vendor/github.com/prometheus/client_golang/prometheus/vec.go new file mode 100644 index 00000000..7f3eef9a --- /dev/null +++ b/vendor/github.com/prometheus/client_golang/prometheus/vec.go @@ -0,0 +1,404 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +import ( + "fmt" + "sync" + + "github.com/prometheus/common/model" +) + +// MetricVec is a Collector to bundle metrics of the same name that +// differ in their label values. MetricVec is usually not used directly but as a +// building block for implementations of vectors of a given metric +// type. GaugeVec, CounterVec, SummaryVec, and UntypedVec are examples already +// provided in this package. +type MetricVec struct { + mtx sync.RWMutex // Protects the children. + children map[uint64][]metricWithLabelValues + desc *Desc + + newMetric func(labelValues ...string) Metric + hashAdd func(h uint64, s string) uint64 // replace hash function for testing collision handling + hashAddByte func(h uint64, b byte) uint64 +} + +// newMetricVec returns an initialized MetricVec. The concrete value is +// returned for embedding into another struct. +func newMetricVec(desc *Desc, newMetric func(lvs ...string) Metric) *MetricVec { + return &MetricVec{ + children: map[uint64][]metricWithLabelValues{}, + desc: desc, + newMetric: newMetric, + hashAdd: hashAdd, + hashAddByte: hashAddByte, + } +} + +// metricWithLabelValues provides the metric and its label values for +// disambiguation on hash collision. +type metricWithLabelValues struct { + values []string + metric Metric +} + +// Describe implements Collector. The length of the returned slice +// is always one. +func (m *MetricVec) Describe(ch chan<- *Desc) { + ch <- m.desc +} + +// Collect implements Collector. +func (m *MetricVec) Collect(ch chan<- Metric) { + m.mtx.RLock() + defer m.mtx.RUnlock() + + for _, metrics := range m.children { + for _, metric := range metrics { + ch <- metric.metric + } + } +} + +// GetMetricWithLabelValues returns the Metric for the given slice of label +// values (same order as the VariableLabels in Desc). If that combination of +// label values is accessed for the first time, a new Metric is created. +// +// It is possible to call this method without using the returned Metric to only +// create the new Metric but leave it at its start value (e.g. a Summary or +// Histogram without any observations). See also the SummaryVec example. 
+// +// Keeping the Metric for later use is possible (and should be considered if +// performance is critical), but keep in mind that Reset, DeleteLabelValues and +// Delete can be used to delete the Metric from the MetricVec. In that case, the +// Metric will still exist, but it will not be exported anymore, even if a +// Metric with the same label values is created later. See also the CounterVec +// example. +// +// An error is returned if the number of label values is not the same as the +// number of VariableLabels in Desc. +// +// Note that for more than one label value, this method is prone to mistakes +// caused by an incorrect order of arguments. Consider GetMetricWith(Labels) as +// an alternative to avoid that type of mistake. For higher label numbers, the +// latter has a much more readable (albeit more verbose) syntax, but it comes +// with a performance overhead (for creating and processing the Labels map). +// See also the GaugeVec example. +func (m *MetricVec) GetMetricWithLabelValues(lvs ...string) (Metric, error) { + h, err := m.hashLabelValues(lvs) + if err != nil { + return nil, err + } + + return m.getOrCreateMetricWithLabelValues(h, lvs), nil +} + +// GetMetricWith returns the Metric for the given Labels map (the label names +// must match those of the VariableLabels in Desc). If that label map is +// accessed for the first time, a new Metric is created. Implications of +// creating a Metric without using it and keeping the Metric for later use are +// the same as for GetMetricWithLabelValues. +// +// An error is returned if the number and names of the Labels are inconsistent +// with those of the VariableLabels in Desc. +// +// This method is used for the same purpose as +// GetMetricWithLabelValues(...string). See there for pros and cons of the two +// methods. +func (m *MetricVec) GetMetricWith(labels Labels) (Metric, error) { + h, err := m.hashLabels(labels) + if err != nil { + return nil, err + } + + return m.getOrCreateMetricWithLabels(h, labels), nil +} + +// WithLabelValues works as GetMetricWithLabelValues, but panics if an error +// occurs. The method allows neat syntax like: +// httpReqs.WithLabelValues("404", "POST").Inc() +func (m *MetricVec) WithLabelValues(lvs ...string) Metric { + metric, err := m.GetMetricWithLabelValues(lvs...) + if err != nil { + panic(err) + } + return metric +} + +// With works as GetMetricWith, but panics if an error occurs. The method allows +// neat syntax like: +// httpReqs.With(Labels{"status":"404", "method":"POST"}).Inc() +func (m *MetricVec) With(labels Labels) Metric { + metric, err := m.GetMetricWith(labels) + if err != nil { + panic(err) + } + return metric +} + +// DeleteLabelValues removes the metric where the variable labels are the same +// as those passed in as labels (same order as the VariableLabels in Desc). It +// returns true if a metric was deleted. +// +// It is not an error if the number of label values is not the same as the +// number of VariableLabels in Desc. However, such inconsistent label count can +// never match an actual Metric, so the method will always return false in that +// case. +// +// Note that for more than one label value, this method is prone to mistakes +// caused by an incorrect order of arguments. Consider Delete(Labels) as an +// alternative to avoid that type of mistake. For higher label numbers, the +// latter has a much more readable (albeit more verbose) syntax, but it comes +// with a performance overhead (for creating and processing the Labels map). 
+// See also the CounterVec example. +func (m *MetricVec) DeleteLabelValues(lvs ...string) bool { + m.mtx.Lock() + defer m.mtx.Unlock() + + h, err := m.hashLabelValues(lvs) + if err != nil { + return false + } + return m.deleteByHashWithLabelValues(h, lvs) +} + +// Delete deletes the metric where the variable labels are the same as those +// passed in as labels. It returns true if a metric was deleted. +// +// It is not an error if the number and names of the Labels are inconsistent +// with those of the VariableLabels in the Desc of the MetricVec. However, such +// inconsistent Labels can never match an actual Metric, so the method will +// always return false in that case. +// +// This method is used for the same purpose as DeleteLabelValues(...string). See +// there for pros and cons of the two methods. +func (m *MetricVec) Delete(labels Labels) bool { + m.mtx.Lock() + defer m.mtx.Unlock() + + h, err := m.hashLabels(labels) + if err != nil { + return false + } + + return m.deleteByHashWithLabels(h, labels) +} + +// deleteByHashWithLabelValues removes the metric from the hash bucket h. If +// there are multiple matches in the bucket, use lvs to select a metric and +// remove only that metric. +func (m *MetricVec) deleteByHashWithLabelValues(h uint64, lvs []string) bool { + metrics, ok := m.children[h] + if !ok { + return false + } + + i := m.findMetricWithLabelValues(metrics, lvs) + if i >= len(metrics) { + return false + } + + if len(metrics) > 1 { + m.children[h] = append(metrics[:i], metrics[i+1:]...) + } else { + delete(m.children, h) + } + return true +} + +// deleteByHashWithLabels removes the metric from the hash bucket h. If there +// are multiple matches in the bucket, use lvs to select a metric and remove +// only that metric. +func (m *MetricVec) deleteByHashWithLabels(h uint64, labels Labels) bool { + metrics, ok := m.children[h] + if !ok { + return false + } + i := m.findMetricWithLabels(metrics, labels) + if i >= len(metrics) { + return false + } + + if len(metrics) > 1 { + m.children[h] = append(metrics[:i], metrics[i+1:]...) + } else { + delete(m.children, h) + } + return true +} + +// Reset deletes all metrics in this vector. +func (m *MetricVec) Reset() { + m.mtx.Lock() + defer m.mtx.Unlock() + + for h := range m.children { + delete(m.children, h) + } +} + +func (m *MetricVec) hashLabelValues(vals []string) (uint64, error) { + if len(vals) != len(m.desc.variableLabels) { + return 0, errInconsistentCardinality + } + h := hashNew() + for _, val := range vals { + h = m.hashAdd(h, val) + h = m.hashAddByte(h, model.SeparatorByte) + } + return h, nil +} + +func (m *MetricVec) hashLabels(labels Labels) (uint64, error) { + if len(labels) != len(m.desc.variableLabels) { + return 0, errInconsistentCardinality + } + h := hashNew() + for _, label := range m.desc.variableLabels { + val, ok := labels[label] + if !ok { + return 0, fmt.Errorf("label name %q missing in label map", label) + } + h = m.hashAdd(h, val) + h = m.hashAddByte(h, model.SeparatorByte) + } + return h, nil +} + +// getOrCreateMetricWithLabelValues retrieves the metric by hash and label value +// or creates it and returns the new one. +// +// This function holds the mutex. 
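The method below uses double-checked locking: an optimistic lookup under the read lock, then a re-check under the write lock before creating, so concurrent callers never create duplicate children for the same hash. A simplified standalone sketch of the same pattern (the cache type and names are illustrative, not part of this package):

package example

import "sync"

type cache struct {
	mtx   sync.RWMutex
	items map[uint64]string
}

func newCache() *cache {
	return &cache{items: make(map[uint64]string)}
}

// getOrCreate returns the value for key, creating it at most once even when
// called concurrently.
func (c *cache) getOrCreate(key uint64, build func() string) string {
	c.mtx.RLock()
	v, ok := c.items[key]
	c.mtx.RUnlock()
	if ok {
		return v
	}

	c.mtx.Lock()
	defer c.mtx.Unlock()
	// Re-check: another goroutine may have created the entry while we were
	// waiting for the write lock.
	if v, ok := c.items[key]; ok {
		return v
	}
	v = build()
	c.items[key] = v
	return v
}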
+func (m *MetricVec) getOrCreateMetricWithLabelValues(hash uint64, lvs []string) Metric { + m.mtx.RLock() + metric, ok := m.getMetricWithLabelValues(hash, lvs) + m.mtx.RUnlock() + if ok { + return metric + } + + m.mtx.Lock() + defer m.mtx.Unlock() + metric, ok = m.getMetricWithLabelValues(hash, lvs) + if !ok { + // Copy to avoid allocation in case wo don't go down this code path. + copiedLVs := make([]string, len(lvs)) + copy(copiedLVs, lvs) + metric = m.newMetric(copiedLVs...) + m.children[hash] = append(m.children[hash], metricWithLabelValues{values: copiedLVs, metric: metric}) + } + return metric +} + +// getOrCreateMetricWithLabelValues retrieves the metric by hash and label value +// or creates it and returns the new one. +// +// This function holds the mutex. +func (m *MetricVec) getOrCreateMetricWithLabels(hash uint64, labels Labels) Metric { + m.mtx.RLock() + metric, ok := m.getMetricWithLabels(hash, labels) + m.mtx.RUnlock() + if ok { + return metric + } + + m.mtx.Lock() + defer m.mtx.Unlock() + metric, ok = m.getMetricWithLabels(hash, labels) + if !ok { + lvs := m.extractLabelValues(labels) + metric = m.newMetric(lvs...) + m.children[hash] = append(m.children[hash], metricWithLabelValues{values: lvs, metric: metric}) + } + return metric +} + +// getMetricWithLabelValues gets a metric while handling possible collisions in +// the hash space. Must be called while holding read mutex. +func (m *MetricVec) getMetricWithLabelValues(h uint64, lvs []string) (Metric, bool) { + metrics, ok := m.children[h] + if ok { + if i := m.findMetricWithLabelValues(metrics, lvs); i < len(metrics) { + return metrics[i].metric, true + } + } + return nil, false +} + +// getMetricWithLabels gets a metric while handling possible collisions in +// the hash space. Must be called while holding read mutex. +func (m *MetricVec) getMetricWithLabels(h uint64, labels Labels) (Metric, bool) { + metrics, ok := m.children[h] + if ok { + if i := m.findMetricWithLabels(metrics, labels); i < len(metrics) { + return metrics[i].metric, true + } + } + return nil, false +} + +// findMetricWithLabelValues returns the index of the matching metric or +// len(metrics) if not found. +func (m *MetricVec) findMetricWithLabelValues(metrics []metricWithLabelValues, lvs []string) int { + for i, metric := range metrics { + if m.matchLabelValues(metric.values, lvs) { + return i + } + } + return len(metrics) +} + +// findMetricWithLabels returns the index of the matching metric or len(metrics) +// if not found. 
+func (m *MetricVec) findMetricWithLabels(metrics []metricWithLabelValues, labels Labels) int { + for i, metric := range metrics { + if m.matchLabels(metric.values, labels) { + return i + } + } + return len(metrics) +} + +func (m *MetricVec) matchLabelValues(values []string, lvs []string) bool { + if len(values) != len(lvs) { + return false + } + for i, v := range values { + if v != lvs[i] { + return false + } + } + return true +} + +func (m *MetricVec) matchLabels(values []string, labels Labels) bool { + if len(labels) != len(values) { + return false + } + for i, k := range m.desc.variableLabels { + if values[i] != labels[k] { + return false + } + } + return true +} + +func (m *MetricVec) extractLabelValues(labels Labels) []string { + labelValues := make([]string, len(labels)) + for i, k := range m.desc.variableLabels { + labelValues[i] = labels[k] + } + return labelValues +} diff --git a/vendor/github.com/prometheus/client_model/LICENSE b/vendor/github.com/prometheus/client_model/LICENSE new file mode 100644 index 00000000..261eeb9e --- /dev/null +++ b/vendor/github.com/prometheus/client_model/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/prometheus/client_model/NOTICE b/vendor/github.com/prometheus/client_model/NOTICE new file mode 100644 index 00000000..20110e41 --- /dev/null +++ b/vendor/github.com/prometheus/client_model/NOTICE @@ -0,0 +1,5 @@ +Data model artifacts for Prometheus. +Copyright 2012-2015 The Prometheus Authors + +This product includes software developed at +SoundCloud Ltd. (http://soundcloud.com/). diff --git a/vendor/github.com/prometheus/client_model/go/metrics.pb.go b/vendor/github.com/prometheus/client_model/go/metrics.pb.go new file mode 100644 index 00000000..b065f868 --- /dev/null +++ b/vendor/github.com/prometheus/client_model/go/metrics.pb.go @@ -0,0 +1,364 @@ +// Code generated by protoc-gen-go. +// source: metrics.proto +// DO NOT EDIT! + +/* +Package io_prometheus_client is a generated protocol buffer package. 
+ +It is generated from these files: + metrics.proto + +It has these top-level messages: + LabelPair + Gauge + Counter + Quantile + Summary + Untyped + Histogram + Bucket + Metric + MetricFamily +*/ +package io_prometheus_client + +import proto "github.com/golang/protobuf/proto" +import math "math" + +// Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = math.Inf + +type MetricType int32 + +const ( + MetricType_COUNTER MetricType = 0 + MetricType_GAUGE MetricType = 1 + MetricType_SUMMARY MetricType = 2 + MetricType_UNTYPED MetricType = 3 + MetricType_HISTOGRAM MetricType = 4 +) + +var MetricType_name = map[int32]string{ + 0: "COUNTER", + 1: "GAUGE", + 2: "SUMMARY", + 3: "UNTYPED", + 4: "HISTOGRAM", +} +var MetricType_value = map[string]int32{ + "COUNTER": 0, + "GAUGE": 1, + "SUMMARY": 2, + "UNTYPED": 3, + "HISTOGRAM": 4, +} + +func (x MetricType) Enum() *MetricType { + p := new(MetricType) + *p = x + return p +} +func (x MetricType) String() string { + return proto.EnumName(MetricType_name, int32(x)) +} +func (x *MetricType) UnmarshalJSON(data []byte) error { + value, err := proto.UnmarshalJSONEnum(MetricType_value, data, "MetricType") + if err != nil { + return err + } + *x = MetricType(value) + return nil +} + +type LabelPair struct { + Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` + Value *string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *LabelPair) Reset() { *m = LabelPair{} } +func (m *LabelPair) String() string { return proto.CompactTextString(m) } +func (*LabelPair) ProtoMessage() {} + +func (m *LabelPair) GetName() string { + if m != nil && m.Name != nil { + return *m.Name + } + return "" +} + +func (m *LabelPair) GetValue() string { + if m != nil && m.Value != nil { + return *m.Value + } + return "" +} + +type Gauge struct { + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Gauge) Reset() { *m = Gauge{} } +func (m *Gauge) String() string { return proto.CompactTextString(m) } +func (*Gauge) ProtoMessage() {} + +func (m *Gauge) GetValue() float64 { + if m != nil && m.Value != nil { + return *m.Value + } + return 0 +} + +type Counter struct { + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Counter) Reset() { *m = Counter{} } +func (m *Counter) String() string { return proto.CompactTextString(m) } +func (*Counter) ProtoMessage() {} + +func (m *Counter) GetValue() float64 { + if m != nil && m.Value != nil { + return *m.Value + } + return 0 +} + +type Quantile struct { + Quantile *float64 `protobuf:"fixed64,1,opt,name=quantile" json:"quantile,omitempty"` + Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Quantile) Reset() { *m = Quantile{} } +func (m *Quantile) String() string { return proto.CompactTextString(m) } +func (*Quantile) ProtoMessage() {} + +func (m *Quantile) GetQuantile() float64 { + if m != nil && m.Quantile != nil { + return *m.Quantile + } + return 0 +} + +func (m *Quantile) GetValue() float64 { + if m != nil && m.Value != nil { + return *m.Value + } + return 0 +} + +type Summary struct { + SampleCount *uint64 `protobuf:"varint,1,opt,name=sample_count" json:"sample_count,omitempty"` + SampleSum *float64 `protobuf:"fixed64,2,opt,name=sample_sum" json:"sample_sum,omitempty"` + Quantile []*Quantile 
`protobuf:"bytes,3,rep,name=quantile" json:"quantile,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Summary) Reset() { *m = Summary{} } +func (m *Summary) String() string { return proto.CompactTextString(m) } +func (*Summary) ProtoMessage() {} + +func (m *Summary) GetSampleCount() uint64 { + if m != nil && m.SampleCount != nil { + return *m.SampleCount + } + return 0 +} + +func (m *Summary) GetSampleSum() float64 { + if m != nil && m.SampleSum != nil { + return *m.SampleSum + } + return 0 +} + +func (m *Summary) GetQuantile() []*Quantile { + if m != nil { + return m.Quantile + } + return nil +} + +type Untyped struct { + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Untyped) Reset() { *m = Untyped{} } +func (m *Untyped) String() string { return proto.CompactTextString(m) } +func (*Untyped) ProtoMessage() {} + +func (m *Untyped) GetValue() float64 { + if m != nil && m.Value != nil { + return *m.Value + } + return 0 +} + +type Histogram struct { + SampleCount *uint64 `protobuf:"varint,1,opt,name=sample_count" json:"sample_count,omitempty"` + SampleSum *float64 `protobuf:"fixed64,2,opt,name=sample_sum" json:"sample_sum,omitempty"` + Bucket []*Bucket `protobuf:"bytes,3,rep,name=bucket" json:"bucket,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Histogram) Reset() { *m = Histogram{} } +func (m *Histogram) String() string { return proto.CompactTextString(m) } +func (*Histogram) ProtoMessage() {} + +func (m *Histogram) GetSampleCount() uint64 { + if m != nil && m.SampleCount != nil { + return *m.SampleCount + } + return 0 +} + +func (m *Histogram) GetSampleSum() float64 { + if m != nil && m.SampleSum != nil { + return *m.SampleSum + } + return 0 +} + +func (m *Histogram) GetBucket() []*Bucket { + if m != nil { + return m.Bucket + } + return nil +} + +type Bucket struct { + CumulativeCount *uint64 `protobuf:"varint,1,opt,name=cumulative_count" json:"cumulative_count,omitempty"` + UpperBound *float64 `protobuf:"fixed64,2,opt,name=upper_bound" json:"upper_bound,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Bucket) Reset() { *m = Bucket{} } +func (m *Bucket) String() string { return proto.CompactTextString(m) } +func (*Bucket) ProtoMessage() {} + +func (m *Bucket) GetCumulativeCount() uint64 { + if m != nil && m.CumulativeCount != nil { + return *m.CumulativeCount + } + return 0 +} + +func (m *Bucket) GetUpperBound() float64 { + if m != nil && m.UpperBound != nil { + return *m.UpperBound + } + return 0 +} + +type Metric struct { + Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"` + Gauge *Gauge `protobuf:"bytes,2,opt,name=gauge" json:"gauge,omitempty"` + Counter *Counter `protobuf:"bytes,3,opt,name=counter" json:"counter,omitempty"` + Summary *Summary `protobuf:"bytes,4,opt,name=summary" json:"summary,omitempty"` + Untyped *Untyped `protobuf:"bytes,5,opt,name=untyped" json:"untyped,omitempty"` + Histogram *Histogram `protobuf:"bytes,7,opt,name=histogram" json:"histogram,omitempty"` + TimestampMs *int64 `protobuf:"varint,6,opt,name=timestamp_ms" json:"timestamp_ms,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *Metric) Reset() { *m = Metric{} } +func (m *Metric) String() string { return proto.CompactTextString(m) } +func (*Metric) ProtoMessage() {} + +func (m *Metric) GetLabel() []*LabelPair { + if m != nil { + return m.Label + } + return nil +} + +func (m *Metric) GetGauge() *Gauge { + if m != nil { + return m.Gauge + } + 
return nil +} + +func (m *Metric) GetCounter() *Counter { + if m != nil { + return m.Counter + } + return nil +} + +func (m *Metric) GetSummary() *Summary { + if m != nil { + return m.Summary + } + return nil +} + +func (m *Metric) GetUntyped() *Untyped { + if m != nil { + return m.Untyped + } + return nil +} + +func (m *Metric) GetHistogram() *Histogram { + if m != nil { + return m.Histogram + } + return nil +} + +func (m *Metric) GetTimestampMs() int64 { + if m != nil && m.TimestampMs != nil { + return *m.TimestampMs + } + return 0 +} + +type MetricFamily struct { + Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` + Help *string `protobuf:"bytes,2,opt,name=help" json:"help,omitempty"` + Type *MetricType `protobuf:"varint,3,opt,name=type,enum=io.prometheus.client.MetricType" json:"type,omitempty"` + Metric []*Metric `protobuf:"bytes,4,rep,name=metric" json:"metric,omitempty"` + XXX_unrecognized []byte `json:"-"` +} + +func (m *MetricFamily) Reset() { *m = MetricFamily{} } +func (m *MetricFamily) String() string { return proto.CompactTextString(m) } +func (*MetricFamily) ProtoMessage() {} + +func (m *MetricFamily) GetName() string { + if m != nil && m.Name != nil { + return *m.Name + } + return "" +} + +func (m *MetricFamily) GetHelp() string { + if m != nil && m.Help != nil { + return *m.Help + } + return "" +} + +func (m *MetricFamily) GetType() MetricType { + if m != nil && m.Type != nil { + return *m.Type + } + return MetricType_COUNTER +} + +func (m *MetricFamily) GetMetric() []*Metric { + if m != nil { + return m.Metric + } + return nil +} + +func init() { + proto.RegisterEnum("io.prometheus.client.MetricType", MetricType_name, MetricType_value) +} diff --git a/vendor/github.com/prometheus/client_model/ruby/LICENSE b/vendor/github.com/prometheus/client_model/ruby/LICENSE new file mode 100644 index 00000000..11069edd --- /dev/null +++ b/vendor/github.com/prometheus/client_model/ruby/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/vendor/github.com/prometheus/common/LICENSE b/vendor/github.com/prometheus/common/LICENSE new file mode 100644 index 00000000..261eeb9e --- /dev/null +++ b/vendor/github.com/prometheus/common/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/prometheus/common/NOTICE b/vendor/github.com/prometheus/common/NOTICE new file mode 100644 index 00000000..636a2c1a --- /dev/null +++ b/vendor/github.com/prometheus/common/NOTICE @@ -0,0 +1,5 @@ +Common libraries shared by Prometheus Go components. +Copyright 2015 The Prometheus Authors + +This product includes software developed at +SoundCloud Ltd. (http://soundcloud.com/). diff --git a/vendor/github.com/prometheus/common/expfmt/decode.go b/vendor/github.com/prometheus/common/expfmt/decode.go new file mode 100644 index 00000000..c092723e --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/decode.go @@ -0,0 +1,429 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package expfmt + +import ( + "fmt" + "io" + "math" + "mime" + "net/http" + + dto "github.com/prometheus/client_model/go" + + "github.com/matttproud/golang_protobuf_extensions/pbutil" + "github.com/prometheus/common/model" +) + +// Decoder types decode an input stream into metric families. +type Decoder interface { + Decode(*dto.MetricFamily) error +} + +// DecodeOptions contains options used by the Decoder and in sample extraction. +type DecodeOptions struct { + // Timestamp is added to each value from the stream that has no explicit timestamp set. + Timestamp model.Time +} + +// ResponseFormat extracts the correct format from a HTTP response header. +// If no matching format can be found FormatUnknown is returned. +func ResponseFormat(h http.Header) Format { + ct := h.Get(hdrContentType) + + mediatype, params, err := mime.ParseMediaType(ct) + if err != nil { + return FmtUnknown + } + + const textType = "text/plain" + + switch mediatype { + case ProtoType: + if p, ok := params["proto"]; ok && p != ProtoProtocol { + return FmtUnknown + } + if e, ok := params["encoding"]; ok && e != "delimited" { + return FmtUnknown + } + return FmtProtoDelim + + case textType: + if v, ok := params["version"]; ok && v != TextVersion { + return FmtUnknown + } + return FmtText + } + + return FmtUnknown +} + +// NewDecoder returns a new decoder based on the given input format. +// If the input format does not imply otherwise, a text format decoder is returned. +func NewDecoder(r io.Reader, format Format) Decoder { + switch format { + case FmtProtoDelim: + return &protoDecoder{r: r} + } + return &textDecoder{r: r} +} + +// protoDecoder implements the Decoder interface for protocol buffers. +type protoDecoder struct { + r io.Reader +} + +// Decode implements the Decoder interface. 
+func (d *protoDecoder) Decode(v *dto.MetricFamily) error { + _, err := pbutil.ReadDelimited(d.r, v) + if err != nil { + return err + } + if !model.IsValidMetricName(model.LabelValue(v.GetName())) { + return fmt.Errorf("invalid metric name %q", v.GetName()) + } + for _, m := range v.GetMetric() { + if m == nil { + continue + } + for _, l := range m.GetLabel() { + if l == nil { + continue + } + if !model.LabelValue(l.GetValue()).IsValid() { + return fmt.Errorf("invalid label value %q", l.GetValue()) + } + if !model.LabelName(l.GetName()).IsValid() { + return fmt.Errorf("invalid label name %q", l.GetName()) + } + } + } + return nil +} + +// textDecoder implements the Decoder interface for the text protocol. +type textDecoder struct { + r io.Reader + p TextParser + fams []*dto.MetricFamily +} + +// Decode implements the Decoder interface. +func (d *textDecoder) Decode(v *dto.MetricFamily) error { + // TODO(fabxc): Wrap this as a line reader to make streaming safer. + if len(d.fams) == 0 { + // No cached metric families, read everything and parse metrics. + fams, err := d.p.TextToMetricFamilies(d.r) + if err != nil { + return err + } + if len(fams) == 0 { + return io.EOF + } + d.fams = make([]*dto.MetricFamily, 0, len(fams)) + for _, f := range fams { + d.fams = append(d.fams, f) + } + } + + *v = *d.fams[0] + d.fams = d.fams[1:] + + return nil +} + +// SampleDecoder wraps a Decoder to extract samples from the metric families +// decoded by the wrapped Decoder. +type SampleDecoder struct { + Dec Decoder + Opts *DecodeOptions + + f dto.MetricFamily +} + +// Decode calls the Decode method of the wrapped Decoder and then extracts the +// samples from the decoded MetricFamily into the provided model.Vector. +func (sd *SampleDecoder) Decode(s *model.Vector) error { + err := sd.Dec.Decode(&sd.f) + if err != nil { + return err + } + *s, err = extractSamples(&sd.f, sd.Opts) + return err +} + +// ExtractSamples builds a slice of samples from the provided metric +// families. If an error occurrs during sample extraction, it continues to +// extract from the remaining metric families. The returned error is the last +// error that has occurred. +func ExtractSamples(o *DecodeOptions, fams ...*dto.MetricFamily) (model.Vector, error) { + var ( + all model.Vector + lastErr error + ) + for _, f := range fams { + some, err := extractSamples(f, o) + if err != nil { + lastErr = err + continue + } + all = append(all, some...) 
+ } + return all, lastErr +} + +func extractSamples(f *dto.MetricFamily, o *DecodeOptions) (model.Vector, error) { + switch f.GetType() { + case dto.MetricType_COUNTER: + return extractCounter(o, f), nil + case dto.MetricType_GAUGE: + return extractGauge(o, f), nil + case dto.MetricType_SUMMARY: + return extractSummary(o, f), nil + case dto.MetricType_UNTYPED: + return extractUntyped(o, f), nil + case dto.MetricType_HISTOGRAM: + return extractHistogram(o, f), nil + } + return nil, fmt.Errorf("expfmt.extractSamples: unknown metric family type %v", f.GetType()) +} + +func extractCounter(o *DecodeOptions, f *dto.MetricFamily) model.Vector { + samples := make(model.Vector, 0, len(f.Metric)) + + for _, m := range f.Metric { + if m.Counter == nil { + continue + } + + lset := make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName()) + + smpl := &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Counter.GetValue()), + } + + if m.TimestampMs != nil { + smpl.Timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000) + } else { + smpl.Timestamp = o.Timestamp + } + + samples = append(samples, smpl) + } + + return samples +} + +func extractGauge(o *DecodeOptions, f *dto.MetricFamily) model.Vector { + samples := make(model.Vector, 0, len(f.Metric)) + + for _, m := range f.Metric { + if m.Gauge == nil { + continue + } + + lset := make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName()) + + smpl := &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Gauge.GetValue()), + } + + if m.TimestampMs != nil { + smpl.Timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000) + } else { + smpl.Timestamp = o.Timestamp + } + + samples = append(samples, smpl) + } + + return samples +} + +func extractUntyped(o *DecodeOptions, f *dto.MetricFamily) model.Vector { + samples := make(model.Vector, 0, len(f.Metric)) + + for _, m := range f.Metric { + if m.Untyped == nil { + continue + } + + lset := make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName()) + + smpl := &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Untyped.GetValue()), + } + + if m.TimestampMs != nil { + smpl.Timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000) + } else { + smpl.Timestamp = o.Timestamp + } + + samples = append(samples, smpl) + } + + return samples +} + +func extractSummary(o *DecodeOptions, f *dto.MetricFamily) model.Vector { + samples := make(model.Vector, 0, len(f.Metric)) + + for _, m := range f.Metric { + if m.Summary == nil { + continue + } + + timestamp := o.Timestamp + if m.TimestampMs != nil { + timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000) + } + + for _, q := range m.Summary.Quantile { + lset := make(model.LabelSet, len(m.Label)+2) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + // BUG(matt): Update other names to "quantile". 
+ lset[model.LabelName(model.QuantileLabel)] = model.LabelValue(fmt.Sprint(q.GetQuantile())) + lset[model.MetricNameLabel] = model.LabelValue(f.GetName()) + + samples = append(samples, &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(q.GetValue()), + Timestamp: timestamp, + }) + } + + lset := make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_sum") + + samples = append(samples, &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Summary.GetSampleSum()), + Timestamp: timestamp, + }) + + lset = make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_count") + + samples = append(samples, &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Summary.GetSampleCount()), + Timestamp: timestamp, + }) + } + + return samples +} + +func extractHistogram(o *DecodeOptions, f *dto.MetricFamily) model.Vector { + samples := make(model.Vector, 0, len(f.Metric)) + + for _, m := range f.Metric { + if m.Histogram == nil { + continue + } + + timestamp := o.Timestamp + if m.TimestampMs != nil { + timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000) + } + + infSeen := false + + for _, q := range m.Histogram.Bucket { + lset := make(model.LabelSet, len(m.Label)+2) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.LabelName(model.BucketLabel)] = model.LabelValue(fmt.Sprint(q.GetUpperBound())) + lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_bucket") + + if math.IsInf(q.GetUpperBound(), +1) { + infSeen = true + } + + samples = append(samples, &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(q.GetCumulativeCount()), + Timestamp: timestamp, + }) + } + + lset := make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_sum") + + samples = append(samples, &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Histogram.GetSampleSum()), + Timestamp: timestamp, + }) + + lset = make(model.LabelSet, len(m.Label)+1) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_count") + + count := &model.Sample{ + Metric: model.Metric(lset), + Value: model.SampleValue(m.Histogram.GetSampleCount()), + Timestamp: timestamp, + } + samples = append(samples, count) + + if !infSeen { + // Append an infinity bucket sample. 
+ lset := make(model.LabelSet, len(m.Label)+2) + for _, p := range m.Label { + lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue()) + } + lset[model.LabelName(model.BucketLabel)] = model.LabelValue("+Inf") + lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_bucket") + + samples = append(samples, &model.Sample{ + Metric: model.Metric(lset), + Value: count.Value, + Timestamp: timestamp, + }) + } + } + + return samples +} diff --git a/vendor/github.com/prometheus/common/expfmt/encode.go b/vendor/github.com/prometheus/common/expfmt/encode.go new file mode 100644 index 00000000..11839ed6 --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/encode.go @@ -0,0 +1,88 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package expfmt + +import ( + "fmt" + "io" + "net/http" + + "github.com/golang/protobuf/proto" + "github.com/matttproud/golang_protobuf_extensions/pbutil" + "github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg" + + dto "github.com/prometheus/client_model/go" +) + +// Encoder types encode metric families into an underlying wire protocol. +type Encoder interface { + Encode(*dto.MetricFamily) error +} + +type encoder func(*dto.MetricFamily) error + +func (e encoder) Encode(v *dto.MetricFamily) error { + return e(v) +} + +// Negotiate returns the Content-Type based on the given Accept header. +// If no appropriate accepted type is found, FmtText is returned. +func Negotiate(h http.Header) Format { + for _, ac := range goautoneg.ParseAccept(h.Get(hdrAccept)) { + // Check for protocol buffer + if ac.Type+"/"+ac.SubType == ProtoType && ac.Params["proto"] == ProtoProtocol { + switch ac.Params["encoding"] { + case "delimited": + return FmtProtoDelim + case "text": + return FmtProtoText + case "compact-text": + return FmtProtoCompact + } + } + // Check for text format. + ver := ac.Params["version"] + if ac.Type == "text" && ac.SubType == "plain" && (ver == TextVersion || ver == "") { + return FmtText + } + } + return FmtText +} + +// NewEncoder returns a new encoder based on content type negotiation. 
+func NewEncoder(w io.Writer, format Format) Encoder { + switch format { + case FmtProtoDelim: + return encoder(func(v *dto.MetricFamily) error { + _, err := pbutil.WriteDelimited(w, v) + return err + }) + case FmtProtoCompact: + return encoder(func(v *dto.MetricFamily) error { + _, err := fmt.Fprintln(w, v.String()) + return err + }) + case FmtProtoText: + return encoder(func(v *dto.MetricFamily) error { + _, err := fmt.Fprintln(w, proto.MarshalTextString(v)) + return err + }) + case FmtText: + return encoder(func(v *dto.MetricFamily) error { + _, err := MetricFamilyToText(w, v) + return err + }) + } + panic("expfmt.NewEncoder: unknown format") +} diff --git a/vendor/github.com/prometheus/common/expfmt/expfmt.go b/vendor/github.com/prometheus/common/expfmt/expfmt.go new file mode 100644 index 00000000..c71bcb98 --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/expfmt.go @@ -0,0 +1,38 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package expfmt contains tools for reading and writing Prometheus metrics. +package expfmt + +// Format specifies the HTTP content type of the different wire protocols. +type Format string + +// Constants to assemble the Content-Type values for the different wire protocols. +const ( + TextVersion = "0.0.4" + ProtoType = `application/vnd.google.protobuf` + ProtoProtocol = `io.prometheus.client.MetricFamily` + ProtoFmt = ProtoType + "; proto=" + ProtoProtocol + ";" + + // The Content-Type values for the different wire protocols. + FmtUnknown Format = `` + FmtText Format = `text/plain; version=` + TextVersion + `; charset=utf-8` + FmtProtoDelim Format = ProtoFmt + ` encoding=delimited` + FmtProtoText Format = ProtoFmt + ` encoding=text` + FmtProtoCompact Format = ProtoFmt + ` encoding=compact-text` +) + +const ( + hdrContentType = "Content-Type" + hdrAccept = "Accept" +) diff --git a/vendor/github.com/prometheus/common/expfmt/fuzz.go b/vendor/github.com/prometheus/common/expfmt/fuzz.go new file mode 100644 index 00000000..dc2eedee --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/fuzz.go @@ -0,0 +1,36 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +// Build only when actually fuzzing +// +build gofuzz + +package expfmt + +import "bytes" + +// Fuzz text metric parser with with github.com/dvyukov/go-fuzz: +// +// go-fuzz-build github.com/prometheus/common/expfmt +// go-fuzz -bin expfmt-fuzz.zip -workdir fuzz +// +// Further input samples should go in the folder fuzz/corpus. +func Fuzz(in []byte) int { + parser := TextParser{} + _, err := parser.TextToMetricFamilies(bytes.NewReader(in)) + + if err != nil { + return 0 + } + + return 1 +} diff --git a/vendor/github.com/prometheus/common/expfmt/text_create.go b/vendor/github.com/prometheus/common/expfmt/text_create.go new file mode 100644 index 00000000..f11321cd --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/text_create.go @@ -0,0 +1,303 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package expfmt + +import ( + "fmt" + "io" + "math" + "strings" + + dto "github.com/prometheus/client_model/go" + "github.com/prometheus/common/model" +) + +// MetricFamilyToText converts a MetricFamily proto message into text format and +// writes the resulting lines to 'out'. It returns the number of bytes written +// and any error encountered. The output will have the same order as the input, +// no further sorting is performed. Furthermore, this function assumes the input +// is already sanitized and does not perform any sanity checks. If the input +// contains duplicate metrics or invalid metric or label names, the conversion +// will result in invalid text format output. +// +// This method fulfills the type 'prometheus.encoder'. +func MetricFamilyToText(out io.Writer, in *dto.MetricFamily) (int, error) { + var written int + + // Fail-fast checks. + if len(in.Metric) == 0 { + return written, fmt.Errorf("MetricFamily has no metrics: %s", in) + } + name := in.GetName() + if name == "" { + return written, fmt.Errorf("MetricFamily has no name: %s", in) + } + + // Comments, first HELP, then TYPE. + if in.Help != nil { + n, err := fmt.Fprintf( + out, "# HELP %s %s\n", + name, escapeString(*in.Help, false), + ) + written += n + if err != nil { + return written, err + } + } + metricType := in.GetType() + n, err := fmt.Fprintf( + out, "# TYPE %s %s\n", + name, strings.ToLower(metricType.String()), + ) + written += n + if err != nil { + return written, err + } + + // Finally the samples, one line for each. 
+ for _, metric := range in.Metric { + switch metricType { + case dto.MetricType_COUNTER: + if metric.Counter == nil { + return written, fmt.Errorf( + "expected counter in metric %s %s", name, metric, + ) + } + n, err = writeSample( + name, metric, "", "", + metric.Counter.GetValue(), + out, + ) + case dto.MetricType_GAUGE: + if metric.Gauge == nil { + return written, fmt.Errorf( + "expected gauge in metric %s %s", name, metric, + ) + } + n, err = writeSample( + name, metric, "", "", + metric.Gauge.GetValue(), + out, + ) + case dto.MetricType_UNTYPED: + if metric.Untyped == nil { + return written, fmt.Errorf( + "expected untyped in metric %s %s", name, metric, + ) + } + n, err = writeSample( + name, metric, "", "", + metric.Untyped.GetValue(), + out, + ) + case dto.MetricType_SUMMARY: + if metric.Summary == nil { + return written, fmt.Errorf( + "expected summary in metric %s %s", name, metric, + ) + } + for _, q := range metric.Summary.Quantile { + n, err = writeSample( + name, metric, + model.QuantileLabel, fmt.Sprint(q.GetQuantile()), + q.GetValue(), + out, + ) + written += n + if err != nil { + return written, err + } + } + n, err = writeSample( + name+"_sum", metric, "", "", + metric.Summary.GetSampleSum(), + out, + ) + if err != nil { + return written, err + } + written += n + n, err = writeSample( + name+"_count", metric, "", "", + float64(metric.Summary.GetSampleCount()), + out, + ) + case dto.MetricType_HISTOGRAM: + if metric.Histogram == nil { + return written, fmt.Errorf( + "expected histogram in metric %s %s", name, metric, + ) + } + infSeen := false + for _, q := range metric.Histogram.Bucket { + n, err = writeSample( + name+"_bucket", metric, + model.BucketLabel, fmt.Sprint(q.GetUpperBound()), + float64(q.GetCumulativeCount()), + out, + ) + written += n + if err != nil { + return written, err + } + if math.IsInf(q.GetUpperBound(), +1) { + infSeen = true + } + } + if !infSeen { + n, err = writeSample( + name+"_bucket", metric, + model.BucketLabel, "+Inf", + float64(metric.Histogram.GetSampleCount()), + out, + ) + if err != nil { + return written, err + } + written += n + } + n, err = writeSample( + name+"_sum", metric, "", "", + metric.Histogram.GetSampleSum(), + out, + ) + if err != nil { + return written, err + } + written += n + n, err = writeSample( + name+"_count", metric, "", "", + float64(metric.Histogram.GetSampleCount()), + out, + ) + default: + return written, fmt.Errorf( + "unexpected type in metric %s %s", name, metric, + ) + } + written += n + if err != nil { + return written, err + } + } + return written, nil +} + +// writeSample writes a single sample in text format to out, given the metric +// name, the metric proto message itself, optionally an additional label name +// and value (use empty strings if not required), and the value. The function +// returns the number of bytes written and any error encountered. 
+func writeSample( + name string, + metric *dto.Metric, + additionalLabelName, additionalLabelValue string, + value float64, + out io.Writer, +) (int, error) { + var written int + n, err := fmt.Fprint(out, name) + written += n + if err != nil { + return written, err + } + n, err = labelPairsToText( + metric.Label, + additionalLabelName, additionalLabelValue, + out, + ) + written += n + if err != nil { + return written, err + } + n, err = fmt.Fprintf(out, " %v", value) + written += n + if err != nil { + return written, err + } + if metric.TimestampMs != nil { + n, err = fmt.Fprintf(out, " %v", *metric.TimestampMs) + written += n + if err != nil { + return written, err + } + } + n, err = out.Write([]byte{'\n'}) + written += n + if err != nil { + return written, err + } + return written, nil +} + +// labelPairsToText converts a slice of LabelPair proto messages plus the +// explicitly given additional label pair into text formatted as required by the +// text format and writes it to 'out'. An empty slice in combination with an +// empty string 'additionalLabelName' results in nothing being +// written. Otherwise, the label pairs are written, escaped as required by the +// text format, and enclosed in '{...}'. The function returns the number of +// bytes written and any error encountered. +func labelPairsToText( + in []*dto.LabelPair, + additionalLabelName, additionalLabelValue string, + out io.Writer, +) (int, error) { + if len(in) == 0 && additionalLabelName == "" { + return 0, nil + } + var written int + separator := '{' + for _, lp := range in { + n, err := fmt.Fprintf( + out, `%c%s="%s"`, + separator, lp.GetName(), escapeString(lp.GetValue(), true), + ) + written += n + if err != nil { + return written, err + } + separator = ',' + } + if additionalLabelName != "" { + n, err := fmt.Fprintf( + out, `%c%s="%s"`, + separator, additionalLabelName, + escapeString(additionalLabelValue, true), + ) + written += n + if err != nil { + return written, err + } + } + n, err := out.Write([]byte{'}'}) + written += n + if err != nil { + return written, err + } + return written, nil +} + +var ( + escape = strings.NewReplacer("\\", `\\`, "\n", `\n`) + escapeWithDoubleQuote = strings.NewReplacer("\\", `\\`, "\n", `\n`, "\"", `\"`) +) + +// escapeString replaces '\' by '\\', new line character by '\n', and - if +// includeDoubleQuote is true - '"' by '\"'. +func escapeString(v string, includeDoubleQuote bool) string { + if includeDoubleQuote { + return escapeWithDoubleQuote.Replace(v) + } + + return escape.Replace(v) +} diff --git a/vendor/github.com/prometheus/common/expfmt/text_parse.go b/vendor/github.com/prometheus/common/expfmt/text_parse.go new file mode 100644 index 00000000..b86290af --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/text_parse.go @@ -0,0 +1,757 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package expfmt + +import ( + "bufio" + "bytes" + "fmt" + "io" + "math" + "strconv" + "strings" + + dto "github.com/prometheus/client_model/go" + + "github.com/golang/protobuf/proto" + "github.com/prometheus/common/model" +) + +// A stateFn is a function that represents a state in a state machine. By +// executing it, the state is progressed to the next state. The stateFn returns +// another stateFn, which represents the new state. The end state is represented +// by nil. +type stateFn func() stateFn + +// ParseError signals errors while parsing the simple and flat text-based +// exchange format. +type ParseError struct { + Line int + Msg string +} + +// Error implements the error interface. +func (e ParseError) Error() string { + return fmt.Sprintf("text format parsing error in line %d: %s", e.Line, e.Msg) +} + +// TextParser is used to parse the simple and flat text-based exchange format. Its +// zero value is ready to use. +type TextParser struct { + metricFamiliesByName map[string]*dto.MetricFamily + buf *bufio.Reader // Where the parsed input is read through. + err error // Most recent error. + lineCount int // Tracks the line count for error messages. + currentByte byte // The most recent byte read. + currentToken bytes.Buffer // Re-used each time a token has to be gathered from multiple bytes. + currentMF *dto.MetricFamily + currentMetric *dto.Metric + currentLabelPair *dto.LabelPair + + // The remaining member variables are only used for summaries/histograms. + currentLabels map[string]string // All labels including '__name__' but excluding 'quantile'/'le' + // Summary specific. + summaries map[uint64]*dto.Metric // Key is created with LabelsToSignature. + currentQuantile float64 + // Histogram specific. + histograms map[uint64]*dto.Metric // Key is created with LabelsToSignature. + currentBucket float64 + // These tell us if the currently processed line ends on '_count' or + // '_sum' respectively and belong to a summary/histogram, representing the sample + // count and sum of that summary/histogram. + currentIsSummaryCount, currentIsSummarySum bool + currentIsHistogramCount, currentIsHistogramSum bool +} + +// TextToMetricFamilies reads 'in' as the simple and flat text-based exchange +// format and creates MetricFamily proto messages. It returns the MetricFamily +// proto messages in a map where the metric names are the keys, along with any +// error encountered. +// +// If the input contains duplicate metrics (i.e. lines with the same metric name +// and exactly the same label set), the resulting MetricFamily will contain +// duplicate Metric proto messages. Similar is true for duplicate label +// names. Checks for duplicates have to be performed separately, if required. +// Also note that neither the metrics within each MetricFamily are sorted nor +// the label pairs within each Metric. Sorting is not required for the most +// frequent use of this method, which is sample ingestion in the Prometheus +// server. However, for presentation purposes, you might want to sort the +// metrics, and in some cases, you must sort the labels, e.g. for consumption by +// the metric family injection hook of the Prometheus registry. +// +// Summaries and histograms are rather special beasts. You would probably not +// use them in the simple text format anyway. This method can deal with +// summaries and histograms if they are presented in exactly the way the +// text.Create function creates them. +// +// This method must not be called concurrently. 
If you want to parse different +// input concurrently, instantiate a separate Parser for each goroutine. +func (p *TextParser) TextToMetricFamilies(in io.Reader) (map[string]*dto.MetricFamily, error) { + p.reset(in) + for nextState := p.startOfLine; nextState != nil; nextState = nextState() { + // Magic happens here... + } + // Get rid of empty metric families. + for k, mf := range p.metricFamiliesByName { + if len(mf.GetMetric()) == 0 { + delete(p.metricFamiliesByName, k) + } + } + // If p.err is io.EOF now, we have run into a premature end of the input + // stream. Turn this error into something nicer and more + // meaningful. (io.EOF is often used as a signal for the legitimate end + // of an input stream.) + if p.err == io.EOF { + p.parseError("unexpected end of input stream") + } + return p.metricFamiliesByName, p.err +} + +func (p *TextParser) reset(in io.Reader) { + p.metricFamiliesByName = map[string]*dto.MetricFamily{} + if p.buf == nil { + p.buf = bufio.NewReader(in) + } else { + p.buf.Reset(in) + } + p.err = nil + p.lineCount = 0 + if p.summaries == nil || len(p.summaries) > 0 { + p.summaries = map[uint64]*dto.Metric{} + } + if p.histograms == nil || len(p.histograms) > 0 { + p.histograms = map[uint64]*dto.Metric{} + } + p.currentQuantile = math.NaN() + p.currentBucket = math.NaN() +} + +// startOfLine represents the state where the next byte read from p.buf is the +// start of a line (or whitespace leading up to it). +func (p *TextParser) startOfLine() stateFn { + p.lineCount++ + if p.skipBlankTab(); p.err != nil { + // End of input reached. This is the only case where + // that is not an error but a signal that we are done. + p.err = nil + return nil + } + switch p.currentByte { + case '#': + return p.startComment + case '\n': + return p.startOfLine // Empty line, start the next one. + } + return p.readingMetricName +} + +// startComment represents the state where the next byte read from p.buf is the +// start of a comment (or whitespace leading up to it). +func (p *TextParser) startComment() stateFn { + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + if p.currentByte == '\n' { + return p.startOfLine + } + if p.readTokenUntilWhitespace(); p.err != nil { + return nil // Unexpected end of input. + } + // If we have hit the end of line already, there is nothing left + // to do. This is not considered a syntax error. + if p.currentByte == '\n' { + return p.startOfLine + } + keyword := p.currentToken.String() + if keyword != "HELP" && keyword != "TYPE" { + // Generic comment, ignore by fast forwarding to end of line. + for p.currentByte != '\n' { + if p.currentByte, p.err = p.buf.ReadByte(); p.err != nil { + return nil // Unexpected end of input. + } + } + return p.startOfLine + } + // There is something. Next has to be a metric name. + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + if p.readTokenAsMetricName(); p.err != nil { + return nil // Unexpected end of input. + } + if p.currentByte == '\n' { + // At the end of the line already. + // Again, this is not considered a syntax error. + return p.startOfLine + } + if !isBlankOrTab(p.currentByte) { + p.parseError("invalid metric name in comment") + return nil + } + p.setOrCreateCurrentMF() + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + if p.currentByte == '\n' { + // At the end of the line already. + // Again, this is not considered a syntax error. 
+ return p.startOfLine + } + switch keyword { + case "HELP": + return p.readingHelp + case "TYPE": + return p.readingType + } + panic(fmt.Sprintf("code error: unexpected keyword %q", keyword)) +} + +// readingMetricName represents the state where the last byte read (now in +// p.currentByte) is the first byte of a metric name. +func (p *TextParser) readingMetricName() stateFn { + if p.readTokenAsMetricName(); p.err != nil { + return nil + } + if p.currentToken.Len() == 0 { + p.parseError("invalid metric name") + return nil + } + p.setOrCreateCurrentMF() + // Now is the time to fix the type if it hasn't happened yet. + if p.currentMF.Type == nil { + p.currentMF.Type = dto.MetricType_UNTYPED.Enum() + } + p.currentMetric = &dto.Metric{} + // Do not append the newly created currentMetric to + // currentMF.Metric right now. First wait if this is a summary, + // and the metric exists already, which we can only know after + // having read all the labels. + if p.skipBlankTabIfCurrentBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + return p.readingLabels +} + +// readingLabels represents the state where the last byte read (now in +// p.currentByte) is either the first byte of the label set (i.e. a '{'), or the +// first byte of the value (otherwise). +func (p *TextParser) readingLabels() stateFn { + // Summaries/histograms are special. We have to reset the + // currentLabels map, currentQuantile and currentBucket before starting to + // read labels. + if p.currentMF.GetType() == dto.MetricType_SUMMARY || p.currentMF.GetType() == dto.MetricType_HISTOGRAM { + p.currentLabels = map[string]string{} + p.currentLabels[string(model.MetricNameLabel)] = p.currentMF.GetName() + p.currentQuantile = math.NaN() + p.currentBucket = math.NaN() + } + if p.currentByte != '{' { + return p.readingValue + } + return p.startLabelName +} + +// startLabelName represents the state where the next byte read from p.buf is +// the start of a label name (or whitespace leading up to it). +func (p *TextParser) startLabelName() stateFn { + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + if p.currentByte == '}' { + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + return p.readingValue + } + if p.readTokenAsLabelName(); p.err != nil { + return nil // Unexpected end of input. + } + if p.currentToken.Len() == 0 { + p.parseError(fmt.Sprintf("invalid label name for metric %q", p.currentMF.GetName())) + return nil + } + p.currentLabelPair = &dto.LabelPair{Name: proto.String(p.currentToken.String())} + if p.currentLabelPair.GetName() == string(model.MetricNameLabel) { + p.parseError(fmt.Sprintf("label name %q is reserved", model.MetricNameLabel)) + return nil + } + // Special summary/histogram treatment. Don't add 'quantile' and 'le' + // labels to 'real' labels. + if !(p.currentMF.GetType() == dto.MetricType_SUMMARY && p.currentLabelPair.GetName() == model.QuantileLabel) && + !(p.currentMF.GetType() == dto.MetricType_HISTOGRAM && p.currentLabelPair.GetName() == model.BucketLabel) { + p.currentMetric.Label = append(p.currentMetric.Label, p.currentLabelPair) + } + if p.skipBlankTabIfCurrentBlankTab(); p.err != nil { + return nil // Unexpected end of input. 
+ } + if p.currentByte != '=' { + p.parseError(fmt.Sprintf("expected '=' after label name, found %q", p.currentByte)) + return nil + } + return p.startLabelValue +} + +// startLabelValue represents the state where the next byte read from p.buf is +// the start of a (quoted) label value (or whitespace leading up to it). +func (p *TextParser) startLabelValue() stateFn { + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + if p.currentByte != '"' { + p.parseError(fmt.Sprintf("expected '\"' at start of label value, found %q", p.currentByte)) + return nil + } + if p.readTokenAsLabelValue(); p.err != nil { + return nil + } + if !model.LabelValue(p.currentToken.String()).IsValid() { + p.parseError(fmt.Sprintf("invalid label value %q", p.currentToken.String())) + return nil + } + p.currentLabelPair.Value = proto.String(p.currentToken.String()) + // Special treatment of summaries: + // - Quantile labels are special, will result in dto.Quantile later. + // - Other labels have to be added to currentLabels for signature calculation. + if p.currentMF.GetType() == dto.MetricType_SUMMARY { + if p.currentLabelPair.GetName() == model.QuantileLabel { + if p.currentQuantile, p.err = strconv.ParseFloat(p.currentLabelPair.GetValue(), 64); p.err != nil { + // Create a more helpful error message. + p.parseError(fmt.Sprintf("expected float as value for 'quantile' label, got %q", p.currentLabelPair.GetValue())) + return nil + } + } else { + p.currentLabels[p.currentLabelPair.GetName()] = p.currentLabelPair.GetValue() + } + } + // Similar special treatment of histograms. + if p.currentMF.GetType() == dto.MetricType_HISTOGRAM { + if p.currentLabelPair.GetName() == model.BucketLabel { + if p.currentBucket, p.err = strconv.ParseFloat(p.currentLabelPair.GetValue(), 64); p.err != nil { + // Create a more helpful error message. + p.parseError(fmt.Sprintf("expected float as value for 'le' label, got %q", p.currentLabelPair.GetValue())) + return nil + } + } else { + p.currentLabels[p.currentLabelPair.GetName()] = p.currentLabelPair.GetValue() + } + } + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + switch p.currentByte { + case ',': + return p.startLabelName + + case '}': + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + return p.readingValue + default: + p.parseError(fmt.Sprintf("unexpected end of label value %q", p.currentLabelPair.Value)) + return nil + } +} + +// readingValue represents the state where the last byte read (now in +// p.currentByte) is the first byte of the sample value (i.e. a float). +func (p *TextParser) readingValue() stateFn { + // When we are here, we have read all the labels, so for the + // special case of a summary/histogram, we can finally find out + // if the metric already exists. 
+ if p.currentMF.GetType() == dto.MetricType_SUMMARY { + signature := model.LabelsToSignature(p.currentLabels) + if summary := p.summaries[signature]; summary != nil { + p.currentMetric = summary + } else { + p.summaries[signature] = p.currentMetric + p.currentMF.Metric = append(p.currentMF.Metric, p.currentMetric) + } + } else if p.currentMF.GetType() == dto.MetricType_HISTOGRAM { + signature := model.LabelsToSignature(p.currentLabels) + if histogram := p.histograms[signature]; histogram != nil { + p.currentMetric = histogram + } else { + p.histograms[signature] = p.currentMetric + p.currentMF.Metric = append(p.currentMF.Metric, p.currentMetric) + } + } else { + p.currentMF.Metric = append(p.currentMF.Metric, p.currentMetric) + } + if p.readTokenUntilWhitespace(); p.err != nil { + return nil // Unexpected end of input. + } + value, err := strconv.ParseFloat(p.currentToken.String(), 64) + if err != nil { + // Create a more helpful error message. + p.parseError(fmt.Sprintf("expected float as value, got %q", p.currentToken.String())) + return nil + } + switch p.currentMF.GetType() { + case dto.MetricType_COUNTER: + p.currentMetric.Counter = &dto.Counter{Value: proto.Float64(value)} + case dto.MetricType_GAUGE: + p.currentMetric.Gauge = &dto.Gauge{Value: proto.Float64(value)} + case dto.MetricType_UNTYPED: + p.currentMetric.Untyped = &dto.Untyped{Value: proto.Float64(value)} + case dto.MetricType_SUMMARY: + // *sigh* + if p.currentMetric.Summary == nil { + p.currentMetric.Summary = &dto.Summary{} + } + switch { + case p.currentIsSummaryCount: + p.currentMetric.Summary.SampleCount = proto.Uint64(uint64(value)) + case p.currentIsSummarySum: + p.currentMetric.Summary.SampleSum = proto.Float64(value) + case !math.IsNaN(p.currentQuantile): + p.currentMetric.Summary.Quantile = append( + p.currentMetric.Summary.Quantile, + &dto.Quantile{ + Quantile: proto.Float64(p.currentQuantile), + Value: proto.Float64(value), + }, + ) + } + case dto.MetricType_HISTOGRAM: + // *sigh* + if p.currentMetric.Histogram == nil { + p.currentMetric.Histogram = &dto.Histogram{} + } + switch { + case p.currentIsHistogramCount: + p.currentMetric.Histogram.SampleCount = proto.Uint64(uint64(value)) + case p.currentIsHistogramSum: + p.currentMetric.Histogram.SampleSum = proto.Float64(value) + case !math.IsNaN(p.currentBucket): + p.currentMetric.Histogram.Bucket = append( + p.currentMetric.Histogram.Bucket, + &dto.Bucket{ + UpperBound: proto.Float64(p.currentBucket), + CumulativeCount: proto.Uint64(uint64(value)), + }, + ) + } + default: + p.err = fmt.Errorf("unexpected type for metric name %q", p.currentMF.GetName()) + } + if p.currentByte == '\n' { + return p.startOfLine + } + return p.startTimestamp +} + +// startTimestamp represents the state where the next byte read from p.buf is +// the start of the timestamp (or whitespace leading up to it). +func (p *TextParser) startTimestamp() stateFn { + if p.skipBlankTab(); p.err != nil { + return nil // Unexpected end of input. + } + if p.readTokenUntilWhitespace(); p.err != nil { + return nil // Unexpected end of input. + } + timestamp, err := strconv.ParseInt(p.currentToken.String(), 10, 64) + if err != nil { + // Create a more helpful error message. + p.parseError(fmt.Sprintf("expected integer as timestamp, got %q", p.currentToken.String())) + return nil + } + p.currentMetric.TimestampMs = proto.Int64(timestamp) + if p.readTokenUntilNewline(false); p.err != nil { + return nil // Unexpected end of input. 
+ } + if p.currentToken.Len() > 0 { + p.parseError(fmt.Sprintf("spurious string after timestamp: %q", p.currentToken.String())) + return nil + } + return p.startOfLine +} + +// readingHelp represents the state where the last byte read (now in +// p.currentByte) is the first byte of the docstring after 'HELP'. +func (p *TextParser) readingHelp() stateFn { + if p.currentMF.Help != nil { + p.parseError(fmt.Sprintf("second HELP line for metric name %q", p.currentMF.GetName())) + return nil + } + // Rest of line is the docstring. + if p.readTokenUntilNewline(true); p.err != nil { + return nil // Unexpected end of input. + } + p.currentMF.Help = proto.String(p.currentToken.String()) + return p.startOfLine +} + +// readingType represents the state where the last byte read (now in +// p.currentByte) is the first byte of the type hint after 'HELP'. +func (p *TextParser) readingType() stateFn { + if p.currentMF.Type != nil { + p.parseError(fmt.Sprintf("second TYPE line for metric name %q, or TYPE reported after samples", p.currentMF.GetName())) + return nil + } + // Rest of line is the type. + if p.readTokenUntilNewline(false); p.err != nil { + return nil // Unexpected end of input. + } + metricType, ok := dto.MetricType_value[strings.ToUpper(p.currentToken.String())] + if !ok { + p.parseError(fmt.Sprintf("unknown metric type %q", p.currentToken.String())) + return nil + } + p.currentMF.Type = dto.MetricType(metricType).Enum() + return p.startOfLine +} + +// parseError sets p.err to a ParseError at the current line with the given +// message. +func (p *TextParser) parseError(msg string) { + p.err = ParseError{ + Line: p.lineCount, + Msg: msg, + } +} + +// skipBlankTab reads (and discards) bytes from p.buf until it encounters a byte +// that is neither ' ' nor '\t'. That byte is left in p.currentByte. +func (p *TextParser) skipBlankTab() { + for { + if p.currentByte, p.err = p.buf.ReadByte(); p.err != nil || !isBlankOrTab(p.currentByte) { + return + } + } +} + +// skipBlankTabIfCurrentBlankTab works exactly as skipBlankTab but doesn't do +// anything if p.currentByte is neither ' ' nor '\t'. +func (p *TextParser) skipBlankTabIfCurrentBlankTab() { + if isBlankOrTab(p.currentByte) { + p.skipBlankTab() + } +} + +// readTokenUntilWhitespace copies bytes from p.buf into p.currentToken. The +// first byte considered is the byte already read (now in p.currentByte). The +// first whitespace byte encountered is still copied into p.currentByte, but not +// into p.currentToken. +func (p *TextParser) readTokenUntilWhitespace() { + p.currentToken.Reset() + for p.err == nil && !isBlankOrTab(p.currentByte) && p.currentByte != '\n' { + p.currentToken.WriteByte(p.currentByte) + p.currentByte, p.err = p.buf.ReadByte() + } +} + +// readTokenUntilNewline copies bytes from p.buf into p.currentToken. The first +// byte considered is the byte already read (now in p.currentByte). The first +// newline byte encountered is still copied into p.currentByte, but not into +// p.currentToken. If recognizeEscapeSequence is true, two escape sequences are +// recognized: '\\' translates into '\', and '\n' into a line-feed character. +// All other escape sequences are invalid and cause an error. 
+func (p *TextParser) readTokenUntilNewline(recognizeEscapeSequence bool) { + p.currentToken.Reset() + escaped := false + for p.err == nil { + if recognizeEscapeSequence && escaped { + switch p.currentByte { + case '\\': + p.currentToken.WriteByte(p.currentByte) + case 'n': + p.currentToken.WriteByte('\n') + default: + p.parseError(fmt.Sprintf("invalid escape sequence '\\%c'", p.currentByte)) + return + } + escaped = false + } else { + switch p.currentByte { + case '\n': + return + case '\\': + escaped = true + default: + p.currentToken.WriteByte(p.currentByte) + } + } + p.currentByte, p.err = p.buf.ReadByte() + } +} + +// readTokenAsMetricName copies a metric name from p.buf into p.currentToken. +// The first byte considered is the byte already read (now in p.currentByte). +// The first byte not part of a metric name is still copied into p.currentByte, +// but not into p.currentToken. +func (p *TextParser) readTokenAsMetricName() { + p.currentToken.Reset() + if !isValidMetricNameStart(p.currentByte) { + return + } + for { + p.currentToken.WriteByte(p.currentByte) + p.currentByte, p.err = p.buf.ReadByte() + if p.err != nil || !isValidMetricNameContinuation(p.currentByte) { + return + } + } +} + +// readTokenAsLabelName copies a label name from p.buf into p.currentToken. +// The first byte considered is the byte already read (now in p.currentByte). +// The first byte not part of a label name is still copied into p.currentByte, +// but not into p.currentToken. +func (p *TextParser) readTokenAsLabelName() { + p.currentToken.Reset() + if !isValidLabelNameStart(p.currentByte) { + return + } + for { + p.currentToken.WriteByte(p.currentByte) + p.currentByte, p.err = p.buf.ReadByte() + if p.err != nil || !isValidLabelNameContinuation(p.currentByte) { + return + } + } +} + +// readTokenAsLabelValue copies a label value from p.buf into p.currentToken. +// In contrast to the other 'readTokenAs...' functions, which start with the +// last read byte in p.currentByte, this method ignores p.currentByte and starts +// with reading a new byte from p.buf. The first byte not part of a label value +// is still copied into p.currentByte, but not into p.currentToken. +func (p *TextParser) readTokenAsLabelValue() { + p.currentToken.Reset() + escaped := false + for { + if p.currentByte, p.err = p.buf.ReadByte(); p.err != nil { + return + } + if escaped { + switch p.currentByte { + case '"', '\\': + p.currentToken.WriteByte(p.currentByte) + case 'n': + p.currentToken.WriteByte('\n') + default: + p.parseError(fmt.Sprintf("invalid escape sequence '\\%c'", p.currentByte)) + return + } + escaped = false + continue + } + switch p.currentByte { + case '"': + return + case '\n': + p.parseError(fmt.Sprintf("label value %q contains unescaped new-line", p.currentToken.String())) + return + case '\\': + escaped = true + default: + p.currentToken.WriteByte(p.currentByte) + } + } +} + +func (p *TextParser) setOrCreateCurrentMF() { + p.currentIsSummaryCount = false + p.currentIsSummarySum = false + p.currentIsHistogramCount = false + p.currentIsHistogramSum = false + name := p.currentToken.String() + if p.currentMF = p.metricFamiliesByName[name]; p.currentMF != nil { + return + } + // Try out if this is a _sum or _count for a summary/histogram. 
+ summaryName := summaryMetricName(name) + if p.currentMF = p.metricFamiliesByName[summaryName]; p.currentMF != nil { + if p.currentMF.GetType() == dto.MetricType_SUMMARY { + if isCount(name) { + p.currentIsSummaryCount = true + } + if isSum(name) { + p.currentIsSummarySum = true + } + return + } + } + histogramName := histogramMetricName(name) + if p.currentMF = p.metricFamiliesByName[histogramName]; p.currentMF != nil { + if p.currentMF.GetType() == dto.MetricType_HISTOGRAM { + if isCount(name) { + p.currentIsHistogramCount = true + } + if isSum(name) { + p.currentIsHistogramSum = true + } + return + } + } + p.currentMF = &dto.MetricFamily{Name: proto.String(name)} + p.metricFamiliesByName[name] = p.currentMF +} + +func isValidLabelNameStart(b byte) bool { + return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || b == '_' +} + +func isValidLabelNameContinuation(b byte) bool { + return isValidLabelNameStart(b) || (b >= '0' && b <= '9') +} + +func isValidMetricNameStart(b byte) bool { + return isValidLabelNameStart(b) || b == ':' +} + +func isValidMetricNameContinuation(b byte) bool { + return isValidLabelNameContinuation(b) || b == ':' +} + +func isBlankOrTab(b byte) bool { + return b == ' ' || b == '\t' +} + +func isCount(name string) bool { + return len(name) > 6 && name[len(name)-6:] == "_count" +} + +func isSum(name string) bool { + return len(name) > 4 && name[len(name)-4:] == "_sum" +} + +func isBucket(name string) bool { + return len(name) > 7 && name[len(name)-7:] == "_bucket" +} + +func summaryMetricName(name string) string { + switch { + case isCount(name): + return name[:len(name)-6] + case isSum(name): + return name[:len(name)-4] + default: + return name + } +} + +func histogramMetricName(name string) string { + switch { + case isCount(name): + return name[:len(name)-6] + case isSum(name): + return name[:len(name)-4] + case isBucket(name): + return name[:len(name)-7] + default: + return name + } +} diff --git a/vendor/github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg/autoneg.go b/vendor/github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg/autoneg.go new file mode 100644 index 00000000..648b38cb --- /dev/null +++ b/vendor/github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg/autoneg.go @@ -0,0 +1,162 @@ +/* +HTTP Content-Type Autonegotiation. + +The functions in this package implement the behaviour specified in +http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html + +Copyright (c) 2011, Open Knowledge Foundation Ltd. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + + Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + + Neither the name of the Open Knowledge Foundation Ltd. nor the + names of its contributors may be used to endorse or promote + products derived from this software without specific prior written + permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +*/ +package goautoneg + +import ( + "sort" + "strconv" + "strings" +) + +// Structure to represent a clause in an HTTP Accept Header +type Accept struct { + Type, SubType string + Q float64 + Params map[string]string +} + +// For internal use, so that we can use the sort interface +type accept_slice []Accept + +func (accept accept_slice) Len() int { + slice := []Accept(accept) + return len(slice) +} + +func (accept accept_slice) Less(i, j int) bool { + slice := []Accept(accept) + ai, aj := slice[i], slice[j] + if ai.Q > aj.Q { + return true + } + if ai.Type != "*" && aj.Type == "*" { + return true + } + if ai.SubType != "*" && aj.SubType == "*" { + return true + } + return false +} + +func (accept accept_slice) Swap(i, j int) { + slice := []Accept(accept) + slice[i], slice[j] = slice[j], slice[i] +} + +// Parse an Accept Header string returning a sorted list +// of clauses +func ParseAccept(header string) (accept []Accept) { + parts := strings.Split(header, ",") + accept = make([]Accept, 0, len(parts)) + for _, part := range parts { + part := strings.Trim(part, " ") + + a := Accept{} + a.Params = make(map[string]string) + a.Q = 1.0 + + mrp := strings.Split(part, ";") + + media_range := mrp[0] + sp := strings.Split(media_range, "/") + a.Type = strings.Trim(sp[0], " ") + + switch { + case len(sp) == 1 && a.Type == "*": + a.SubType = "*" + case len(sp) == 2: + a.SubType = strings.Trim(sp[1], " ") + default: + continue + } + + if len(mrp) == 1 { + accept = append(accept, a) + continue + } + + for _, param := range mrp[1:] { + sp := strings.SplitN(param, "=", 2) + if len(sp) != 2 { + continue + } + token := strings.Trim(sp[0], " ") + if token == "q" { + a.Q, _ = strconv.ParseFloat(sp[1], 32) + } else { + a.Params[token] = strings.Trim(sp[1], " ") + } + } + + accept = append(accept, a) + } + + slice := accept_slice(accept) + sort.Sort(slice) + + return +} + +// Negotiate the most appropriate content_type given the accept header +// and a list of alternatives. 
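// Illustrative sketch, not part of the vendored file: typical use of
// ParseAccept and Negotiate. The vendored copy lives under an internal
// directory, so the import below assumes the original upstream package path;
// the header and alternatives are hypothetical.

package main

import (
	"fmt"

	"bitbucket.org/ww/goautoneg"
)

func main() {
	header := "text/html, application/json;q=0.9, */*;q=0.1"

	// Clauses come back sorted by preference: text/html, application/json, */*.
	for _, clause := range goautoneg.ParseAccept(header) {
		fmt.Printf("%s/%s q=%g\n", clause.Type, clause.SubType, clause.Q)
	}

	// text/html matches neither alternative, so the next clause,
	// application/json, is chosen.
	fmt.Println(goautoneg.Negotiate(header, []string{"application/json", "text/plain"}))
}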
+func Negotiate(header string, alternatives []string) (content_type string) { + asp := make([][]string, 0, len(alternatives)) + for _, ctype := range alternatives { + asp = append(asp, strings.SplitN(ctype, "/", 2)) + } + for _, clause := range ParseAccept(header) { + for i, ctsp := range asp { + if clause.Type == ctsp[0] && clause.SubType == ctsp[1] { + content_type = alternatives[i] + return + } + if clause.Type == ctsp[0] && clause.SubType == "*" { + content_type = alternatives[i] + return + } + if clause.Type == "*" && clause.SubType == "*" { + content_type = alternatives[i] + return + } + } + } + return +} diff --git a/vendor/github.com/prometheus/common/model/alert.go b/vendor/github.com/prometheus/common/model/alert.go new file mode 100644 index 00000000..35e739c7 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/alert.go @@ -0,0 +1,136 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "fmt" + "time" +) + +type AlertStatus string + +const ( + AlertFiring AlertStatus = "firing" + AlertResolved AlertStatus = "resolved" +) + +// Alert is a generic representation of an alert in the Prometheus eco-system. +type Alert struct { + // Label value pairs for purpose of aggregation, matching, and disposition + // dispatching. This must minimally include an "alertname" label. + Labels LabelSet `json:"labels"` + + // Extra key/value information which does not define alert identity. + Annotations LabelSet `json:"annotations"` + + // The known time range for this alert. Both ends are optional. + StartsAt time.Time `json:"startsAt,omitempty"` + EndsAt time.Time `json:"endsAt,omitempty"` + GeneratorURL string `json:"generatorURL"` +} + +// Name returns the name of the alert. It is equivalent to the "alertname" label. +func (a *Alert) Name() string { + return string(a.Labels[AlertNameLabel]) +} + +// Fingerprint returns a unique hash for the alert. It is equivalent to +// the fingerprint of the alert's label set. +func (a *Alert) Fingerprint() Fingerprint { + return a.Labels.Fingerprint() +} + +func (a *Alert) String() string { + s := fmt.Sprintf("%s[%s]", a.Name(), a.Fingerprint().String()[:7]) + if a.Resolved() { + return s + "[resolved]" + } + return s + "[active]" +} + +// Resolved returns true iff the activity interval ended in the past. +func (a *Alert) Resolved() bool { + return a.ResolvedAt(time.Now()) +} + +// ResolvedAt returns true off the activity interval ended before +// the given timestamp. +func (a *Alert) ResolvedAt(ts time.Time) bool { + if a.EndsAt.IsZero() { + return false + } + return !a.EndsAt.After(ts) +} + +// Status returns the status of the alert. +func (a *Alert) Status() AlertStatus { + if a.Resolved() { + return AlertResolved + } + return AlertFiring +} + +// Validate checks whether the alert data is inconsistent. 
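// Illustrative sketch, not part of the vendored file: constructing and
// validating an Alert. The label values, annotation text, and times are
// hypothetical.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

func main() {
	a := &model.Alert{
		Labels:      model.LabelSet{"alertname": "HighErrorRate", "severity": "page"},
		Annotations: model.LabelSet{"summary": "error rate above threshold"},
		StartsAt:    time.Now().Add(-10 * time.Minute),
		// EndsAt left zero: the alert is still firing.
	}
	if err := a.Validate(); err != nil {
		fmt.Println("invalid alert:", err)
		return
	}
	fmt.Println(a.Name(), a.Status(), a.Fingerprint())
}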
+func (a *Alert) Validate() error { + if a.StartsAt.IsZero() { + return fmt.Errorf("start time missing") + } + if !a.EndsAt.IsZero() && a.EndsAt.Before(a.StartsAt) { + return fmt.Errorf("start time must be before end time") + } + if err := a.Labels.Validate(); err != nil { + return fmt.Errorf("invalid label set: %s", err) + } + if len(a.Labels) == 0 { + return fmt.Errorf("at least one label pair required") + } + if err := a.Annotations.Validate(); err != nil { + return fmt.Errorf("invalid annotations: %s", err) + } + return nil +} + +// Alert is a list of alerts that can be sorted in chronological order. +type Alerts []*Alert + +func (as Alerts) Len() int { return len(as) } +func (as Alerts) Swap(i, j int) { as[i], as[j] = as[j], as[i] } + +func (as Alerts) Less(i, j int) bool { + if as[i].StartsAt.Before(as[j].StartsAt) { + return true + } + if as[i].EndsAt.Before(as[j].EndsAt) { + return true + } + return as[i].Fingerprint() < as[j].Fingerprint() +} + +// HasFiring returns true iff one of the alerts is not resolved. +func (as Alerts) HasFiring() bool { + for _, a := range as { + if !a.Resolved() { + return true + } + } + return false +} + +// Status returns StatusFiring iff at least one of the alerts is firing. +func (as Alerts) Status() AlertStatus { + if as.HasFiring() { + return AlertFiring + } + return AlertResolved +} diff --git a/vendor/github.com/prometheus/common/model/fingerprinting.go b/vendor/github.com/prometheus/common/model/fingerprinting.go new file mode 100644 index 00000000..fc4de410 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/fingerprinting.go @@ -0,0 +1,105 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "fmt" + "strconv" +) + +// Fingerprint provides a hash-capable representation of a Metric. +// For our purposes, FNV-1A 64-bit is used. +type Fingerprint uint64 + +// FingerprintFromString transforms a string representation into a Fingerprint. +func FingerprintFromString(s string) (Fingerprint, error) { + num, err := strconv.ParseUint(s, 16, 64) + return Fingerprint(num), err +} + +// ParseFingerprint parses the input string into a fingerprint. +func ParseFingerprint(s string) (Fingerprint, error) { + num, err := strconv.ParseUint(s, 16, 64) + if err != nil { + return 0, err + } + return Fingerprint(num), nil +} + +func (f Fingerprint) String() string { + return fmt.Sprintf("%016x", uint64(f)) +} + +// Fingerprints represents a collection of Fingerprint subject to a given +// natural sorting scheme. It implements sort.Interface. +type Fingerprints []Fingerprint + +// Len implements sort.Interface. +func (f Fingerprints) Len() int { + return len(f) +} + +// Less implements sort.Interface. +func (f Fingerprints) Less(i, j int) bool { + return f[i] < f[j] +} + +// Swap implements sort.Interface. +func (f Fingerprints) Swap(i, j int) { + f[i], f[j] = f[j], f[i] +} + +// FingerprintSet is a set of Fingerprints. 
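// Illustrative sketch, not part of the vendored file: fingerprints as set
// members. The label sets are hypothetical; FingerprintSet and its methods are
// defined just below.

package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	a := model.LabelSet{"job": "api", "instance": "host-1:9090"}.Fingerprint()
	b := model.LabelSet{"job": "api", "instance": "host-2:9090"}.Fingerprint()

	s1 := model.FingerprintSet{a: {}, b: {}}
	s2 := model.FingerprintSet{a: {}}

	// Prints "false 1": the sets differ, and they share exactly one member.
	fmt.Println(s1.Equal(s2), len(s1.Intersection(s2)))
}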
+type FingerprintSet map[Fingerprint]struct{} + +// Equal returns true if both sets contain the same elements (and not more). +func (s FingerprintSet) Equal(o FingerprintSet) bool { + if len(s) != len(o) { + return false + } + + for k := range s { + if _, ok := o[k]; !ok { + return false + } + } + + return true +} + +// Intersection returns the elements contained in both sets. +func (s FingerprintSet) Intersection(o FingerprintSet) FingerprintSet { + myLength, otherLength := len(s), len(o) + if myLength == 0 || otherLength == 0 { + return FingerprintSet{} + } + + subSet := s + superSet := o + + if otherLength < myLength { + subSet = o + superSet = s + } + + out := FingerprintSet{} + + for k := range subSet { + if _, ok := superSet[k]; ok { + out[k] = struct{}{} + } + } + + return out +} diff --git a/vendor/github.com/prometheus/common/model/fnv.go b/vendor/github.com/prometheus/common/model/fnv.go new file mode 100644 index 00000000..038fc1c9 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/fnv.go @@ -0,0 +1,42 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +// Inline and byte-free variant of hash/fnv's fnv64a. + +const ( + offset64 = 14695981039346656037 + prime64 = 1099511628211 +) + +// hashNew initializies a new fnv64a hash value. +func hashNew() uint64 { + return offset64 +} + +// hashAdd adds a string to a fnv64a hash value, returning the updated hash. +func hashAdd(h uint64, s string) uint64 { + for i := 0; i < len(s); i++ { + h ^= uint64(s[i]) + h *= prime64 + } + return h +} + +// hashAddByte adds a byte to a fnv64a hash value, returning the updated hash. +func hashAddByte(h uint64, b byte) uint64 { + h ^= uint64(b) + h *= prime64 + return h +} diff --git a/vendor/github.com/prometheus/common/model/labels.go b/vendor/github.com/prometheus/common/model/labels.go new file mode 100644 index 00000000..41051a01 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/labels.go @@ -0,0 +1,210 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "encoding/json" + "fmt" + "regexp" + "strings" + "unicode/utf8" +) + +const ( + // AlertNameLabel is the name of the label containing the an alert's name. + AlertNameLabel = "alertname" + + // ExportedLabelPrefix is the prefix to prepend to the label names present in + // exported metrics if a label of the same name is added by the server. 
+ ExportedLabelPrefix = "exported_" + + // MetricNameLabel is the label name indicating the metric name of a + // timeseries. + MetricNameLabel = "__name__" + + // SchemeLabel is the name of the label that holds the scheme on which to + // scrape a target. + SchemeLabel = "__scheme__" + + // AddressLabel is the name of the label that holds the address of + // a scrape target. + AddressLabel = "__address__" + + // MetricsPathLabel is the name of the label that holds the path on which to + // scrape a target. + MetricsPathLabel = "__metrics_path__" + + // ReservedLabelPrefix is a prefix which is not legal in user-supplied + // label names. + ReservedLabelPrefix = "__" + + // MetaLabelPrefix is a prefix for labels that provide meta information. + // Labels with this prefix are used for intermediate label processing and + // will not be attached to time series. + MetaLabelPrefix = "__meta_" + + // TmpLabelPrefix is a prefix for temporary labels as part of relabelling. + // Labels with this prefix are used for intermediate label processing and + // will not be attached to time series. This is reserved for use in + // Prometheus configuration files by users. + TmpLabelPrefix = "__tmp_" + + // ParamLabelPrefix is a prefix for labels that provide URL parameters + // used to scrape a target. + ParamLabelPrefix = "__param_" + + // JobLabel is the label name indicating the job from which a timeseries + // was scraped. + JobLabel = "job" + + // InstanceLabel is the label name used for the instance label. + InstanceLabel = "instance" + + // BucketLabel is used for the label that defines the upper bound of a + // bucket of a histogram ("le" -> "less or equal"). + BucketLabel = "le" + + // QuantileLabel is used for the label that defines the quantile in a + // summary. + QuantileLabel = "quantile" +) + +// LabelNameRE is a regular expression matching valid label names. Note that the +// IsValid method of LabelName performs the same check but faster than a match +// with this regular expression. +var LabelNameRE = regexp.MustCompile("^[a-zA-Z_][a-zA-Z0-9_]*$") + +// A LabelName is a key for a LabelSet or Metric. It has a value associated +// therewith. +type LabelName string + +// IsValid is true iff the label name matches the pattern of LabelNameRE. This +// method, however, does not use LabelNameRE for the check but a much faster +// hardcoded implementation. +func (ln LabelName) IsValid() bool { + if len(ln) == 0 { + return false + } + for i, b := range ln { + if !((b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || b == '_' || (b >= '0' && b <= '9' && i > 0)) { + return false + } + } + return true +} + +// UnmarshalYAML implements the yaml.Unmarshaler interface. +func (ln *LabelName) UnmarshalYAML(unmarshal func(interface{}) error) error { + var s string + if err := unmarshal(&s); err != nil { + return err + } + if !LabelName(s).IsValid() { + return fmt.Errorf("%q is not a valid label name", s) + } + *ln = LabelName(s) + return nil +} + +// UnmarshalJSON implements the json.Unmarshaler interface. +func (ln *LabelName) UnmarshalJSON(b []byte) error { + var s string + if err := json.Unmarshal(b, &s); err != nil { + return err + } + if !LabelName(s).IsValid() { + return fmt.Errorf("%q is not a valid label name", s) + } + *ln = LabelName(s) + return nil +} + +// LabelNames is a sortable LabelName slice. In implements sort.Interface. 
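// Illustrative sketch, not part of the vendored file: checking label-name
// validity and the reserved "__" prefix. The example names are made up.

package main

import (
	"fmt"
	"strings"

	"github.com/prometheus/common/model"
)

func main() {
	for _, name := range []model.LabelName{"method", "__name__", "0bad", "http-status"} {
		reserved := strings.HasPrefix(string(name), model.ReservedLabelPrefix)
		fmt.Printf("%-12s valid=%-5v reserved=%v\n", name, name.IsValid(), reserved)
	}
}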
+type LabelNames []LabelName + +func (l LabelNames) Len() int { + return len(l) +} + +func (l LabelNames) Less(i, j int) bool { + return l[i] < l[j] +} + +func (l LabelNames) Swap(i, j int) { + l[i], l[j] = l[j], l[i] +} + +func (l LabelNames) String() string { + labelStrings := make([]string, 0, len(l)) + for _, label := range l { + labelStrings = append(labelStrings, string(label)) + } + return strings.Join(labelStrings, ", ") +} + +// A LabelValue is an associated value for a LabelName. +type LabelValue string + +// IsValid returns true iff the string is a valid UTF8. +func (lv LabelValue) IsValid() bool { + return utf8.ValidString(string(lv)) +} + +// LabelValues is a sortable LabelValue slice. It implements sort.Interface. +type LabelValues []LabelValue + +func (l LabelValues) Len() int { + return len(l) +} + +func (l LabelValues) Less(i, j int) bool { + return string(l[i]) < string(l[j]) +} + +func (l LabelValues) Swap(i, j int) { + l[i], l[j] = l[j], l[i] +} + +// LabelPair pairs a name with a value. +type LabelPair struct { + Name LabelName + Value LabelValue +} + +// LabelPairs is a sortable slice of LabelPair pointers. It implements +// sort.Interface. +type LabelPairs []*LabelPair + +func (l LabelPairs) Len() int { + return len(l) +} + +func (l LabelPairs) Less(i, j int) bool { + switch { + case l[i].Name > l[j].Name: + return false + case l[i].Name < l[j].Name: + return true + case l[i].Value > l[j].Value: + return false + case l[i].Value < l[j].Value: + return true + default: + return false + } +} + +func (l LabelPairs) Swap(i, j int) { + l[i], l[j] = l[j], l[i] +} diff --git a/vendor/github.com/prometheus/common/model/labelset.go b/vendor/github.com/prometheus/common/model/labelset.go new file mode 100644 index 00000000..6eda08a7 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/labelset.go @@ -0,0 +1,169 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "encoding/json" + "fmt" + "sort" + "strings" +) + +// A LabelSet is a collection of LabelName and LabelValue pairs. The LabelSet +// may be fully-qualified down to the point where it may resolve to a single +// Metric in the data store or not. All operations that occur within the realm +// of a LabelSet can emit a vector of Metric entities to which the LabelSet may +// match. +type LabelSet map[LabelName]LabelValue + +// Validate checks whether all names and values in the label set +// are valid. +func (ls LabelSet) Validate() error { + for ln, lv := range ls { + if !ln.IsValid() { + return fmt.Errorf("invalid name %q", ln) + } + if !lv.IsValid() { + return fmt.Errorf("invalid value %q", lv) + } + } + return nil +} + +// Equal returns true iff both label sets have exactly the same key/value pairs. 
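// Illustrative sketch, not part of the vendored file: merging label sets and
// fingerprinting the result. Label names and values are hypothetical.

package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	base := model.LabelSet{"job": "api", "instance": "host-1:9090"}
	extra := model.LabelSet{"env": "prod", "instance": "host-2:9090"}

	// Merge is non-destructive; on conflicting names the value from extra wins.
	merged := base.Merge(extra)
	fmt.Println(merged)
	fmt.Println(merged.Fingerprint(), merged.FastFingerprint())
}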
+func (ls LabelSet) Equal(o LabelSet) bool { + if len(ls) != len(o) { + return false + } + for ln, lv := range ls { + olv, ok := o[ln] + if !ok { + return false + } + if olv != lv { + return false + } + } + return true +} + +// Before compares the metrics, using the following criteria: +// +// If m has fewer labels than o, it is before o. If it has more, it is not. +// +// If the number of labels is the same, the superset of all label names is +// sorted alphanumerically. The first differing label pair found in that order +// determines the outcome: If the label does not exist at all in m, then m is +// before o, and vice versa. Otherwise the label value is compared +// alphanumerically. +// +// If m and o are equal, the method returns false. +func (ls LabelSet) Before(o LabelSet) bool { + if len(ls) < len(o) { + return true + } + if len(ls) > len(o) { + return false + } + + lns := make(LabelNames, 0, len(ls)+len(o)) + for ln := range ls { + lns = append(lns, ln) + } + for ln := range o { + lns = append(lns, ln) + } + // It's probably not worth it to de-dup lns. + sort.Sort(lns) + for _, ln := range lns { + mlv, ok := ls[ln] + if !ok { + return true + } + olv, ok := o[ln] + if !ok { + return false + } + if mlv < olv { + return true + } + if mlv > olv { + return false + } + } + return false +} + +// Clone returns a copy of the label set. +func (ls LabelSet) Clone() LabelSet { + lsn := make(LabelSet, len(ls)) + for ln, lv := range ls { + lsn[ln] = lv + } + return lsn +} + +// Merge is a helper function to non-destructively merge two label sets. +func (l LabelSet) Merge(other LabelSet) LabelSet { + result := make(LabelSet, len(l)) + + for k, v := range l { + result[k] = v + } + + for k, v := range other { + result[k] = v + } + + return result +} + +func (l LabelSet) String() string { + lstrs := make([]string, 0, len(l)) + for l, v := range l { + lstrs = append(lstrs, fmt.Sprintf("%s=%q", l, v)) + } + + sort.Strings(lstrs) + return fmt.Sprintf("{%s}", strings.Join(lstrs, ", ")) +} + +// Fingerprint returns the LabelSet's fingerprint. +func (ls LabelSet) Fingerprint() Fingerprint { + return labelSetToFingerprint(ls) +} + +// FastFingerprint returns the LabelSet's Fingerprint calculated by a faster hashing +// algorithm, which is, however, more susceptible to hash collisions. +func (ls LabelSet) FastFingerprint() Fingerprint { + return labelSetToFastFingerprint(ls) +} + +// UnmarshalJSON implements the json.Unmarshaler interface. +func (l *LabelSet) UnmarshalJSON(b []byte) error { + var m map[LabelName]LabelValue + if err := json.Unmarshal(b, &m); err != nil { + return err + } + // encoding/json only unmarshals maps of the form map[string]T. It treats + // LabelName as a string and does not call its UnmarshalJSON method. + // Thus, we have to replicate the behavior here. + for ln := range m { + if !ln.IsValid() { + return fmt.Errorf("%q is not a valid label name", ln) + } + } + *l = LabelSet(m) + return nil +} diff --git a/vendor/github.com/prometheus/common/model/metric.go b/vendor/github.com/prometheus/common/model/metric.go new file mode 100644 index 00000000..f7250909 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/metric.go @@ -0,0 +1,103 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "fmt" + "regexp" + "sort" + "strings" +) + +var ( + separator = []byte{0} + // MetricNameRE is a regular expression matching valid metric + // names. Note that the IsValidMetricName function performs the same + // check but faster than a match with this regular expression. + MetricNameRE = regexp.MustCompile(`^[a-zA-Z_:][a-zA-Z0-9_:]*$`) +) + +// A Metric is similar to a LabelSet, but the key difference is that a Metric is +// a singleton and refers to one and only one stream of samples. +type Metric LabelSet + +// Equal compares the metrics. +func (m Metric) Equal(o Metric) bool { + return LabelSet(m).Equal(LabelSet(o)) +} + +// Before compares the metrics' underlying label sets. +func (m Metric) Before(o Metric) bool { + return LabelSet(m).Before(LabelSet(o)) +} + +// Clone returns a copy of the Metric. +func (m Metric) Clone() Metric { + clone := make(Metric, len(m)) + for k, v := range m { + clone[k] = v + } + return clone +} + +func (m Metric) String() string { + metricName, hasName := m[MetricNameLabel] + numLabels := len(m) - 1 + if !hasName { + numLabels = len(m) + } + labelStrings := make([]string, 0, numLabels) + for label, value := range m { + if label != MetricNameLabel { + labelStrings = append(labelStrings, fmt.Sprintf("%s=%q", label, value)) + } + } + + switch numLabels { + case 0: + if hasName { + return string(metricName) + } + return "{}" + default: + sort.Strings(labelStrings) + return fmt.Sprintf("%s{%s}", metricName, strings.Join(labelStrings, ", ")) + } +} + +// Fingerprint returns a Metric's Fingerprint. +func (m Metric) Fingerprint() Fingerprint { + return LabelSet(m).Fingerprint() +} + +// FastFingerprint returns a Metric's Fingerprint calculated by a faster hashing +// algorithm, which is, however, more susceptible to hash collisions. +func (m Metric) FastFingerprint() Fingerprint { + return LabelSet(m).FastFingerprint() +} + +// IsValidMetricName returns true iff name matches the pattern of MetricNameRE. +// This function, however, does not use MetricNameRE for the check but a much +// faster hardcoded implementation. +func IsValidMetricName(n LabelValue) bool { + if len(n) == 0 { + return false + } + for i, b := range n { + if !((b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || b == '_' || b == ':' || (b >= '0' && b <= '9' && i > 0)) { + return false + } + } + return true +} diff --git a/vendor/github.com/prometheus/common/model/model.go b/vendor/github.com/prometheus/common/model/model.go new file mode 100644 index 00000000..a7b96917 --- /dev/null +++ b/vendor/github.com/prometheus/common/model/model.go @@ -0,0 +1,16 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +// Package model contains common data structures that are shared across +// Prometheus components and libraries. +package model diff --git a/vendor/github.com/prometheus/common/model/signature.go b/vendor/github.com/prometheus/common/model/signature.go new file mode 100644 index 00000000..8762b13c --- /dev/null +++ b/vendor/github.com/prometheus/common/model/signature.go @@ -0,0 +1,144 @@ +// Copyright 2014 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "sort" +) + +// SeparatorByte is a byte that cannot occur in valid UTF-8 sequences and is +// used to separate label names, label values, and other strings from each other +// when calculating their combined hash value (aka signature aka fingerprint). +const SeparatorByte byte = 255 + +var ( + // cache the signature of an empty label set. + emptyLabelSignature = hashNew() +) + +// LabelsToSignature returns a quasi-unique signature (i.e., fingerprint) for a +// given label set. (Collisions are possible but unlikely if the number of label +// sets the function is applied to is small.) +func LabelsToSignature(labels map[string]string) uint64 { + if len(labels) == 0 { + return emptyLabelSignature + } + + labelNames := make([]string, 0, len(labels)) + for labelName := range labels { + labelNames = append(labelNames, labelName) + } + sort.Strings(labelNames) + + sum := hashNew() + for _, labelName := range labelNames { + sum = hashAdd(sum, labelName) + sum = hashAddByte(sum, SeparatorByte) + sum = hashAdd(sum, labels[labelName]) + sum = hashAddByte(sum, SeparatorByte) + } + return sum +} + +// labelSetToFingerprint works exactly as LabelsToSignature but takes a LabelSet as +// parameter (rather than a label map) and returns a Fingerprint. +func labelSetToFingerprint(ls LabelSet) Fingerprint { + if len(ls) == 0 { + return Fingerprint(emptyLabelSignature) + } + + labelNames := make(LabelNames, 0, len(ls)) + for labelName := range ls { + labelNames = append(labelNames, labelName) + } + sort.Sort(labelNames) + + sum := hashNew() + for _, labelName := range labelNames { + sum = hashAdd(sum, string(labelName)) + sum = hashAddByte(sum, SeparatorByte) + sum = hashAdd(sum, string(ls[labelName])) + sum = hashAddByte(sum, SeparatorByte) + } + return Fingerprint(sum) +} + +// labelSetToFastFingerprint works similar to labelSetToFingerprint but uses a +// faster and less allocation-heavy hash function, which is more susceptible to +// create hash collisions. Therefore, collision detection should be applied. 
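// Illustrative sketch, not part of the vendored file: computing signatures over
// a plain label map and over a Metric restricted to selected label names. The
// metric name and labels are hypothetical.

package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	labels := map[string]string{
		"__name__": "http_requests_total",
		"method":   "GET",
		"code":     "200",
	}
	fmt.Printf("%016x\n", model.LabelsToSignature(labels))

	m := model.Metric{"__name__": "http_requests_total", "method": "GET", "code": "200"}
	// Only the listed label names take part in the signature.
	fmt.Printf("%016x\n", model.SignatureForLabels(m, "method", "code"))
}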
+func labelSetToFastFingerprint(ls LabelSet) Fingerprint { + if len(ls) == 0 { + return Fingerprint(emptyLabelSignature) + } + + var result uint64 + for labelName, labelValue := range ls { + sum := hashNew() + sum = hashAdd(sum, string(labelName)) + sum = hashAddByte(sum, SeparatorByte) + sum = hashAdd(sum, string(labelValue)) + result ^= sum + } + return Fingerprint(result) +} + +// SignatureForLabels works like LabelsToSignature but takes a Metric as +// parameter (rather than a label map) and only includes the labels with the +// specified LabelNames into the signature calculation. The labels passed in +// will be sorted by this function. +func SignatureForLabels(m Metric, labels ...LabelName) uint64 { + if len(labels) == 0 { + return emptyLabelSignature + } + + sort.Sort(LabelNames(labels)) + + sum := hashNew() + for _, label := range labels { + sum = hashAdd(sum, string(label)) + sum = hashAddByte(sum, SeparatorByte) + sum = hashAdd(sum, string(m[label])) + sum = hashAddByte(sum, SeparatorByte) + } + return sum +} + +// SignatureWithoutLabels works like LabelsToSignature but takes a Metric as +// parameter (rather than a label map) and excludes the labels with any of the +// specified LabelNames from the signature calculation. +func SignatureWithoutLabels(m Metric, labels map[LabelName]struct{}) uint64 { + if len(m) == 0 { + return emptyLabelSignature + } + + labelNames := make(LabelNames, 0, len(m)) + for labelName := range m { + if _, exclude := labels[labelName]; !exclude { + labelNames = append(labelNames, labelName) + } + } + if len(labelNames) == 0 { + return emptyLabelSignature + } + sort.Sort(labelNames) + + sum := hashNew() + for _, labelName := range labelNames { + sum = hashAdd(sum, string(labelName)) + sum = hashAddByte(sum, SeparatorByte) + sum = hashAdd(sum, string(m[labelName])) + sum = hashAddByte(sum, SeparatorByte) + } + return sum +} diff --git a/vendor/github.com/prometheus/common/model/silence.go b/vendor/github.com/prometheus/common/model/silence.go new file mode 100644 index 00000000..bb99889d --- /dev/null +++ b/vendor/github.com/prometheus/common/model/silence.go @@ -0,0 +1,106 @@ +// Copyright 2015 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "encoding/json" + "fmt" + "regexp" + "time" +) + +// Matcher describes a matches the value of a given label. +type Matcher struct { + Name LabelName `json:"name"` + Value string `json:"value"` + IsRegex bool `json:"isRegex"` +} + +func (m *Matcher) UnmarshalJSON(b []byte) error { + type plain Matcher + if err := json.Unmarshal(b, (*plain)(m)); err != nil { + return err + } + + if len(m.Name) == 0 { + return fmt.Errorf("label name in matcher must not be empty") + } + if m.IsRegex { + if _, err := regexp.Compile(m.Value); err != nil { + return err + } + } + return nil +} + +// Validate returns true iff all fields of the matcher have valid values. 
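// Illustrative sketch, not part of the vendored file: building a Silence (the
// type is defined further down in this file) and validating its matchers. All
// field values are hypothetical.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

func main() {
	s := &model.Silence{
		Matchers: []*model.Matcher{
			{Name: "alertname", Value: "HighErrorRate"},
			{Name: "instance", Value: "host-[0-9]+", IsRegex: true},
		},
		StartsAt:  time.Now(),
		EndsAt:    time.Now().Add(2 * time.Hour),
		CreatedAt: time.Now(),
		CreatedBy: "oncall@example.com",
		Comment:   "planned maintenance",
	}
	// Prints <nil> when every matcher and every required field passes the checks.
	fmt.Println(s.Validate())
}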
+func (m *Matcher) Validate() error { + if !m.Name.IsValid() { + return fmt.Errorf("invalid name %q", m.Name) + } + if m.IsRegex { + if _, err := regexp.Compile(m.Value); err != nil { + return fmt.Errorf("invalid regular expression %q", m.Value) + } + } else if !LabelValue(m.Value).IsValid() || len(m.Value) == 0 { + return fmt.Errorf("invalid value %q", m.Value) + } + return nil +} + +// Silence defines the representation of a silence definition in the Prometheus +// eco-system. +type Silence struct { + ID uint64 `json:"id,omitempty"` + + Matchers []*Matcher `json:"matchers"` + + StartsAt time.Time `json:"startsAt"` + EndsAt time.Time `json:"endsAt"` + + CreatedAt time.Time `json:"createdAt,omitempty"` + CreatedBy string `json:"createdBy"` + Comment string `json:"comment,omitempty"` +} + +// Validate returns true iff all fields of the silence have valid values. +func (s *Silence) Validate() error { + if len(s.Matchers) == 0 { + return fmt.Errorf("at least one matcher required") + } + for _, m := range s.Matchers { + if err := m.Validate(); err != nil { + return fmt.Errorf("invalid matcher: %s", err) + } + } + if s.StartsAt.IsZero() { + return fmt.Errorf("start time missing") + } + if s.EndsAt.IsZero() { + return fmt.Errorf("end time missing") + } + if s.EndsAt.Before(s.StartsAt) { + return fmt.Errorf("start time must be before end time") + } + if s.CreatedBy == "" { + return fmt.Errorf("creator information missing") + } + if s.Comment == "" { + return fmt.Errorf("comment missing") + } + if s.CreatedAt.IsZero() { + return fmt.Errorf("creation timestamp missing") + } + return nil +} diff --git a/vendor/github.com/prometheus/common/model/time.go b/vendor/github.com/prometheus/common/model/time.go new file mode 100644 index 00000000..74ed5a9f --- /dev/null +++ b/vendor/github.com/prometheus/common/model/time.go @@ -0,0 +1,264 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "fmt" + "math" + "regexp" + "strconv" + "strings" + "time" +) + +const ( + // MinimumTick is the minimum supported time resolution. This has to be + // at least time.Second in order for the code below to work. + minimumTick = time.Millisecond + // second is the Time duration equivalent to one second. + second = int64(time.Second / minimumTick) + // The number of nanoseconds per minimum tick. + nanosPerTick = int64(minimumTick / time.Nanosecond) + + // Earliest is the earliest Time representable. Handy for + // initializing a high watermark. + Earliest = Time(math.MinInt64) + // Latest is the latest Time representable. Handy for initializing + // a low watermark. + Latest = Time(math.MaxInt64) +) + +// Time is the number of milliseconds since the epoch +// (1970-01-01 00:00 UTC) excluding leap seconds. +type Time int64 + +// Interval describes and interval between two timestamps. +type Interval struct { + Start, End Time +} + +// Now returns the current time as a Time. 
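// Illustrative sketch, not part of the vendored file: basic arithmetic on the
// millisecond-resolution Time type. The Unix timestamp is hypothetical.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

func main() {
	t := model.TimeFromUnix(1700000000)
	later := t.Add(90 * time.Second)

	fmt.Println(later.Sub(t))    // 1m30s
	fmt.Println(later.Unix())    // 1700000090
	fmt.Println(t.Before(later)) // true
}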
+func Now() Time { + return TimeFromUnixNano(time.Now().UnixNano()) +} + +// TimeFromUnix returns the Time equivalent to the Unix Time t +// provided in seconds. +func TimeFromUnix(t int64) Time { + return Time(t * second) +} + +// TimeFromUnixNano returns the Time equivalent to the Unix Time +// t provided in nanoseconds. +func TimeFromUnixNano(t int64) Time { + return Time(t / nanosPerTick) +} + +// Equal reports whether two Times represent the same instant. +func (t Time) Equal(o Time) bool { + return t == o +} + +// Before reports whether the Time t is before o. +func (t Time) Before(o Time) bool { + return t < o +} + +// After reports whether the Time t is after o. +func (t Time) After(o Time) bool { + return t > o +} + +// Add returns the Time t + d. +func (t Time) Add(d time.Duration) Time { + return t + Time(d/minimumTick) +} + +// Sub returns the Duration t - o. +func (t Time) Sub(o Time) time.Duration { + return time.Duration(t-o) * minimumTick +} + +// Time returns the time.Time representation of t. +func (t Time) Time() time.Time { + return time.Unix(int64(t)/second, (int64(t)%second)*nanosPerTick) +} + +// Unix returns t as a Unix time, the number of seconds elapsed +// since January 1, 1970 UTC. +func (t Time) Unix() int64 { + return int64(t) / second +} + +// UnixNano returns t as a Unix time, the number of nanoseconds elapsed +// since January 1, 1970 UTC. +func (t Time) UnixNano() int64 { + return int64(t) * nanosPerTick +} + +// The number of digits after the dot. +var dotPrecision = int(math.Log10(float64(second))) + +// String returns a string representation of the Time. +func (t Time) String() string { + return strconv.FormatFloat(float64(t)/float64(second), 'f', -1, 64) +} + +// MarshalJSON implements the json.Marshaler interface. +func (t Time) MarshalJSON() ([]byte, error) { + return []byte(t.String()), nil +} + +// UnmarshalJSON implements the json.Unmarshaler interface. +func (t *Time) UnmarshalJSON(b []byte) error { + p := strings.Split(string(b), ".") + switch len(p) { + case 1: + v, err := strconv.ParseInt(string(p[0]), 10, 64) + if err != nil { + return err + } + *t = Time(v * second) + + case 2: + v, err := strconv.ParseInt(string(p[0]), 10, 64) + if err != nil { + return err + } + v *= second + + prec := dotPrecision - len(p[1]) + if prec < 0 { + p[1] = p[1][:dotPrecision] + } else if prec > 0 { + p[1] = p[1] + strings.Repeat("0", prec) + } + + va, err := strconv.ParseInt(p[1], 10, 32) + if err != nil { + return err + } + + *t = Time(v + va) + + default: + return fmt.Errorf("invalid time %q", string(b)) + } + return nil +} + +// Duration wraps time.Duration. It is used to parse the custom duration format +// from YAML. +// This type should not propagate beyond the scope of input/output processing. +type Duration time.Duration + +// Set implements pflag/flag.Value +func (d *Duration) Set(s string) error { + var err error + *d, err = ParseDuration(s) + return err +} + +// Type implements pflag.Value +func (d *Duration) Type() string { + return "duration" +} + +var durationRE = regexp.MustCompile("^([0-9]+)(y|w|d|h|m|s|ms)$") + +// ParseDuration parses a string into a time.Duration, assuming that a year +// always has 365d, a week always has 7d, and a day always has 24h. 
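// Illustrative sketch, not part of the vendored file: this ParseDuration only
// accepts a single <number><unit> pair, unlike time.ParseDuration. The inputs
// are made up.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

func main() {
	d, err := model.ParseDuration("15m")
	if err != nil {
		panic(err)
	}
	fmt.Println(time.Duration(d)) // 15m0s
	fmt.Println(d.String())       // 15m

	if _, err := model.ParseDuration("1h30m"); err != nil {
		fmt.Println(err) // compound values are rejected by this format
	}
}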
+func ParseDuration(durationStr string) (Duration, error) { + matches := durationRE.FindStringSubmatch(durationStr) + if len(matches) != 3 { + return 0, fmt.Errorf("not a valid duration string: %q", durationStr) + } + var ( + n, _ = strconv.Atoi(matches[1]) + dur = time.Duration(n) * time.Millisecond + ) + switch unit := matches[2]; unit { + case "y": + dur *= 1000 * 60 * 60 * 24 * 365 + case "w": + dur *= 1000 * 60 * 60 * 24 * 7 + case "d": + dur *= 1000 * 60 * 60 * 24 + case "h": + dur *= 1000 * 60 * 60 + case "m": + dur *= 1000 * 60 + case "s": + dur *= 1000 + case "ms": + // Value already correct + default: + return 0, fmt.Errorf("invalid time unit in duration string: %q", unit) + } + return Duration(dur), nil +} + +func (d Duration) String() string { + var ( + ms = int64(time.Duration(d) / time.Millisecond) + unit = "ms" + ) + if ms == 0 { + return "0s" + } + factors := map[string]int64{ + "y": 1000 * 60 * 60 * 24 * 365, + "w": 1000 * 60 * 60 * 24 * 7, + "d": 1000 * 60 * 60 * 24, + "h": 1000 * 60 * 60, + "m": 1000 * 60, + "s": 1000, + "ms": 1, + } + + switch int64(0) { + case ms % factors["y"]: + unit = "y" + case ms % factors["w"]: + unit = "w" + case ms % factors["d"]: + unit = "d" + case ms % factors["h"]: + unit = "h" + case ms % factors["m"]: + unit = "m" + case ms % factors["s"]: + unit = "s" + } + return fmt.Sprintf("%v%v", ms/factors[unit], unit) +} + +// MarshalYAML implements the yaml.Marshaler interface. +func (d Duration) MarshalYAML() (interface{}, error) { + return d.String(), nil +} + +// UnmarshalYAML implements the yaml.Unmarshaler interface. +func (d *Duration) UnmarshalYAML(unmarshal func(interface{}) error) error { + var s string + if err := unmarshal(&s); err != nil { + return err + } + dur, err := ParseDuration(s) + if err != nil { + return err + } + *d = dur + return nil +} diff --git a/vendor/github.com/prometheus/common/model/value.go b/vendor/github.com/prometheus/common/model/value.go new file mode 100644 index 00000000..c9d8fb1a --- /dev/null +++ b/vendor/github.com/prometheus/common/model/value.go @@ -0,0 +1,416 @@ +// Copyright 2013 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "encoding/json" + "fmt" + "math" + "sort" + "strconv" + "strings" +) + +var ( + // ZeroSamplePair is the pseudo zero-value of SamplePair used to signal a + // non-existing sample pair. It is a SamplePair with timestamp Earliest and + // value 0.0. Note that the natural zero value of SamplePair has a timestamp + // of 0, which is possible to appear in a real SamplePair and thus not + // suitable to signal a non-existing SamplePair. + ZeroSamplePair = SamplePair{Timestamp: Earliest} + + // ZeroSample is the pseudo zero-value of Sample used to signal a + // non-existing sample. It is a Sample with timestamp Earliest, value 0.0, + // and metric nil. Note that the natural zero value of Sample has a timestamp + // of 0, which is possible to appear in a real Sample and thus not suitable + // to signal a non-existing Sample. 
+ ZeroSample = Sample{Timestamp: Earliest} +) + +// A SampleValue is a representation of a value for a given sample at a given +// time. +type SampleValue float64 + +// MarshalJSON implements json.Marshaler. +func (v SampleValue) MarshalJSON() ([]byte, error) { + return json.Marshal(v.String()) +} + +// UnmarshalJSON implements json.Unmarshaler. +func (v *SampleValue) UnmarshalJSON(b []byte) error { + if len(b) < 2 || b[0] != '"' || b[len(b)-1] != '"' { + return fmt.Errorf("sample value must be a quoted string") + } + f, err := strconv.ParseFloat(string(b[1:len(b)-1]), 64) + if err != nil { + return err + } + *v = SampleValue(f) + return nil +} + +// Equal returns true if the value of v and o is equal or if both are NaN. Note +// that v==o is false if both are NaN. If you want the conventional float +// behavior, use == to compare two SampleValues. +func (v SampleValue) Equal(o SampleValue) bool { + if v == o { + return true + } + return math.IsNaN(float64(v)) && math.IsNaN(float64(o)) +} + +func (v SampleValue) String() string { + return strconv.FormatFloat(float64(v), 'f', -1, 64) +} + +// SamplePair pairs a SampleValue with a Timestamp. +type SamplePair struct { + Timestamp Time + Value SampleValue +} + +// MarshalJSON implements json.Marshaler. +func (s SamplePair) MarshalJSON() ([]byte, error) { + t, err := json.Marshal(s.Timestamp) + if err != nil { + return nil, err + } + v, err := json.Marshal(s.Value) + if err != nil { + return nil, err + } + return []byte(fmt.Sprintf("[%s,%s]", t, v)), nil +} + +// UnmarshalJSON implements json.Unmarshaler. +func (s *SamplePair) UnmarshalJSON(b []byte) error { + v := [...]json.Unmarshaler{&s.Timestamp, &s.Value} + return json.Unmarshal(b, &v) +} + +// Equal returns true if this SamplePair and o have equal Values and equal +// Timestamps. The semantics of Value equality is defined by SampleValue.Equal. +func (s *SamplePair) Equal(o *SamplePair) bool { + return s == o || (s.Value.Equal(o.Value) && s.Timestamp.Equal(o.Timestamp)) +} + +func (s SamplePair) String() string { + return fmt.Sprintf("%s @[%s]", s.Value, s.Timestamp) +} + +// Sample is a sample pair associated with a metric. +type Sample struct { + Metric Metric `json:"metric"` + Value SampleValue `json:"value"` + Timestamp Time `json:"timestamp"` +} + +// Equal compares first the metrics, then the timestamp, then the value. The +// semantics of value equality is defined by SampleValue.Equal. +func (s *Sample) Equal(o *Sample) bool { + if s == o { + return true + } + + if !s.Metric.Equal(o.Metric) { + return false + } + if !s.Timestamp.Equal(o.Timestamp) { + return false + } + + return s.Value.Equal(o.Value) +} + +func (s Sample) String() string { + return fmt.Sprintf("%s => %s", s.Metric, SamplePair{ + Timestamp: s.Timestamp, + Value: s.Value, + }) +} + +// MarshalJSON implements json.Marshaler. +func (s Sample) MarshalJSON() ([]byte, error) { + v := struct { + Metric Metric `json:"metric"` + Value SamplePair `json:"value"` + }{ + Metric: s.Metric, + Value: SamplePair{ + Timestamp: s.Timestamp, + Value: s.Value, + }, + } + + return json.Marshal(&v) +} + +// UnmarshalJSON implements json.Unmarshaler. 
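// Illustrative sketch, not part of the vendored file: a Sample encodes its
// value as a [timestamp, "stringified value"] pair, so a round trip through
// encoding/json preserves it. The metric and timestamp are hypothetical.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	s := &model.Sample{
		Metric:    model.Metric{"__name__": "http_requests_total", "method": "GET"},
		Value:     42,
		Timestamp: model.TimeFromUnix(1700000000),
	}

	b, err := json.Marshal(s)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))

	var back model.Sample
	if err := json.Unmarshal(b, &back); err != nil {
		panic(err)
	}
	fmt.Println(back.Equal(s)) // true
}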
+func (s *Sample) UnmarshalJSON(b []byte) error { + v := struct { + Metric Metric `json:"metric"` + Value SamplePair `json:"value"` + }{ + Metric: s.Metric, + Value: SamplePair{ + Timestamp: s.Timestamp, + Value: s.Value, + }, + } + + if err := json.Unmarshal(b, &v); err != nil { + return err + } + + s.Metric = v.Metric + s.Timestamp = v.Value.Timestamp + s.Value = v.Value.Value + + return nil +} + +// Samples is a sortable Sample slice. It implements sort.Interface. +type Samples []*Sample + +func (s Samples) Len() int { + return len(s) +} + +// Less compares first the metrics, then the timestamp. +func (s Samples) Less(i, j int) bool { + switch { + case s[i].Metric.Before(s[j].Metric): + return true + case s[j].Metric.Before(s[i].Metric): + return false + case s[i].Timestamp.Before(s[j].Timestamp): + return true + default: + return false + } +} + +func (s Samples) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +// Equal compares two sets of samples and returns true if they are equal. +func (s Samples) Equal(o Samples) bool { + if len(s) != len(o) { + return false + } + + for i, sample := range s { + if !sample.Equal(o[i]) { + return false + } + } + return true +} + +// SampleStream is a stream of Values belonging to an attached COWMetric. +type SampleStream struct { + Metric Metric `json:"metric"` + Values []SamplePair `json:"values"` +} + +func (ss SampleStream) String() string { + vals := make([]string, len(ss.Values)) + for i, v := range ss.Values { + vals[i] = v.String() + } + return fmt.Sprintf("%s =>\n%s", ss.Metric, strings.Join(vals, "\n")) +} + +// Value is a generic interface for values resulting from a query evaluation. +type Value interface { + Type() ValueType + String() string +} + +func (Matrix) Type() ValueType { return ValMatrix } +func (Vector) Type() ValueType { return ValVector } +func (*Scalar) Type() ValueType { return ValScalar } +func (*String) Type() ValueType { return ValString } + +type ValueType int + +const ( + ValNone ValueType = iota + ValScalar + ValVector + ValMatrix + ValString +) + +// MarshalJSON implements json.Marshaler. +func (et ValueType) MarshalJSON() ([]byte, error) { + return json.Marshal(et.String()) +} + +func (et *ValueType) UnmarshalJSON(b []byte) error { + var s string + if err := json.Unmarshal(b, &s); err != nil { + return err + } + switch s { + case "": + *et = ValNone + case "scalar": + *et = ValScalar + case "vector": + *et = ValVector + case "matrix": + *et = ValMatrix + case "string": + *et = ValString + default: + return fmt.Errorf("unknown value type %q", s) + } + return nil +} + +func (e ValueType) String() string { + switch e { + case ValNone: + return "" + case ValScalar: + return "scalar" + case ValVector: + return "vector" + case ValMatrix: + return "matrix" + case ValString: + return "string" + } + panic("ValueType.String: unhandled value type") +} + +// Scalar is a scalar value evaluated at the set timestamp. +type Scalar struct { + Value SampleValue `json:"value"` + Timestamp Time `json:"timestamp"` +} + +func (s Scalar) String() string { + return fmt.Sprintf("scalar: %v @[%v]", s.Value, s.Timestamp) +} + +// MarshalJSON implements json.Marshaler. +func (s Scalar) MarshalJSON() ([]byte, error) { + v := strconv.FormatFloat(float64(s.Value), 'f', -1, 64) + return json.Marshal([...]interface{}{s.Timestamp, string(v)}) +} + +// UnmarshalJSON implements json.Unmarshaler. 
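// Illustrative sketch, not part of the vendored file: Vector (defined just
// below) sorts by metric and then timestamp. The samples are made up.

package main

import (
	"fmt"
	"sort"

	"github.com/prometheus/common/model"
)

func main() {
	vec := model.Vector{
		{Metric: model.Metric{"__name__": "up", "instance": "b"}, Value: 1, Timestamp: 1000},
		{Metric: model.Metric{"__name__": "up", "instance": "a"}, Value: 0, Timestamp: 1000},
	}
	sort.Sort(vec)
	fmt.Println(vec) // the sample for instance "a" is listed first
}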
+func (s *Scalar) UnmarshalJSON(b []byte) error { + var f string + v := [...]interface{}{&s.Timestamp, &f} + + if err := json.Unmarshal(b, &v); err != nil { + return err + } + + value, err := strconv.ParseFloat(f, 64) + if err != nil { + return fmt.Errorf("error parsing sample value: %s", err) + } + s.Value = SampleValue(value) + return nil +} + +// String is a string value evaluated at the set timestamp. +type String struct { + Value string `json:"value"` + Timestamp Time `json:"timestamp"` +} + +func (s *String) String() string { + return s.Value +} + +// MarshalJSON implements json.Marshaler. +func (s String) MarshalJSON() ([]byte, error) { + return json.Marshal([]interface{}{s.Timestamp, s.Value}) +} + +// UnmarshalJSON implements json.Unmarshaler. +func (s *String) UnmarshalJSON(b []byte) error { + v := [...]interface{}{&s.Timestamp, &s.Value} + return json.Unmarshal(b, &v) +} + +// Vector is basically only an alias for Samples, but the +// contract is that in a Vector, all Samples have the same timestamp. +type Vector []*Sample + +func (vec Vector) String() string { + entries := make([]string, len(vec)) + for i, s := range vec { + entries[i] = s.String() + } + return strings.Join(entries, "\n") +} + +func (vec Vector) Len() int { return len(vec) } +func (vec Vector) Swap(i, j int) { vec[i], vec[j] = vec[j], vec[i] } + +// Less compares first the metrics, then the timestamp. +func (vec Vector) Less(i, j int) bool { + switch { + case vec[i].Metric.Before(vec[j].Metric): + return true + case vec[j].Metric.Before(vec[i].Metric): + return false + case vec[i].Timestamp.Before(vec[j].Timestamp): + return true + default: + return false + } +} + +// Equal compares two sets of samples and returns true if they are equal. +func (vec Vector) Equal(o Vector) bool { + if len(vec) != len(o) { + return false + } + + for i, sample := range vec { + if !sample.Equal(o[i]) { + return false + } + } + return true +} + +// Matrix is a list of time series. +type Matrix []*SampleStream + +func (m Matrix) Len() int { return len(m) } +func (m Matrix) Less(i, j int) bool { return m[i].Metric.Before(m[j].Metric) } +func (m Matrix) Swap(i, j int) { m[i], m[j] = m[j], m[i] } + +func (mat Matrix) String() string { + matCp := make(Matrix, len(mat)) + copy(matCp, mat) + sort.Sort(matCp) + + strs := make([]string, len(matCp)) + + for i, ss := range matCp { + strs[i] = ss.String() + } + + return strings.Join(strs, "\n") +} diff --git a/vendor/github.com/prometheus/procfs/LICENSE b/vendor/github.com/prometheus/procfs/LICENSE new file mode 100644 index 00000000..261eeb9e --- /dev/null +++ b/vendor/github.com/prometheus/procfs/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
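The value types vendored earlier in this change (Sample, Scalar, Vector, Matrix) decode Prometheus' two-element `[timestamp, "value"]` wire format through their custom UnmarshalJSON methods. A minimal sketch of that round trip, assuming the package is importable under its usual upstream path `github.com/prometheus/common/model` (the import path itself is not part of this hunk):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	// Assumed upstream import path for the vendored model package.
	"github.com/prometheus/common/model"
)

func main() {
	// One vector element in the wire format handled by Sample.UnmarshalJSON:
	// "value" is a [unix-seconds, "stringified float"] pair.
	raw := []byte(`{"metric":{"__name__":"up","job":"node"},"value":[1435781451.781,"1"]}`)

	var s model.Sample
	if err := json.Unmarshal(raw, &s); err != nil {
		log.Fatalf("decoding sample: %v", err)
	}

	// Metric, Timestamp and Value are populated from the pair.
	fmt.Println(s.Metric, "=>", s.Value, "@", s.Timestamp)

	// Re-encoding yields the same [timestamp, "value"] shape.
	out, err := json.Marshal(&s)
	if err != nil {
		log.Fatalf("encoding sample: %v", err)
	}
	fmt.Println(string(out))
}

The same pairing is used by SamplePair and Scalar, so a full query result can be decoded by switching on the ValueType defined above.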
diff --git a/vendor/github.com/prometheus/procfs/NOTICE b/vendor/github.com/prometheus/procfs/NOTICE new file mode 100644 index 00000000..53c5e9aa --- /dev/null +++ b/vendor/github.com/prometheus/procfs/NOTICE @@ -0,0 +1,7 @@ +procfs provides functions to retrieve system, kernel and process +metrics from the pseudo-filesystem proc. + +Copyright 2014-2015 The Prometheus Authors + +This product includes software developed at +SoundCloud Ltd. (http://soundcloud.com/). diff --git a/vendor/github.com/prometheus/procfs/buddyinfo.go b/vendor/github.com/prometheus/procfs/buddyinfo.go new file mode 100644 index 00000000..d3a82680 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/buddyinfo.go @@ -0,0 +1,95 @@ +// Copyright 2017 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "fmt" + "io" + "os" + "strconv" + "strings" +) + +// A BuddyInfo is the details parsed from /proc/buddyinfo. +// The data is comprised of an array of free fragments of each size. +// The sizes are 2^n*PAGE_SIZE, where n is the array index. +type BuddyInfo struct { + Node string + Zone string + Sizes []float64 +} + +// NewBuddyInfo reads the buddyinfo statistics. +func NewBuddyInfo() ([]BuddyInfo, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return nil, err + } + + return fs.NewBuddyInfo() +} + +// NewBuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem. 
+func (fs FS) NewBuddyInfo() ([]BuddyInfo, error) { + file, err := os.Open(fs.Path("buddyinfo")) + if err != nil { + return nil, err + } + defer file.Close() + + return parseBuddyInfo(file) +} + +func parseBuddyInfo(r io.Reader) ([]BuddyInfo, error) { + var ( + buddyInfo = []BuddyInfo{} + scanner = bufio.NewScanner(r) + bucketCount = -1 + ) + + for scanner.Scan() { + var err error + line := scanner.Text() + parts := strings.Fields(line) + + if len(parts) < 4 { + return nil, fmt.Errorf("invalid number of fields when parsing buddyinfo") + } + + node := strings.TrimRight(parts[1], ",") + zone := strings.TrimRight(parts[3], ",") + arraySize := len(parts[4:]) + + if bucketCount == -1 { + bucketCount = arraySize + } else { + if bucketCount != arraySize { + return nil, fmt.Errorf("mismatch in number of buddyinfo buckets, previous count %d, new count %d", bucketCount, arraySize) + } + } + + sizes := make([]float64, arraySize) + for i := 0; i < arraySize; i++ { + sizes[i], err = strconv.ParseFloat(parts[i+4], 64) + if err != nil { + return nil, fmt.Errorf("invalid value in buddyinfo: %s", err) + } + } + + buddyInfo = append(buddyInfo, BuddyInfo{node, zone, sizes}) + } + + return buddyInfo, scanner.Err() +} diff --git a/vendor/github.com/prometheus/procfs/doc.go b/vendor/github.com/prometheus/procfs/doc.go new file mode 100644 index 00000000..e2acd6d4 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/doc.go @@ -0,0 +1,45 @@ +// Copyright 2014 Prometheus Team +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package procfs provides functions to retrieve system, kernel and process +// metrics from the pseudo-filesystem proc. +// +// Example: +// +// package main +// +// import ( +// "fmt" +// "log" +// +// "github.com/prometheus/procfs" +// ) +// +// func main() { +// p, err := procfs.Self() +// if err != nil { +// log.Fatalf("could not get process: %s", err) +// } +// +// stat, err := p.NewStat() +// if err != nil { +// log.Fatalf("could not get process stat: %s", err) +// } +// +// fmt.Printf("command: %s\n", stat.Comm) +// fmt.Printf("cpu time: %fs\n", stat.CPUTime()) +// fmt.Printf("vsize: %dB\n", stat.VirtualMemory()) +// fmt.Printf("rss: %dB\n", stat.ResidentMemory()) +// } +// +package procfs diff --git a/vendor/github.com/prometheus/procfs/fs.go b/vendor/github.com/prometheus/procfs/fs.go new file mode 100644 index 00000000..b6c6b2ce --- /dev/null +++ b/vendor/github.com/prometheus/procfs/fs.go @@ -0,0 +1,82 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "fmt" + "os" + "path" + + "github.com/prometheus/procfs/nfs" + "github.com/prometheus/procfs/xfs" +) + +// FS represents the pseudo-filesystem proc, which provides an interface to +// kernel data structures. +type FS string + +// DefaultMountPoint is the common mount point of the proc filesystem. +const DefaultMountPoint = "/proc" + +// NewFS returns a new FS mounted under the given mountPoint. It will error +// if the mount point can't be read. +func NewFS(mountPoint string) (FS, error) { + info, err := os.Stat(mountPoint) + if err != nil { + return "", fmt.Errorf("could not read %s: %s", mountPoint, err) + } + if !info.IsDir() { + return "", fmt.Errorf("mount point %s is not a directory", mountPoint) + } + + return FS(mountPoint), nil +} + +// Path returns the path of the given subsystem relative to the procfs root. +func (fs FS) Path(p ...string) string { + return path.Join(append([]string{string(fs)}, p...)...) +} + +// XFSStats retrieves XFS filesystem runtime statistics. +func (fs FS) XFSStats() (*xfs.Stats, error) { + f, err := os.Open(fs.Path("fs/xfs/stat")) + if err != nil { + return nil, err + } + defer f.Close() + + return xfs.ParseStats(f) +} + +// NFSClientRPCStats retrieves NFS client RPC statistics. +func (fs FS) NFSClientRPCStats() (*nfs.ClientRPCStats, error) { + f, err := os.Open(fs.Path("net/rpc/nfs")) + if err != nil { + return nil, err + } + defer f.Close() + + return nfs.ParseClientRPCStats(f) +} + +// NFSdServerRPCStats retrieves NFS daemon RPC statistics. +func (fs FS) NFSdServerRPCStats() (*nfs.ServerRPCStats, error) { + f, err := os.Open(fs.Path("net/rpc/nfsd")) + if err != nil { + return nil, err + } + defer f.Close() + + return nfs.ParseServerRPCStats(f) +} diff --git a/vendor/github.com/prometheus/procfs/internal/util/parse.go b/vendor/github.com/prometheus/procfs/internal/util/parse.go new file mode 100644 index 00000000..1ad21c91 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/internal/util/parse.go @@ -0,0 +1,46 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package util + +import "strconv" + +// ParseUint32s parses a slice of strings into a slice of uint32s. +func ParseUint32s(ss []string) ([]uint32, error) { + us := make([]uint32, 0, len(ss)) + for _, s := range ss { + u, err := strconv.ParseUint(s, 10, 32) + if err != nil { + return nil, err + } + + us = append(us, uint32(u)) + } + + return us, nil +} + +// ParseUint64s parses a slice of strings into a slice of uint64s. 
+func ParseUint64s(ss []string) ([]uint64, error) { + us := make([]uint64, 0, len(ss)) + for _, s := range ss { + u, err := strconv.ParseUint(s, 10, 64) + if err != nil { + return nil, err + } + + us = append(us, u) + } + + return us, nil +} diff --git a/vendor/github.com/prometheus/procfs/ipvs.go b/vendor/github.com/prometheus/procfs/ipvs.go new file mode 100644 index 00000000..e36d4a3b --- /dev/null +++ b/vendor/github.com/prometheus/procfs/ipvs.go @@ -0,0 +1,259 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "encoding/hex" + "errors" + "fmt" + "io" + "io/ioutil" + "net" + "os" + "strconv" + "strings" +) + +// IPVSStats holds IPVS statistics, as exposed by the kernel in `/proc/net/ip_vs_stats`. +type IPVSStats struct { + // Total count of connections. + Connections uint64 + // Total incoming packages processed. + IncomingPackets uint64 + // Total outgoing packages processed. + OutgoingPackets uint64 + // Total incoming traffic. + IncomingBytes uint64 + // Total outgoing traffic. + OutgoingBytes uint64 +} + +// IPVSBackendStatus holds current metrics of one virtual / real address pair. +type IPVSBackendStatus struct { + // The local (virtual) IP address. + LocalAddress net.IP + // The remote (real) IP address. + RemoteAddress net.IP + // The local (virtual) port. + LocalPort uint16 + // The remote (real) port. + RemotePort uint16 + // The local firewall mark + LocalMark string + // The transport protocol (TCP, UDP). + Proto string + // The current number of active connections for this virtual/real address pair. + ActiveConn uint64 + // The current number of inactive connections for this virtual/real address pair. + InactConn uint64 + // The current weight of this virtual/real address pair. + Weight uint64 +} + +// NewIPVSStats reads the IPVS statistics. +func NewIPVSStats() (IPVSStats, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return IPVSStats{}, err + } + + return fs.NewIPVSStats() +} + +// NewIPVSStats reads the IPVS statistics from the specified `proc` filesystem. +func (fs FS) NewIPVSStats() (IPVSStats, error) { + file, err := os.Open(fs.Path("net/ip_vs_stats")) + if err != nil { + return IPVSStats{}, err + } + defer file.Close() + + return parseIPVSStats(file) +} + +// parseIPVSStats performs the actual parsing of `ip_vs_stats`. 
+func parseIPVSStats(file io.Reader) (IPVSStats, error) { + var ( + statContent []byte + statLines []string + statFields []string + stats IPVSStats + ) + + statContent, err := ioutil.ReadAll(file) + if err != nil { + return IPVSStats{}, err + } + + statLines = strings.SplitN(string(statContent), "\n", 4) + if len(statLines) != 4 { + return IPVSStats{}, errors.New("ip_vs_stats corrupt: too short") + } + + statFields = strings.Fields(statLines[2]) + if len(statFields) != 5 { + return IPVSStats{}, errors.New("ip_vs_stats corrupt: unexpected number of fields") + } + + stats.Connections, err = strconv.ParseUint(statFields[0], 16, 64) + if err != nil { + return IPVSStats{}, err + } + stats.IncomingPackets, err = strconv.ParseUint(statFields[1], 16, 64) + if err != nil { + return IPVSStats{}, err + } + stats.OutgoingPackets, err = strconv.ParseUint(statFields[2], 16, 64) + if err != nil { + return IPVSStats{}, err + } + stats.IncomingBytes, err = strconv.ParseUint(statFields[3], 16, 64) + if err != nil { + return IPVSStats{}, err + } + stats.OutgoingBytes, err = strconv.ParseUint(statFields[4], 16, 64) + if err != nil { + return IPVSStats{}, err + } + + return stats, nil +} + +// NewIPVSBackendStatus reads and returns the status of all (virtual,real) server pairs. +func NewIPVSBackendStatus() ([]IPVSBackendStatus, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return []IPVSBackendStatus{}, err + } + + return fs.NewIPVSBackendStatus() +} + +// NewIPVSBackendStatus reads and returns the status of all (virtual,real) server pairs from the specified `proc` filesystem. +func (fs FS) NewIPVSBackendStatus() ([]IPVSBackendStatus, error) { + file, err := os.Open(fs.Path("net/ip_vs")) + if err != nil { + return nil, err + } + defer file.Close() + + return parseIPVSBackendStatus(file) +} + +func parseIPVSBackendStatus(file io.Reader) ([]IPVSBackendStatus, error) { + var ( + status []IPVSBackendStatus + scanner = bufio.NewScanner(file) + proto string + localMark string + localAddress net.IP + localPort uint16 + err error + ) + + for scanner.Scan() { + fields := strings.Fields(scanner.Text()) + if len(fields) == 0 { + continue + } + switch { + case fields[0] == "IP" || fields[0] == "Prot" || fields[1] == "RemoteAddress:Port": + continue + case fields[0] == "TCP" || fields[0] == "UDP": + if len(fields) < 2 { + continue + } + proto = fields[0] + localMark = "" + localAddress, localPort, err = parseIPPort(fields[1]) + if err != nil { + return nil, err + } + case fields[0] == "FWM": + if len(fields) < 2 { + continue + } + proto = fields[0] + localMark = fields[1] + localAddress = nil + localPort = 0 + case fields[0] == "->": + if len(fields) < 6 { + continue + } + remoteAddress, remotePort, err := parseIPPort(fields[1]) + if err != nil { + return nil, err + } + weight, err := strconv.ParseUint(fields[3], 10, 64) + if err != nil { + return nil, err + } + activeConn, err := strconv.ParseUint(fields[4], 10, 64) + if err != nil { + return nil, err + } + inactConn, err := strconv.ParseUint(fields[5], 10, 64) + if err != nil { + return nil, err + } + status = append(status, IPVSBackendStatus{ + LocalAddress: localAddress, + LocalPort: localPort, + LocalMark: localMark, + RemoteAddress: remoteAddress, + RemotePort: remotePort, + Proto: proto, + Weight: weight, + ActiveConn: activeConn, + InactConn: inactConn, + }) + } + } + return status, nil +} + +func parseIPPort(s string) (net.IP, uint16, error) { + var ( + ip net.IP + err error + ) + + switch len(s) { + case 13: + ip, err = 
hex.DecodeString(s[0:8]) + if err != nil { + return nil, 0, err + } + case 46: + ip = net.ParseIP(s[1:40]) + if ip == nil { + return nil, 0, fmt.Errorf("invalid IPv6 address: %s", s[1:40]) + } + default: + return nil, 0, fmt.Errorf("unexpected IP:Port: %s", s) + } + + portString := s[len(s)-4:] + if len(portString) != 4 { + return nil, 0, fmt.Errorf("unexpected port string format: %s", portString) + } + port, err := strconv.ParseUint(portString, 16, 16) + if err != nil { + return nil, 0, err + } + + return ip, uint16(port), nil +} diff --git a/vendor/github.com/prometheus/procfs/mdstat.go b/vendor/github.com/prometheus/procfs/mdstat.go new file mode 100644 index 00000000..9dc19583 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/mdstat.go @@ -0,0 +1,151 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "fmt" + "io/ioutil" + "regexp" + "strconv" + "strings" +) + +var ( + statuslineRE = regexp.MustCompile(`(\d+) blocks .*\[(\d+)/(\d+)\] \[[U_]+\]`) + buildlineRE = regexp.MustCompile(`\((\d+)/\d+\)`) +) + +// MDStat holds info parsed from /proc/mdstat. +type MDStat struct { + // Name of the device. + Name string + // activity-state of the device. + ActivityState string + // Number of active disks. + DisksActive int64 + // Total number of disks the device consists of. + DisksTotal int64 + // Number of blocks the device holds. + BlocksTotal int64 + // Number of blocks on the device that are in sync. + BlocksSynced int64 +} + +// ParseMDStat parses an mdstat-file and returns a struct with the relevant infos. +func (fs FS) ParseMDStat() (mdstates []MDStat, err error) { + mdStatusFilePath := fs.Path("mdstat") + content, err := ioutil.ReadFile(mdStatusFilePath) + if err != nil { + return []MDStat{}, fmt.Errorf("error parsing %s: %s", mdStatusFilePath, err) + } + + mdStates := []MDStat{} + lines := strings.Split(string(content), "\n") + for i, l := range lines { + if l == "" { + continue + } + if l[0] == ' ' { + continue + } + if strings.HasPrefix(l, "Personalities") || strings.HasPrefix(l, "unused") { + continue + } + + mainLine := strings.Split(l, " ") + if len(mainLine) < 3 { + return mdStates, fmt.Errorf("error parsing mdline: %s", l) + } + mdName := mainLine[0] + activityState := mainLine[2] + + if len(lines) <= i+3 { + return mdStates, fmt.Errorf( + "error parsing %s: too few lines for md device %s", + mdStatusFilePath, + mdName, + ) + } + + active, total, size, err := evalStatusline(lines[i+1]) + if err != nil { + return mdStates, fmt.Errorf("error parsing %s: %s", mdStatusFilePath, err) + } + + // j is the line number of the syncing-line. + j := i + 2 + if strings.Contains(lines[i+2], "bitmap") { // skip bitmap line + j = i + 3 + } + + // If device is syncing at the moment, get the number of currently + // synced bytes, otherwise that number equals the size of the device. 
+ syncedBlocks := size + if strings.Contains(lines[j], "recovery") || strings.Contains(lines[j], "resync") { + syncedBlocks, err = evalBuildline(lines[j]) + if err != nil { + return mdStates, fmt.Errorf("error parsing %s: %s", mdStatusFilePath, err) + } + } + + mdStates = append(mdStates, MDStat{ + Name: mdName, + ActivityState: activityState, + DisksActive: active, + DisksTotal: total, + BlocksTotal: size, + BlocksSynced: syncedBlocks, + }) + } + + return mdStates, nil +} + +func evalStatusline(statusline string) (active, total, size int64, err error) { + matches := statuslineRE.FindStringSubmatch(statusline) + if len(matches) != 4 { + return 0, 0, 0, fmt.Errorf("unexpected statusline: %s", statusline) + } + + size, err = strconv.ParseInt(matches[1], 10, 64) + if err != nil { + return 0, 0, 0, fmt.Errorf("unexpected statusline %s: %s", statusline, err) + } + + total, err = strconv.ParseInt(matches[2], 10, 64) + if err != nil { + return 0, 0, 0, fmt.Errorf("unexpected statusline %s: %s", statusline, err) + } + + active, err = strconv.ParseInt(matches[3], 10, 64) + if err != nil { + return 0, 0, 0, fmt.Errorf("unexpected statusline %s: %s", statusline, err) + } + + return active, total, size, nil +} + +func evalBuildline(buildline string) (syncedBlocks int64, err error) { + matches := buildlineRE.FindStringSubmatch(buildline) + if len(matches) != 2 { + return 0, fmt.Errorf("unexpected buildline: %s", buildline) + } + + syncedBlocks, err = strconv.ParseInt(matches[1], 10, 64) + if err != nil { + return 0, fmt.Errorf("%s in buildline: %s", err, buildline) + } + + return syncedBlocks, nil +} diff --git a/vendor/github.com/prometheus/procfs/mountstats.go b/vendor/github.com/prometheus/procfs/mountstats.go new file mode 100644 index 00000000..e95ddbc6 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/mountstats.go @@ -0,0 +1,569 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +// While implementing parsing of /proc/[pid]/mountstats, this blog was used +// heavily as a reference: +// https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex +// +// Special thanks to Chris Siebenmann for all of his posts explaining the +// various statistics available for NFS. + +import ( + "bufio" + "fmt" + "io" + "strconv" + "strings" + "time" +) + +// Constants shared between multiple functions. +const ( + deviceEntryLen = 8 + + fieldBytesLen = 8 + fieldEventsLen = 27 + + statVersion10 = "1.0" + statVersion11 = "1.1" + + fieldTransport10Len = 10 + fieldTransport11Len = 13 +) + +// A Mount is a device mount parsed from /proc/[pid]/mountstats. +type Mount struct { + // Name of the device. + Device string + // The mount point of the device. + Mount string + // The filesystem type used by the device. + Type string + // If available additional statistics related to this Mount. + // Use a type assertion to determine if additional statistics are available. 
+ Stats MountStats +} + +// A MountStats is a type which contains detailed statistics for a specific +// type of Mount. +type MountStats interface { + mountStats() +} + +// A MountStatsNFS is a MountStats implementation for NFSv3 and v4 mounts. +type MountStatsNFS struct { + // The version of statistics provided. + StatVersion string + // The age of the NFS mount. + Age time.Duration + // Statistics related to byte counters for various operations. + Bytes NFSBytesStats + // Statistics related to various NFS event occurrences. + Events NFSEventsStats + // Statistics broken down by filesystem operation. + Operations []NFSOperationStats + // Statistics about the NFS RPC transport. + Transport NFSTransportStats +} + +// mountStats implements MountStats. +func (m MountStatsNFS) mountStats() {} + +// A NFSBytesStats contains statistics about the number of bytes read and written +// by an NFS client to and from an NFS server. +type NFSBytesStats struct { + // Number of bytes read using the read() syscall. + Read uint64 + // Number of bytes written using the write() syscall. + Write uint64 + // Number of bytes read using the read() syscall in O_DIRECT mode. + DirectRead uint64 + // Number of bytes written using the write() syscall in O_DIRECT mode. + DirectWrite uint64 + // Number of bytes read from the NFS server, in total. + ReadTotal uint64 + // Number of bytes written to the NFS server, in total. + WriteTotal uint64 + // Number of pages read directly via mmap()'d files. + ReadPages uint64 + // Number of pages written directly via mmap()'d files. + WritePages uint64 +} + +// A NFSEventsStats contains statistics about NFS event occurrences. +type NFSEventsStats struct { + // Number of times cached inode attributes are re-validated from the server. + InodeRevalidate uint64 + // Number of times cached dentry nodes are re-validated from the server. + DnodeRevalidate uint64 + // Number of times an inode cache is cleared. + DataInvalidate uint64 + // Number of times cached inode attributes are invalidated. + AttributeInvalidate uint64 + // Number of times files or directories have been open()'d. + VFSOpen uint64 + // Number of times a directory lookup has occurred. + VFSLookup uint64 + // Number of times permissions have been checked. + VFSAccess uint64 + // Number of updates (and potential writes) to pages. + VFSUpdatePage uint64 + // Number of pages read directly via mmap()'d files. + VFSReadPage uint64 + // Number of times a group of pages have been read. + VFSReadPages uint64 + // Number of pages written directly via mmap()'d files. + VFSWritePage uint64 + // Number of times a group of pages have been written. + VFSWritePages uint64 + // Number of times directory entries have been read with getdents(). + VFSGetdents uint64 + // Number of times attributes have been set on inodes. + VFSSetattr uint64 + // Number of pending writes that have been forcefully flushed to the server. + VFSFlush uint64 + // Number of times fsync() has been called on directories and files. + VFSFsync uint64 + // Number of times locking has been attempted on a file. + VFSLock uint64 + // Number of times files have been closed and released. + VFSFileRelease uint64 + // Unknown. Possibly unused. + CongestionWait uint64 + // Number of times files have been truncated. + Truncation uint64 + // Number of times a file has been grown due to writes beyond its existing end. + WriteExtension uint64 + // Number of times a file was removed while still open by another process. 
+ SillyRename uint64 + // Number of times the NFS server gave less data than expected while reading. + ShortRead uint64 + // Number of times the NFS server wrote less data than expected while writing. + ShortWrite uint64 + // Number of times the NFS server indicated EJUKEBOX; retrieving data from + // offline storage. + JukeboxDelay uint64 + // Number of NFS v4.1+ pNFS reads. + PNFSRead uint64 + // Number of NFS v4.1+ pNFS writes. + PNFSWrite uint64 +} + +// A NFSOperationStats contains statistics for a single operation. +type NFSOperationStats struct { + // The name of the operation. + Operation string + // Number of requests performed for this operation. + Requests uint64 + // Number of times an actual RPC request has been transmitted for this operation. + Transmissions uint64 + // Number of times a request has had a major timeout. + MajorTimeouts uint64 + // Number of bytes sent for this operation, including RPC headers and payload. + BytesSent uint64 + // Number of bytes received for this operation, including RPC headers and payload. + BytesReceived uint64 + // Duration all requests spent queued for transmission before they were sent. + CumulativeQueueTime time.Duration + // Duration it took to get a reply back after the request was transmitted. + CumulativeTotalResponseTime time.Duration + // Duration from when a request was enqueued to when it was completely handled. + CumulativeTotalRequestTime time.Duration +} + +// A NFSTransportStats contains statistics for the NFS mount RPC requests and +// responses. +type NFSTransportStats struct { + // The local port used for the NFS mount. + Port uint64 + // Number of times the client has had to establish a connection from scratch + // to the NFS server. + Bind uint64 + // Number of times the client has made a TCP connection to the NFS server. + Connect uint64 + // Duration (in jiffies, a kernel internal unit of time) the NFS mount has + // spent waiting for connections to the server to be established. + ConnectIdleTime uint64 + // Duration since the NFS mount last saw any RPC traffic. + IdleTime time.Duration + // Number of RPC requests for this mount sent to the NFS server. + Sends uint64 + // Number of RPC responses for this mount received from the NFS server. + Receives uint64 + // Number of times the NFS server sent a response with a transaction ID + // unknown to this client. + BadTransactionIDs uint64 + // A running counter, incremented on each request as the current difference + // ebetween sends and receives. + CumulativeActiveRequests uint64 + // A running counter, incremented on each request by the current backlog + // queue size. + CumulativeBacklog uint64 + + // Stats below only available with stat version 1.1. + + // Maximum number of simultaneously active RPC requests ever used. + MaximumRPCSlotsUsed uint64 + // A running counter, incremented on each request as the current size of the + // sending queue. + CumulativeSendingQueue uint64 + // A running counter, incremented on each request as the current size of the + // pending queue. + CumulativePendingQueue uint64 +} + +// parseMountStats parses a /proc/[pid]/mountstats file and returns a slice +// of Mount structures containing detailed information about each mount. +// If available, statistics for each mount are parsed as well. 
+func parseMountStats(r io.Reader) ([]*Mount, error) { + const ( + device = "device" + statVersionPrefix = "statvers=" + + nfs3Type = "nfs" + nfs4Type = "nfs4" + ) + + var mounts []*Mount + + s := bufio.NewScanner(r) + for s.Scan() { + // Only look for device entries in this function + ss := strings.Fields(string(s.Bytes())) + if len(ss) == 0 || ss[0] != device { + continue + } + + m, err := parseMount(ss) + if err != nil { + return nil, err + } + + // Does this mount also possess statistics information? + if len(ss) > deviceEntryLen { + // Only NFSv3 and v4 are supported for parsing statistics + if m.Type != nfs3Type && m.Type != nfs4Type { + return nil, fmt.Errorf("cannot parse MountStats for fstype %q", m.Type) + } + + statVersion := strings.TrimPrefix(ss[8], statVersionPrefix) + + stats, err := parseMountStatsNFS(s, statVersion) + if err != nil { + return nil, err + } + + m.Stats = stats + } + + mounts = append(mounts, m) + } + + return mounts, s.Err() +} + +// parseMount parses an entry in /proc/[pid]/mountstats in the format: +// device [device] mounted on [mount] with fstype [type] +func parseMount(ss []string) (*Mount, error) { + if len(ss) < deviceEntryLen { + return nil, fmt.Errorf("invalid device entry: %v", ss) + } + + // Check for specific words appearing at specific indices to ensure + // the format is consistent with what we expect + format := []struct { + i int + s string + }{ + {i: 0, s: "device"}, + {i: 2, s: "mounted"}, + {i: 3, s: "on"}, + {i: 5, s: "with"}, + {i: 6, s: "fstype"}, + } + + for _, f := range format { + if ss[f.i] != f.s { + return nil, fmt.Errorf("invalid device entry: %v", ss) + } + } + + return &Mount{ + Device: ss[1], + Mount: ss[4], + Type: ss[7], + }, nil +} + +// parseMountStatsNFS parses a MountStatsNFS by scanning additional information +// related to NFS statistics. 
+func parseMountStatsNFS(s *bufio.Scanner, statVersion string) (*MountStatsNFS, error) { + // Field indicators for parsing specific types of data + const ( + fieldAge = "age:" + fieldBytes = "bytes:" + fieldEvents = "events:" + fieldPerOpStats = "per-op" + fieldTransport = "xprt:" + ) + + stats := &MountStatsNFS{ + StatVersion: statVersion, + } + + for s.Scan() { + ss := strings.Fields(string(s.Bytes())) + if len(ss) == 0 { + break + } + if len(ss) < 2 { + return nil, fmt.Errorf("not enough information for NFS stats: %v", ss) + } + + switch ss[0] { + case fieldAge: + // Age integer is in seconds + d, err := time.ParseDuration(ss[1] + "s") + if err != nil { + return nil, err + } + + stats.Age = d + case fieldBytes: + bstats, err := parseNFSBytesStats(ss[1:]) + if err != nil { + return nil, err + } + + stats.Bytes = *bstats + case fieldEvents: + estats, err := parseNFSEventsStats(ss[1:]) + if err != nil { + return nil, err + } + + stats.Events = *estats + case fieldTransport: + if len(ss) < 3 { + return nil, fmt.Errorf("not enough information for NFS transport stats: %v", ss) + } + + tstats, err := parseNFSTransportStats(ss[2:], statVersion) + if err != nil { + return nil, err + } + + stats.Transport = *tstats + } + + // When encountering "per-operation statistics", we must break this + // loop and parse them separately to ensure we can terminate parsing + // before reaching another device entry; hence why this 'if' statement + // is not just another switch case + if ss[0] == fieldPerOpStats { + break + } + } + + if err := s.Err(); err != nil { + return nil, err + } + + // NFS per-operation stats appear last before the next device entry + perOpStats, err := parseNFSOperationStats(s) + if err != nil { + return nil, err + } + + stats.Operations = perOpStats + + return stats, nil +} + +// parseNFSBytesStats parses a NFSBytesStats line using an input set of +// integer fields. +func parseNFSBytesStats(ss []string) (*NFSBytesStats, error) { + if len(ss) != fieldBytesLen { + return nil, fmt.Errorf("invalid NFS bytes stats: %v", ss) + } + + ns := make([]uint64, 0, fieldBytesLen) + for _, s := range ss { + n, err := strconv.ParseUint(s, 10, 64) + if err != nil { + return nil, err + } + + ns = append(ns, n) + } + + return &NFSBytesStats{ + Read: ns[0], + Write: ns[1], + DirectRead: ns[2], + DirectWrite: ns[3], + ReadTotal: ns[4], + WriteTotal: ns[5], + ReadPages: ns[6], + WritePages: ns[7], + }, nil +} + +// parseNFSEventsStats parses a NFSEventsStats line using an input set of +// integer fields. 
+func parseNFSEventsStats(ss []string) (*NFSEventsStats, error) { + if len(ss) != fieldEventsLen { + return nil, fmt.Errorf("invalid NFS events stats: %v", ss) + } + + ns := make([]uint64, 0, fieldEventsLen) + for _, s := range ss { + n, err := strconv.ParseUint(s, 10, 64) + if err != nil { + return nil, err + } + + ns = append(ns, n) + } + + return &NFSEventsStats{ + InodeRevalidate: ns[0], + DnodeRevalidate: ns[1], + DataInvalidate: ns[2], + AttributeInvalidate: ns[3], + VFSOpen: ns[4], + VFSLookup: ns[5], + VFSAccess: ns[6], + VFSUpdatePage: ns[7], + VFSReadPage: ns[8], + VFSReadPages: ns[9], + VFSWritePage: ns[10], + VFSWritePages: ns[11], + VFSGetdents: ns[12], + VFSSetattr: ns[13], + VFSFlush: ns[14], + VFSFsync: ns[15], + VFSLock: ns[16], + VFSFileRelease: ns[17], + CongestionWait: ns[18], + Truncation: ns[19], + WriteExtension: ns[20], + SillyRename: ns[21], + ShortRead: ns[22], + ShortWrite: ns[23], + JukeboxDelay: ns[24], + PNFSRead: ns[25], + PNFSWrite: ns[26], + }, nil +} + +// parseNFSOperationStats parses a slice of NFSOperationStats by scanning +// additional information about per-operation statistics until an empty +// line is reached. +func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) { + const ( + // Number of expected fields in each per-operation statistics set + numFields = 9 + ) + + var ops []NFSOperationStats + + for s.Scan() { + ss := strings.Fields(string(s.Bytes())) + if len(ss) == 0 { + // Must break when reading a blank line after per-operation stats to + // enable top-level function to parse the next device entry + break + } + + if len(ss) != numFields { + return nil, fmt.Errorf("invalid NFS per-operations stats: %v", ss) + } + + // Skip string operation name for integers + ns := make([]uint64, 0, numFields-1) + for _, st := range ss[1:] { + n, err := strconv.ParseUint(st, 10, 64) + if err != nil { + return nil, err + } + + ns = append(ns, n) + } + + ops = append(ops, NFSOperationStats{ + Operation: strings.TrimSuffix(ss[0], ":"), + Requests: ns[0], + Transmissions: ns[1], + MajorTimeouts: ns[2], + BytesSent: ns[3], + BytesReceived: ns[4], + CumulativeQueueTime: time.Duration(ns[5]) * time.Millisecond, + CumulativeTotalResponseTime: time.Duration(ns[6]) * time.Millisecond, + CumulativeTotalRequestTime: time.Duration(ns[7]) * time.Millisecond, + }) + } + + return ops, s.Err() +} + +// parseNFSTransportStats parses a NFSTransportStats line using an input set of +// integer fields matched to a specific stats version. +func parseNFSTransportStats(ss []string, statVersion string) (*NFSTransportStats, error) { + switch statVersion { + case statVersion10: + if len(ss) != fieldTransport10Len { + return nil, fmt.Errorf("invalid NFS transport stats 1.0 statement: %v", ss) + } + case statVersion11: + if len(ss) != fieldTransport11Len { + return nil, fmt.Errorf("invalid NFS transport stats 1.1 statement: %v", ss) + } + default: + return nil, fmt.Errorf("unrecognized NFS transport stats version: %q", statVersion) + } + + // Allocate enough for v1.1 stats since zero value for v1.1 stats will be okay + // in a v1.0 response. + // + // Note: slice length must be set to length of v1.1 stats to avoid a panic when + // only v1.0 stats are present. + // See: https://github.com/prometheus/node_exporter/issues/571. 
+ ns := make([]uint64, fieldTransport11Len) + for i, s := range ss { + n, err := strconv.ParseUint(s, 10, 64) + if err != nil { + return nil, err + } + + ns[i] = n + } + + return &NFSTransportStats{ + Port: ns[0], + Bind: ns[1], + Connect: ns[2], + ConnectIdleTime: ns[3], + IdleTime: time.Duration(ns[4]) * time.Second, + Sends: ns[5], + Receives: ns[6], + BadTransactionIDs: ns[7], + CumulativeActiveRequests: ns[8], + CumulativeBacklog: ns[9], + MaximumRPCSlotsUsed: ns[10], + CumulativeSendingQueue: ns[11], + CumulativePendingQueue: ns[12], + }, nil +} diff --git a/vendor/github.com/prometheus/procfs/net_dev.go b/vendor/github.com/prometheus/procfs/net_dev.go new file mode 100644 index 00000000..3f252337 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/net_dev.go @@ -0,0 +1,216 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "errors" + "os" + "sort" + "strconv" + "strings" +) + +// NetDevLine is single line parsed from /proc/net/dev or /proc/[pid]/net/dev. +type NetDevLine struct { + Name string `json:"name"` // The name of the interface. + RxBytes uint64 `json:"rx_bytes"` // Cumulative count of bytes received. + RxPackets uint64 `json:"rx_packets"` // Cumulative count of packets received. + RxErrors uint64 `json:"rx_errors"` // Cumulative count of receive errors encountered. + RxDropped uint64 `json:"rx_dropped"` // Cumulative count of packets dropped while receiving. + RxFIFO uint64 `json:"rx_fifo"` // Cumulative count of FIFO buffer errors. + RxFrame uint64 `json:"rx_frame"` // Cumulative count of packet framing errors. + RxCompressed uint64 `json:"rx_compressed"` // Cumulative count of compressed packets received by the device driver. + RxMulticast uint64 `json:"rx_multicast"` // Cumulative count of multicast frames received by the device driver. + TxBytes uint64 `json:"tx_bytes"` // Cumulative count of bytes transmitted. + TxPackets uint64 `json:"tx_packets"` // Cumulative count of packets transmitted. + TxErrors uint64 `json:"tx_errors"` // Cumulative count of transmit errors encountered. + TxDropped uint64 `json:"tx_dropped"` // Cumulative count of packets dropped while transmitting. + TxFIFO uint64 `json:"tx_fifo"` // Cumulative count of FIFO buffer errors. + TxCollisions uint64 `json:"tx_collisions"` // Cumulative count of collisions detected on the interface. + TxCarrier uint64 `json:"tx_carrier"` // Cumulative count of carrier losses detected by the device driver. + TxCompressed uint64 `json:"tx_compressed"` // Cumulative count of compressed packets transmitted by the device driver. +} + +// NetDev is parsed from /proc/net/dev or /proc/[pid]/net/dev. The map keys +// are interface names. +type NetDev map[string]NetDevLine + +// NewNetDev returns kernel/system statistics read from /proc/net/dev. 
+func NewNetDev() (NetDev, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return nil, err + } + + return fs.NewNetDev() +} + +// NewNetDev returns kernel/system statistics read from /proc/net/dev. +func (fs FS) NewNetDev() (NetDev, error) { + return newNetDev(fs.Path("net/dev")) +} + +// NewNetDev returns kernel/system statistics read from /proc/[pid]/net/dev. +func (p Proc) NewNetDev() (NetDev, error) { + return newNetDev(p.path("net/dev")) +} + +// newNetDev creates a new NetDev from the contents of the given file. +func newNetDev(file string) (NetDev, error) { + f, err := os.Open(file) + if err != nil { + return NetDev{}, err + } + defer f.Close() + + nd := NetDev{} + s := bufio.NewScanner(f) + for n := 0; s.Scan(); n++ { + // Skip the 2 header lines. + if n < 2 { + continue + } + + line, err := nd.parseLine(s.Text()) + if err != nil { + return nd, err + } + + nd[line.Name] = *line + } + + return nd, s.Err() +} + +// parseLine parses a single line from the /proc/net/dev file. Header lines +// must be filtered prior to calling this method. +func (nd NetDev) parseLine(rawLine string) (*NetDevLine, error) { + parts := strings.SplitN(rawLine, ":", 2) + if len(parts) != 2 { + return nil, errors.New("invalid net/dev line, missing colon") + } + fields := strings.Fields(strings.TrimSpace(parts[1])) + + var err error + line := &NetDevLine{} + + // Interface Name + line.Name = strings.TrimSpace(parts[0]) + if line.Name == "" { + return nil, errors.New("invalid net/dev line, empty interface name") + } + + // RX + line.RxBytes, err = strconv.ParseUint(fields[0], 10, 64) + if err != nil { + return nil, err + } + line.RxPackets, err = strconv.ParseUint(fields[1], 10, 64) + if err != nil { + return nil, err + } + line.RxErrors, err = strconv.ParseUint(fields[2], 10, 64) + if err != nil { + return nil, err + } + line.RxDropped, err = strconv.ParseUint(fields[3], 10, 64) + if err != nil { + return nil, err + } + line.RxFIFO, err = strconv.ParseUint(fields[4], 10, 64) + if err != nil { + return nil, err + } + line.RxFrame, err = strconv.ParseUint(fields[5], 10, 64) + if err != nil { + return nil, err + } + line.RxCompressed, err = strconv.ParseUint(fields[6], 10, 64) + if err != nil { + return nil, err + } + line.RxMulticast, err = strconv.ParseUint(fields[7], 10, 64) + if err != nil { + return nil, err + } + + // TX + line.TxBytes, err = strconv.ParseUint(fields[8], 10, 64) + if err != nil { + return nil, err + } + line.TxPackets, err = strconv.ParseUint(fields[9], 10, 64) + if err != nil { + return nil, err + } + line.TxErrors, err = strconv.ParseUint(fields[10], 10, 64) + if err != nil { + return nil, err + } + line.TxDropped, err = strconv.ParseUint(fields[11], 10, 64) + if err != nil { + return nil, err + } + line.TxFIFO, err = strconv.ParseUint(fields[12], 10, 64) + if err != nil { + return nil, err + } + line.TxCollisions, err = strconv.ParseUint(fields[13], 10, 64) + if err != nil { + return nil, err + } + line.TxCarrier, err = strconv.ParseUint(fields[14], 10, 64) + if err != nil { + return nil, err + } + line.TxCompressed, err = strconv.ParseUint(fields[15], 10, 64) + if err != nil { + return nil, err + } + + return line, nil +} + +// Total aggregates the values across interfaces and returns a new NetDevLine. +// The Name field will be a sorted comma separated list of interface names. 
+func (nd NetDev) Total() NetDevLine { + total := NetDevLine{} + + names := make([]string, 0, len(nd)) + for _, ifc := range nd { + names = append(names, ifc.Name) + total.RxBytes += ifc.RxBytes + total.RxPackets += ifc.RxPackets + total.RxPackets += ifc.RxPackets + total.RxErrors += ifc.RxErrors + total.RxDropped += ifc.RxDropped + total.RxFIFO += ifc.RxFIFO + total.RxFrame += ifc.RxFrame + total.RxCompressed += ifc.RxCompressed + total.RxMulticast += ifc.RxMulticast + total.TxBytes += ifc.TxBytes + total.TxPackets += ifc.TxPackets + total.TxErrors += ifc.TxErrors + total.TxDropped += ifc.TxDropped + total.TxFIFO += ifc.TxFIFO + total.TxCollisions += ifc.TxCollisions + total.TxCarrier += ifc.TxCarrier + total.TxCompressed += ifc.TxCompressed + } + sort.Strings(names) + total.Name = strings.Join(names, ", ") + + return total +} diff --git a/vendor/github.com/prometheus/procfs/nfs/nfs.go b/vendor/github.com/prometheus/procfs/nfs/nfs.go new file mode 100644 index 00000000..651bf681 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/nfs/nfs.go @@ -0,0 +1,263 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package nfs implements parsing of /proc/net/rpc/nfsd. +// Fields are documented in https://www.svennd.be/nfsd-stats-explained-procnetrpcnfsd/ +package nfs + +// ReplyCache models the "rc" line. +type ReplyCache struct { + Hits uint64 + Misses uint64 + NoCache uint64 +} + +// FileHandles models the "fh" line. +type FileHandles struct { + Stale uint64 + TotalLookups uint64 + AnonLookups uint64 + DirNoCache uint64 + NoDirNoCache uint64 +} + +// InputOutput models the "io" line. +type InputOutput struct { + Read uint64 + Write uint64 +} + +// Threads models the "th" line. +type Threads struct { + Threads uint64 + FullCnt uint64 +} + +// ReadAheadCache models the "ra" line. +type ReadAheadCache struct { + CacheSize uint64 + CacheHistogram []uint64 + NotFound uint64 +} + +// Network models the "net" line. +type Network struct { + NetCount uint64 + UDPCount uint64 + TCPCount uint64 + TCPConnect uint64 +} + +// ClientRPC models the nfs "rpc" line. +type ClientRPC struct { + RPCCount uint64 + Retransmissions uint64 + AuthRefreshes uint64 +} + +// ServerRPC models the nfsd "rpc" line. +type ServerRPC struct { + RPCCount uint64 + BadCnt uint64 + BadFmt uint64 + BadAuth uint64 + BadcInt uint64 +} + +// V2Stats models the "proc2" line. +type V2Stats struct { + Null uint64 + GetAttr uint64 + SetAttr uint64 + Root uint64 + Lookup uint64 + ReadLink uint64 + Read uint64 + WrCache uint64 + Write uint64 + Create uint64 + Remove uint64 + Rename uint64 + Link uint64 + SymLink uint64 + MkDir uint64 + RmDir uint64 + ReadDir uint64 + FsStat uint64 +} + +// V3Stats models the "proc3" line. 
+type V3Stats struct { + Null uint64 + GetAttr uint64 + SetAttr uint64 + Lookup uint64 + Access uint64 + ReadLink uint64 + Read uint64 + Write uint64 + Create uint64 + MkDir uint64 + SymLink uint64 + MkNod uint64 + Remove uint64 + RmDir uint64 + Rename uint64 + Link uint64 + ReadDir uint64 + ReadDirPlus uint64 + FsStat uint64 + FsInfo uint64 + PathConf uint64 + Commit uint64 +} + +// ClientV4Stats models the nfs "proc4" line. +type ClientV4Stats struct { + Null uint64 + Read uint64 + Write uint64 + Commit uint64 + Open uint64 + OpenConfirm uint64 + OpenNoattr uint64 + OpenDowngrade uint64 + Close uint64 + Setattr uint64 + FsInfo uint64 + Renew uint64 + SetClientID uint64 + SetClientIDConfirm uint64 + Lock uint64 + Lockt uint64 + Locku uint64 + Access uint64 + Getattr uint64 + Lookup uint64 + LookupRoot uint64 + Remove uint64 + Rename uint64 + Link uint64 + Symlink uint64 + Create uint64 + Pathconf uint64 + StatFs uint64 + ReadLink uint64 + ReadDir uint64 + ServerCaps uint64 + DelegReturn uint64 + GetACL uint64 + SetACL uint64 + FsLocations uint64 + ReleaseLockowner uint64 + Secinfo uint64 + FsidPresent uint64 + ExchangeID uint64 + CreateSession uint64 + DestroySession uint64 + Sequence uint64 + GetLeaseTime uint64 + ReclaimComplete uint64 + LayoutGet uint64 + GetDeviceInfo uint64 + LayoutCommit uint64 + LayoutReturn uint64 + SecinfoNoName uint64 + TestStateID uint64 + FreeStateID uint64 + GetDeviceList uint64 + BindConnToSession uint64 + DestroyClientID uint64 + Seek uint64 + Allocate uint64 + DeAllocate uint64 + LayoutStats uint64 + Clone uint64 +} + +// ServerV4Stats models the nfsd "proc4" line. +type ServerV4Stats struct { + Null uint64 + Compound uint64 +} + +// V4Ops models the "proc4ops" line: NFSv4 operations +// Variable list, see: +// v4.0 https://tools.ietf.org/html/rfc3010 (38 operations) +// v4.1 https://tools.ietf.org/html/rfc5661 (58 operations) +// v4.2 https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41 (71 operations) +type V4Ops struct { + //Values uint64 // Variable depending on v4.x sub-version. TODO: Will this always at least include the fields in this struct? + Op0Unused uint64 + Op1Unused uint64 + Op2Future uint64 + Access uint64 + Close uint64 + Commit uint64 + Create uint64 + DelegPurge uint64 + DelegReturn uint64 + GetAttr uint64 + GetFH uint64 + Link uint64 + Lock uint64 + Lockt uint64 + Locku uint64 + Lookup uint64 + LookupRoot uint64 + Nverify uint64 + Open uint64 + OpenAttr uint64 + OpenConfirm uint64 + OpenDgrd uint64 + PutFH uint64 + PutPubFH uint64 + PutRootFH uint64 + Read uint64 + ReadDir uint64 + ReadLink uint64 + Remove uint64 + Rename uint64 + Renew uint64 + RestoreFH uint64 + SaveFH uint64 + SecInfo uint64 + SetAttr uint64 + Verify uint64 + Write uint64 + RelLockOwner uint64 +} + +// ClientRPCStats models all stats from /proc/net/rpc/nfs. +type ClientRPCStats struct { + Network Network + ClientRPC ClientRPC + V2Stats V2Stats + V3Stats V3Stats + ClientV4Stats ClientV4Stats +} + +// ServerRPCStats models all stats from /proc/net/rpc/nfsd. 
+type ServerRPCStats struct { + ReplyCache ReplyCache + FileHandles FileHandles + InputOutput InputOutput + Threads Threads + ReadAheadCache ReadAheadCache + Network Network + ServerRPC ServerRPC + V2Stats V2Stats + V3Stats V3Stats + ServerV4Stats ServerV4Stats + V4Ops V4Ops +} diff --git a/vendor/github.com/prometheus/procfs/nfs/parse.go b/vendor/github.com/prometheus/procfs/nfs/parse.go new file mode 100644 index 00000000..95a83cc5 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/nfs/parse.go @@ -0,0 +1,317 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package nfs + +import ( + "fmt" +) + +func parseReplyCache(v []uint64) (ReplyCache, error) { + if len(v) != 3 { + return ReplyCache{}, fmt.Errorf("invalid ReplyCache line %q", v) + } + + return ReplyCache{ + Hits: v[0], + Misses: v[1], + NoCache: v[2], + }, nil +} + +func parseFileHandles(v []uint64) (FileHandles, error) { + if len(v) != 5 { + return FileHandles{}, fmt.Errorf("invalid FileHandles, line %q", v) + } + + return FileHandles{ + Stale: v[0], + TotalLookups: v[1], + AnonLookups: v[2], + DirNoCache: v[3], + NoDirNoCache: v[4], + }, nil +} + +func parseInputOutput(v []uint64) (InputOutput, error) { + if len(v) != 2 { + return InputOutput{}, fmt.Errorf("invalid InputOutput line %q", v) + } + + return InputOutput{ + Read: v[0], + Write: v[1], + }, nil +} + +func parseThreads(v []uint64) (Threads, error) { + if len(v) != 2 { + return Threads{}, fmt.Errorf("invalid Threads line %q", v) + } + + return Threads{ + Threads: v[0], + FullCnt: v[1], + }, nil +} + +func parseReadAheadCache(v []uint64) (ReadAheadCache, error) { + if len(v) != 12 { + return ReadAheadCache{}, fmt.Errorf("invalid ReadAheadCache line %q", v) + } + + return ReadAheadCache{ + CacheSize: v[0], + CacheHistogram: v[1:11], + NotFound: v[11], + }, nil +} + +func parseNetwork(v []uint64) (Network, error) { + if len(v) != 4 { + return Network{}, fmt.Errorf("invalid Network line %q", v) + } + + return Network{ + NetCount: v[0], + UDPCount: v[1], + TCPCount: v[2], + TCPConnect: v[3], + }, nil +} + +func parseServerRPC(v []uint64) (ServerRPC, error) { + if len(v) != 5 { + return ServerRPC{}, fmt.Errorf("invalid RPC line %q", v) + } + + return ServerRPC{ + RPCCount: v[0], + BadCnt: v[1], + BadFmt: v[2], + BadAuth: v[3], + BadcInt: v[4], + }, nil +} + +func parseClientRPC(v []uint64) (ClientRPC, error) { + if len(v) != 3 { + return ClientRPC{}, fmt.Errorf("invalid RPC line %q", v) + } + + return ClientRPC{ + RPCCount: v[0], + Retransmissions: v[1], + AuthRefreshes: v[2], + }, nil +} + +func parseV2Stats(v []uint64) (V2Stats, error) { + values := int(v[0]) + if len(v[1:]) != values || values != 18 { + return V2Stats{}, fmt.Errorf("invalid V2Stats line %q", v) + } + + return V2Stats{ + Null: v[1], + GetAttr: v[2], + SetAttr: v[3], + Root: v[4], + Lookup: v[5], + ReadLink: v[6], + Read: v[7], + WrCache: v[8], + Write: v[9], + Create: v[10], + Remove: v[11], + Rename: v[12], + Link: v[13], + SymLink: v[14], + MkDir: v[15], 
+ RmDir: v[16], + ReadDir: v[17], + FsStat: v[18], + }, nil +} + +func parseV3Stats(v []uint64) (V3Stats, error) { + values := int(v[0]) + if len(v[1:]) != values || values != 22 { + return V3Stats{}, fmt.Errorf("invalid V3Stats line %q", v) + } + + return V3Stats{ + Null: v[1], + GetAttr: v[2], + SetAttr: v[3], + Lookup: v[4], + Access: v[5], + ReadLink: v[6], + Read: v[7], + Write: v[8], + Create: v[9], + MkDir: v[10], + SymLink: v[11], + MkNod: v[12], + Remove: v[13], + RmDir: v[14], + Rename: v[15], + Link: v[16], + ReadDir: v[17], + ReadDirPlus: v[18], + FsStat: v[19], + FsInfo: v[20], + PathConf: v[21], + Commit: v[22], + }, nil +} + +func parseClientV4Stats(v []uint64) (ClientV4Stats, error) { + values := int(v[0]) + if len(v[1:]) != values { + return ClientV4Stats{}, fmt.Errorf("invalid ClientV4Stats line %q", v) + } + + // This function currently supports mapping 59 NFS v4 client stats. Older + // kernels may emit fewer stats, so we must detect this and pad out the + // values to match the expected slice size. + if values < 59 { + newValues := make([]uint64, 60) + copy(newValues, v) + v = newValues + } + + return ClientV4Stats{ + Null: v[1], + Read: v[2], + Write: v[3], + Commit: v[4], + Open: v[5], + OpenConfirm: v[6], + OpenNoattr: v[7], + OpenDowngrade: v[8], + Close: v[9], + Setattr: v[10], + FsInfo: v[11], + Renew: v[12], + SetClientID: v[13], + SetClientIDConfirm: v[14], + Lock: v[15], + Lockt: v[16], + Locku: v[17], + Access: v[18], + Getattr: v[19], + Lookup: v[20], + LookupRoot: v[21], + Remove: v[22], + Rename: v[23], + Link: v[24], + Symlink: v[25], + Create: v[26], + Pathconf: v[27], + StatFs: v[28], + ReadLink: v[29], + ReadDir: v[30], + ServerCaps: v[31], + DelegReturn: v[32], + GetACL: v[33], + SetACL: v[34], + FsLocations: v[35], + ReleaseLockowner: v[36], + Secinfo: v[37], + FsidPresent: v[38], + ExchangeID: v[39], + CreateSession: v[40], + DestroySession: v[41], + Sequence: v[42], + GetLeaseTime: v[43], + ReclaimComplete: v[44], + LayoutGet: v[45], + GetDeviceInfo: v[46], + LayoutCommit: v[47], + LayoutReturn: v[48], + SecinfoNoName: v[49], + TestStateID: v[50], + FreeStateID: v[51], + GetDeviceList: v[52], + BindConnToSession: v[53], + DestroyClientID: v[54], + Seek: v[55], + Allocate: v[56], + DeAllocate: v[57], + LayoutStats: v[58], + Clone: v[59], + }, nil +} + +func parseServerV4Stats(v []uint64) (ServerV4Stats, error) { + values := int(v[0]) + if len(v[1:]) != values || values != 2 { + return ServerV4Stats{}, fmt.Errorf("invalid V4Stats line %q", v) + } + + return ServerV4Stats{ + Null: v[1], + Compound: v[2], + }, nil +} + +func parseV4Ops(v []uint64) (V4Ops, error) { + values := int(v[0]) + if len(v[1:]) != values || values < 39 { + return V4Ops{}, fmt.Errorf("invalid V4Ops line %q", v) + } + + stats := V4Ops{ + Op0Unused: v[1], + Op1Unused: v[2], + Op2Future: v[3], + Access: v[4], + Close: v[5], + Commit: v[6], + Create: v[7], + DelegPurge: v[8], + DelegReturn: v[9], + GetAttr: v[10], + GetFH: v[11], + Link: v[12], + Lock: v[13], + Lockt: v[14], + Locku: v[15], + Lookup: v[16], + LookupRoot: v[17], + Nverify: v[18], + Open: v[19], + OpenAttr: v[20], + OpenConfirm: v[21], + OpenDgrd: v[22], + PutFH: v[23], + PutPubFH: v[24], + PutRootFH: v[25], + Read: v[26], + ReadDir: v[27], + ReadLink: v[28], + Remove: v[29], + Rename: v[30], + Renew: v[31], + RestoreFH: v[32], + SaveFH: v[33], + SecInfo: v[34], + SetAttr: v[35], + Verify: v[36], + Write: v[37], + RelLockOwner: v[38], + } + + return stats, nil +} diff --git 
a/vendor/github.com/prometheus/procfs/nfs/parse_nfs.go b/vendor/github.com/prometheus/procfs/nfs/parse_nfs.go new file mode 100644 index 00000000..c0d3a5ad --- /dev/null +++ b/vendor/github.com/prometheus/procfs/nfs/parse_nfs.go @@ -0,0 +1,67 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package nfs + +import ( + "bufio" + "fmt" + "io" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// ParseClientRPCStats returns stats read from /proc/net/rpc/nfs +func ParseClientRPCStats(r io.Reader) (*ClientRPCStats, error) { + stats := &ClientRPCStats{} + + scanner := bufio.NewScanner(r) + for scanner.Scan() { + line := scanner.Text() + parts := strings.Fields(scanner.Text()) + // require at least + if len(parts) < 2 { + return nil, fmt.Errorf("invalid NFS metric line %q", line) + } + + values, err := util.ParseUint64s(parts[1:]) + if err != nil { + return nil, fmt.Errorf("error parsing NFS metric line: %s", err) + } + + switch metricLine := parts[0]; metricLine { + case "net": + stats.Network, err = parseNetwork(values) + case "rpc": + stats.ClientRPC, err = parseClientRPC(values) + case "proc2": + stats.V2Stats, err = parseV2Stats(values) + case "proc3": + stats.V3Stats, err = parseV3Stats(values) + case "proc4": + stats.ClientV4Stats, err = parseClientV4Stats(values) + default: + return nil, fmt.Errorf("unknown NFS metric line %q", metricLine) + } + if err != nil { + return nil, fmt.Errorf("errors parsing NFS metric line: %s", err) + } + } + + if err := scanner.Err(); err != nil { + return nil, fmt.Errorf("error scanning NFS file: %s", err) + } + + return stats, nil +} diff --git a/vendor/github.com/prometheus/procfs/nfs/parse_nfsd.go b/vendor/github.com/prometheus/procfs/nfs/parse_nfsd.go new file mode 100644 index 00000000..57bb4a35 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/nfs/parse_nfsd.go @@ -0,0 +1,89 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
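Because ParseClientRPCStats above takes any io.Reader, it can be exercised directly against a captured /proc/net/rpc/nfs payload. A minimal sketch with made-up counter values:

```go
// Minimal sketch; the two sample lines carry illustrative values, not real counters.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/prometheus/procfs/nfs"
)

func main() {
	payload := "net 18628 0 18628 6\nrpc 4329785 0 4338291\n"
	stats, err := nfs.ParseClientRPCStats(strings.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	// Lines that are absent (proc2/proc3/proc4) simply leave their structs zeroed.
	fmt.Println(stats.Network.TCPCount, stats.ClientRPC.Retransmissions)
}
```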
+ +package nfs + +import ( + "bufio" + "fmt" + "io" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// ParseServerRPCStats returns stats read from /proc/net/rpc/nfsd +func ParseServerRPCStats(r io.Reader) (*ServerRPCStats, error) { + stats := &ServerRPCStats{} + + scanner := bufio.NewScanner(r) + for scanner.Scan() { + line := scanner.Text() + parts := strings.Fields(scanner.Text()) + // require at least + if len(parts) < 2 { + return nil, fmt.Errorf("invalid NFSd metric line %q", line) + } + label := parts[0] + + var values []uint64 + var err error + if label == "th" { + if len(parts) < 3 { + return nil, fmt.Errorf("invalid NFSd th metric line %q", line) + } + values, err = util.ParseUint64s(parts[1:3]) + } else { + values, err = util.ParseUint64s(parts[1:]) + } + if err != nil { + return nil, fmt.Errorf("error parsing NFSd metric line: %s", err) + } + + switch metricLine := parts[0]; metricLine { + case "rc": + stats.ReplyCache, err = parseReplyCache(values) + case "fh": + stats.FileHandles, err = parseFileHandles(values) + case "io": + stats.InputOutput, err = parseInputOutput(values) + case "th": + stats.Threads, err = parseThreads(values) + case "ra": + stats.ReadAheadCache, err = parseReadAheadCache(values) + case "net": + stats.Network, err = parseNetwork(values) + case "rpc": + stats.ServerRPC, err = parseServerRPC(values) + case "proc2": + stats.V2Stats, err = parseV2Stats(values) + case "proc3": + stats.V3Stats, err = parseV3Stats(values) + case "proc4": + stats.ServerV4Stats, err = parseServerV4Stats(values) + case "proc4ops": + stats.V4Ops, err = parseV4Ops(values) + default: + return nil, fmt.Errorf("unknown NFSd metric line %q", metricLine) + } + if err != nil { + return nil, fmt.Errorf("errors parsing NFSd metric line: %s", err) + } + } + + if err := scanner.Err(); err != nil { + return nil, fmt.Errorf("error scanning NFSd file: %s", err) + } + + return stats, nil +} diff --git a/vendor/github.com/prometheus/procfs/proc.go b/vendor/github.com/prometheus/procfs/proc.go new file mode 100644 index 00000000..7cf5b8ac --- /dev/null +++ b/vendor/github.com/prometheus/procfs/proc.go @@ -0,0 +1,238 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bytes" + "fmt" + "io/ioutil" + "os" + "strconv" + "strings" +) + +// Proc provides information about a running process. +type Proc struct { + // The process ID. + PID int + + fs FS +} + +// Procs represents a list of Proc structs. +type Procs []Proc + +func (p Procs) Len() int { return len(p) } +func (p Procs) Swap(i, j int) { p[i], p[j] = p[j], p[i] } +func (p Procs) Less(i, j int) bool { return p[i].PID < p[j].PID } + +// Self returns a process for the current process read via /proc/self. +func Self() (Proc, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return Proc{}, err + } + return fs.Self() +} + +// NewProc returns a process for the given pid under /proc. 
+func NewProc(pid int) (Proc, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return Proc{}, err + } + return fs.NewProc(pid) +} + +// AllProcs returns a list of all currently available processes under /proc. +func AllProcs() (Procs, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return Procs{}, err + } + return fs.AllProcs() +} + +// Self returns a process for the current process. +func (fs FS) Self() (Proc, error) { + p, err := os.Readlink(fs.Path("self")) + if err != nil { + return Proc{}, err + } + pid, err := strconv.Atoi(strings.Replace(p, string(fs), "", -1)) + if err != nil { + return Proc{}, err + } + return fs.NewProc(pid) +} + +// NewProc returns a process for the given pid. +func (fs FS) NewProc(pid int) (Proc, error) { + if _, err := os.Stat(fs.Path(strconv.Itoa(pid))); err != nil { + return Proc{}, err + } + return Proc{PID: pid, fs: fs}, nil +} + +// AllProcs returns a list of all currently available processes. +func (fs FS) AllProcs() (Procs, error) { + d, err := os.Open(fs.Path()) + if err != nil { + return Procs{}, err + } + defer d.Close() + + names, err := d.Readdirnames(-1) + if err != nil { + return Procs{}, fmt.Errorf("could not read %s: %s", d.Name(), err) + } + + p := Procs{} + for _, n := range names { + pid, err := strconv.ParseInt(n, 10, 64) + if err != nil { + continue + } + p = append(p, Proc{PID: int(pid), fs: fs}) + } + + return p, nil +} + +// CmdLine returns the command line of a process. +func (p Proc) CmdLine() ([]string, error) { + f, err := os.Open(p.path("cmdline")) + if err != nil { + return nil, err + } + defer f.Close() + + data, err := ioutil.ReadAll(f) + if err != nil { + return nil, err + } + + if len(data) < 1 { + return []string{}, nil + } + + return strings.Split(string(bytes.TrimRight(data, string("\x00"))), string(byte(0))), nil +} + +// Comm returns the command name of a process. +func (p Proc) Comm() (string, error) { + f, err := os.Open(p.path("comm")) + if err != nil { + return "", err + } + defer f.Close() + + data, err := ioutil.ReadAll(f) + if err != nil { + return "", err + } + + return strings.TrimSpace(string(data)), nil +} + +// Executable returns the absolute path of the executable command of a process. +func (p Proc) Executable() (string, error) { + exe, err := os.Readlink(p.path("exe")) + if os.IsNotExist(err) { + return "", nil + } + + return exe, err +} + +// FileDescriptors returns the currently open file descriptors of a process. +func (p Proc) FileDescriptors() ([]uintptr, error) { + names, err := p.fileDescriptors() + if err != nil { + return nil, err + } + + fds := make([]uintptr, len(names)) + for i, n := range names { + fd, err := strconv.ParseInt(n, 10, 32) + if err != nil { + return nil, fmt.Errorf("could not parse fd %s: %s", n, err) + } + fds[i] = uintptr(fd) + } + + return fds, nil +} + +// FileDescriptorTargets returns the targets of all file descriptors of a process. +// If a file descriptor is not a symlink to a file (like a socket), that value will be the empty string. +func (p Proc) FileDescriptorTargets() ([]string, error) { + names, err := p.fileDescriptors() + if err != nil { + return nil, err + } + + targets := make([]string, len(names)) + + for i, name := range names { + target, err := os.Readlink(p.path("fd", name)) + if err == nil { + targets[i] = target + } + } + + return targets, nil +} + +// FileDescriptorsLen returns the number of currently open file descriptors of +// a process. 
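Taken together, AllProcs, Comm and the other Proc accessors above support a simple process listing; a short sketch with abbreviated error handling:

```go
// Sketch: list the PID and command name of every process visible under /proc.
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	procs, err := procfs.AllProcs()
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range procs {
		comm, err := p.Comm()
		if err != nil {
			continue // the process may have exited between listing and reading
		}
		fmt.Printf("%d\t%s\n", p.PID, comm)
	}
}
```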
+func (p Proc) FileDescriptorsLen() (int, error) { + fds, err := p.fileDescriptors() + if err != nil { + return 0, err + } + + return len(fds), nil +} + +// MountStats retrieves statistics and configuration for mount points in a +// process's namespace. +func (p Proc) MountStats() ([]*Mount, error) { + f, err := os.Open(p.path("mountstats")) + if err != nil { + return nil, err + } + defer f.Close() + + return parseMountStats(f) +} + +func (p Proc) fileDescriptors() ([]string, error) { + d, err := os.Open(p.path("fd")) + if err != nil { + return nil, err + } + defer d.Close() + + names, err := d.Readdirnames(-1) + if err != nil { + return nil, fmt.Errorf("could not read %s: %s", d.Name(), err) + } + + return names, nil +} + +func (p Proc) path(pa ...string) string { + return p.fs.Path(append([]string{strconv.Itoa(p.PID)}, pa...)...) +} diff --git a/vendor/github.com/prometheus/procfs/proc_io.go b/vendor/github.com/prometheus/procfs/proc_io.go new file mode 100644 index 00000000..0251c83b --- /dev/null +++ b/vendor/github.com/prometheus/procfs/proc_io.go @@ -0,0 +1,65 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "fmt" + "io/ioutil" + "os" +) + +// ProcIO models the content of /proc//io. +type ProcIO struct { + // Chars read. + RChar uint64 + // Chars written. + WChar uint64 + // Read syscalls. + SyscR uint64 + // Write syscalls. + SyscW uint64 + // Bytes read. + ReadBytes uint64 + // Bytes written. + WriteBytes uint64 + // Bytes written, but taking into account truncation. See + // Documentation/filesystems/proc.txt in the kernel sources for + // detailed explanation. + CancelledWriteBytes int64 +} + +// NewIO creates a new ProcIO instance from a given Proc instance. +func (p Proc) NewIO() (ProcIO, error) { + pio := ProcIO{} + + f, err := os.Open(p.path("io")) + if err != nil { + return pio, err + } + defer f.Close() + + data, err := ioutil.ReadAll(f) + if err != nil { + return pio, err + } + + ioFormat := "rchar: %d\nwchar: %d\nsyscr: %d\nsyscw: %d\n" + + "read_bytes: %d\nwrite_bytes: %d\n" + + "cancelled_write_bytes: %d\n" + + _, err = fmt.Sscanf(string(data), ioFormat, &pio.RChar, &pio.WChar, &pio.SyscR, + &pio.SyscW, &pio.ReadBytes, &pio.WriteBytes, &pio.CancelledWriteBytes) + + return pio, err +} diff --git a/vendor/github.com/prometheus/procfs/proc_limits.go b/vendor/github.com/prometheus/procfs/proc_limits.go new file mode 100644 index 00000000..f04ba6fd --- /dev/null +++ b/vendor/github.com/prometheus/procfs/proc_limits.go @@ -0,0 +1,150 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "fmt" + "os" + "regexp" + "strconv" +) + +// ProcLimits represents the soft limits for each of the process's resource +// limits. For more information see getrlimit(2): +// http://man7.org/linux/man-pages/man2/getrlimit.2.html. +type ProcLimits struct { + // CPU time limit in seconds. + CPUTime int64 + // Maximum size of files that the process may create. + FileSize int64 + // Maximum size of the process's data segment (initialized data, + // uninitialized data, and heap). + DataSize int64 + // Maximum size of the process stack in bytes. + StackSize int64 + // Maximum size of a core file. + CoreFileSize int64 + // Limit of the process's resident set in pages. + ResidentSet int64 + // Maximum number of processes that can be created for the real user ID of + // the calling process. + Processes int64 + // Value one greater than the maximum file descriptor number that can be + // opened by this process. + OpenFiles int64 + // Maximum number of bytes of memory that may be locked into RAM. + LockedMemory int64 + // Maximum size of the process's virtual memory address space in bytes. + AddressSpace int64 + // Limit on the combined number of flock(2) locks and fcntl(2) leases that + // this process may establish. + FileLocks int64 + // Limit of signals that may be queued for the real user ID of the calling + // process. + PendingSignals int64 + // Limit on the number of bytes that can be allocated for POSIX message + // queues for the real user ID of the calling process. + MsqqueueSize int64 + // Limit of the nice priority set using setpriority(2) or nice(2). + NicePriority int64 + // Limit of the real-time priority set using sched_setscheduler(2) or + // sched_setparam(2). + RealtimePriority int64 + // Limit (in microseconds) on the amount of CPU time that a process + // scheduled under a real-time scheduling policy may consume without making + // a blocking system call. + RealtimeTimeout int64 +} + +const ( + limitsFields = 3 + limitsUnlimited = "unlimited" +) + +var ( + limitsDelimiter = regexp.MustCompile(" +") +) + +// NewLimits returns the current soft limits of the process. 
+func (p Proc) NewLimits() (ProcLimits, error) { + f, err := os.Open(p.path("limits")) + if err != nil { + return ProcLimits{}, err + } + defer f.Close() + + var ( + l = ProcLimits{} + s = bufio.NewScanner(f) + ) + for s.Scan() { + fields := limitsDelimiter.Split(s.Text(), limitsFields) + if len(fields) != limitsFields { + return ProcLimits{}, fmt.Errorf( + "couldn't parse %s line %s", f.Name(), s.Text()) + } + + switch fields[0] { + case "Max cpu time": + l.CPUTime, err = parseInt(fields[1]) + case "Max file size": + l.FileSize, err = parseInt(fields[1]) + case "Max data size": + l.DataSize, err = parseInt(fields[1]) + case "Max stack size": + l.StackSize, err = parseInt(fields[1]) + case "Max core file size": + l.CoreFileSize, err = parseInt(fields[1]) + case "Max resident set": + l.ResidentSet, err = parseInt(fields[1]) + case "Max processes": + l.Processes, err = parseInt(fields[1]) + case "Max open files": + l.OpenFiles, err = parseInt(fields[1]) + case "Max locked memory": + l.LockedMemory, err = parseInt(fields[1]) + case "Max address space": + l.AddressSpace, err = parseInt(fields[1]) + case "Max file locks": + l.FileLocks, err = parseInt(fields[1]) + case "Max pending signals": + l.PendingSignals, err = parseInt(fields[1]) + case "Max msgqueue size": + l.MsqqueueSize, err = parseInt(fields[1]) + case "Max nice priority": + l.NicePriority, err = parseInt(fields[1]) + case "Max realtime priority": + l.RealtimePriority, err = parseInt(fields[1]) + case "Max realtime timeout": + l.RealtimeTimeout, err = parseInt(fields[1]) + } + if err != nil { + return ProcLimits{}, err + } + } + + return l, s.Err() +} + +func parseInt(s string) (int64, error) { + if s == limitsUnlimited { + return -1, nil + } + i, err := strconv.ParseInt(s, 10, 64) + if err != nil { + return 0, fmt.Errorf("couldn't parse value %s: %s", s, err) + } + return i, nil +} diff --git a/vendor/github.com/prometheus/procfs/proc_ns.go b/vendor/github.com/prometheus/procfs/proc_ns.go new file mode 100644 index 00000000..d06c26eb --- /dev/null +++ b/vendor/github.com/prometheus/procfs/proc_ns.go @@ -0,0 +1,68 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "fmt" + "os" + "strconv" + "strings" +) + +// Namespace represents a single namespace of a process. +type Namespace struct { + Type string // Namespace type. + Inode uint32 // Inode number of the namespace. If two processes are in the same namespace their inodes will match. +} + +// Namespaces contains all of the namespaces that the process is contained in. +type Namespaces map[string]Namespace + +// NewNamespaces reads from /proc/[pid/ns/* to get the namespaces of which the +// process is a member. 
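A quick way to see NewLimits in action is to read the limits of the calling process via Self, defined earlier in proc.go; a minimal sketch:

```go
// Sketch: print the soft open-file limit of the current process.
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	p, err := procfs.Self()
	if err != nil {
		log.Fatal(err)
	}
	limits, err := p.NewLimits()
	if err != nil {
		log.Fatal(err)
	}
	// A value of -1 means "unlimited", as mapped by parseInt above.
	fmt.Println("max open files:", limits.OpenFiles)
}
```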
+func (p Proc) NewNamespaces() (Namespaces, error) { + d, err := os.Open(p.path("ns")) + if err != nil { + return nil, err + } + defer d.Close() + + names, err := d.Readdirnames(-1) + if err != nil { + return nil, fmt.Errorf("failed to read contents of ns dir: %v", err) + } + + ns := make(Namespaces, len(names)) + for _, name := range names { + target, err := os.Readlink(p.path("ns", name)) + if err != nil { + return nil, err + } + + fields := strings.SplitN(target, ":", 2) + if len(fields) != 2 { + return nil, fmt.Errorf("failed to parse namespace type and inode from '%v'", target) + } + + typ := fields[0] + inode, err := strconv.ParseUint(strings.Trim(fields[1], "[]"), 10, 32) + if err != nil { + return nil, fmt.Errorf("failed to parse inode from '%v': %v", fields[1], err) + } + + ns[name] = Namespace{typ, uint32(inode)} + } + + return ns, nil +} diff --git a/vendor/github.com/prometheus/procfs/proc_stat.go b/vendor/github.com/prometheus/procfs/proc_stat.go new file mode 100644 index 00000000..3cf2a9f1 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/proc_stat.go @@ -0,0 +1,188 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bytes" + "fmt" + "io/ioutil" + "os" +) + +// Originally, this USER_HZ value was dynamically retrieved via a sysconf call +// which required cgo. However, that caused a lot of problems regarding +// cross-compilation. Alternatives such as running a binary to determine the +// value, or trying to derive it in some other way were all problematic. After +// much research it was determined that USER_HZ is actually hardcoded to 100 on +// all Go-supported platforms as of the time of this writing. This is why we +// decided to hardcode it here as well. It is not impossible that there could +// be systems with exceptions, but they should be very exotic edge cases, and +// in that case, the worst outcome will be two misreported metrics. +// +// See also the following discussions: +// +// - https://github.com/prometheus/node_exporter/issues/52 +// - https://github.com/prometheus/procfs/pull/2 +// - http://stackoverflow.com/questions/17410841/how-does-user-hz-solve-the-jiffy-scaling-issue +const userHZ = 100 + +// ProcStat provides status information about the process, +// read from /proc/[pid]/stat. +type ProcStat struct { + // The process ID. + PID int + // The filename of the executable. + Comm string + // The process state. + State string + // The PID of the parent of this process. + PPID int + // The process group ID of the process. + PGRP int + // The session ID of the process. + Session int + // The controlling terminal of the process. + TTY int + // The ID of the foreground process group of the controlling terminal of + // the process. + TPGID int + // The kernel flags word of the process. + Flags uint + // The number of minor faults the process has made which have not required + // loading a memory page from disk. 
+ MinFlt uint + // The number of minor faults that the process's waited-for children have + // made. + CMinFlt uint + // The number of major faults the process has made which have required + // loading a memory page from disk. + MajFlt uint + // The number of major faults that the process's waited-for children have + // made. + CMajFlt uint + // Amount of time that this process has been scheduled in user mode, + // measured in clock ticks. + UTime uint + // Amount of time that this process has been scheduled in kernel mode, + // measured in clock ticks. + STime uint + // Amount of time that this process's waited-for children have been + // scheduled in user mode, measured in clock ticks. + CUTime uint + // Amount of time that this process's waited-for children have been + // scheduled in kernel mode, measured in clock ticks. + CSTime uint + // For processes running a real-time scheduling policy, this is the negated + // scheduling priority, minus one. + Priority int + // The nice value, a value in the range 19 (low priority) to -20 (high + // priority). + Nice int + // Number of threads in this process. + NumThreads int + // The time the process started after system boot, the value is expressed + // in clock ticks. + Starttime uint64 + // Virtual memory size in bytes. + VSize int + // Resident set size in pages. + RSS int + + fs FS +} + +// NewStat returns the current status information of the process. +func (p Proc) NewStat() (ProcStat, error) { + f, err := os.Open(p.path("stat")) + if err != nil { + return ProcStat{}, err + } + defer f.Close() + + data, err := ioutil.ReadAll(f) + if err != nil { + return ProcStat{}, err + } + + var ( + ignore int + + s = ProcStat{PID: p.PID, fs: p.fs} + l = bytes.Index(data, []byte("(")) + r = bytes.LastIndex(data, []byte(")")) + ) + + if l < 0 || r < 0 { + return ProcStat{}, fmt.Errorf( + "unexpected format, couldn't extract comm: %s", + data, + ) + } + + s.Comm = string(data[l+1 : r]) + _, err = fmt.Fscan( + bytes.NewBuffer(data[r+2:]), + &s.State, + &s.PPID, + &s.PGRP, + &s.Session, + &s.TTY, + &s.TPGID, + &s.Flags, + &s.MinFlt, + &s.CMinFlt, + &s.MajFlt, + &s.CMajFlt, + &s.UTime, + &s.STime, + &s.CUTime, + &s.CSTime, + &s.Priority, + &s.Nice, + &s.NumThreads, + &ignore, + &s.Starttime, + &s.VSize, + &s.RSS, + ) + if err != nil { + return ProcStat{}, err + } + + return s, nil +} + +// VirtualMemory returns the virtual memory size in bytes. +func (s ProcStat) VirtualMemory() int { + return s.VSize +} + +// ResidentMemory returns the resident memory size in bytes. +func (s ProcStat) ResidentMemory() int { + return s.RSS * os.Getpagesize() +} + +// StartTime returns the unix timestamp of the process in seconds. +func (s ProcStat) StartTime() (float64, error) { + stat, err := s.fs.NewStat() + if err != nil { + return 0, err + } + return float64(stat.BootTime) + (float64(s.Starttime) / userHZ), nil +} + +// CPUTime returns the total CPU user and system time in seconds. +func (s ProcStat) CPUTime() float64 { + return float64(s.UTime+s.STime) / userHZ +} diff --git a/vendor/github.com/prometheus/procfs/stat.go b/vendor/github.com/prometheus/procfs/stat.go new file mode 100644 index 00000000..61eb6b0e --- /dev/null +++ b/vendor/github.com/prometheus/procfs/stat.go @@ -0,0 +1,232 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
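The NewStat, CPUTime and ResidentMemory helpers above combine into a small per-process resource snapshot; a minimal sketch:

```go
// Sketch: CPU seconds and resident memory of the current process.
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	p, err := procfs.Self()
	if err != nil {
		log.Fatal(err)
	}
	stat, err := p.NewStat()
	if err != nil {
		log.Fatal(err)
	}
	// CPUTime is already scaled by USER_HZ; ResidentMemory is RSS pages * page size.
	fmt.Printf("%s: cpu=%.2fs rss=%d bytes\n", stat.Comm, stat.CPUTime(), stat.ResidentMemory())
}
```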
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "fmt" + "io" + "os" + "strconv" + "strings" +) + +// CPUStat shows how much time the cpu spend in various stages. +type CPUStat struct { + User float64 + Nice float64 + System float64 + Idle float64 + Iowait float64 + IRQ float64 + SoftIRQ float64 + Steal float64 + Guest float64 + GuestNice float64 +} + +// SoftIRQStat represent the softirq statistics as exported in the procfs stat file. +// A nice introduction can be found at https://0xax.gitbooks.io/linux-insides/content/interrupts/interrupts-9.html +// It is possible to get per-cpu stats by reading /proc/softirqs +type SoftIRQStat struct { + Hi uint64 + Timer uint64 + NetTx uint64 + NetRx uint64 + Block uint64 + BlockIoPoll uint64 + Tasklet uint64 + Sched uint64 + Hrtimer uint64 + Rcu uint64 +} + +// Stat represents kernel/system statistics. +type Stat struct { + // Boot time in seconds since the Epoch. + BootTime uint64 + // Summed up cpu statistics. + CPUTotal CPUStat + // Per-CPU statistics. + CPU []CPUStat + // Number of times interrupts were handled, which contains numbered and unnumbered IRQs. + IRQTotal uint64 + // Number of times a numbered IRQ was triggered. + IRQ []uint64 + // Number of times a context switch happened. + ContextSwitches uint64 + // Number of times a process was created. + ProcessCreated uint64 + // Number of processes currently running. + ProcessesRunning uint64 + // Number of processes currently blocked (waiting for IO). + ProcessesBlocked uint64 + // Number of times a softirq was scheduled. + SoftIRQTotal uint64 + // Detailed softirq statistics. + SoftIRQ SoftIRQStat +} + +// NewStat returns kernel/system statistics read from /proc/stat. +func NewStat() (Stat, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return Stat{}, err + } + + return fs.NewStat() +} + +// Parse a cpu statistics line and returns the CPUStat struct plus the cpu id (or -1 for the overall sum). +func parseCPUStat(line string) (CPUStat, int64, error) { + cpuStat := CPUStat{} + var cpu string + + count, err := fmt.Sscanf(line, "%s %f %f %f %f %f %f %f %f %f %f", + &cpu, + &cpuStat.User, &cpuStat.Nice, &cpuStat.System, &cpuStat.Idle, + &cpuStat.Iowait, &cpuStat.IRQ, &cpuStat.SoftIRQ, &cpuStat.Steal, + &cpuStat.Guest, &cpuStat.GuestNice) + + if err != nil && err != io.EOF { + return CPUStat{}, -1, fmt.Errorf("couldn't parse %s (cpu): %s", line, err) + } + if count == 0 { + return CPUStat{}, -1, fmt.Errorf("couldn't parse %s (cpu): 0 elements parsed", line) + } + + cpuStat.User /= userHZ + cpuStat.Nice /= userHZ + cpuStat.System /= userHZ + cpuStat.Idle /= userHZ + cpuStat.Iowait /= userHZ + cpuStat.IRQ /= userHZ + cpuStat.SoftIRQ /= userHZ + cpuStat.Steal /= userHZ + cpuStat.Guest /= userHZ + cpuStat.GuestNice /= userHZ + + if cpu == "cpu" { + return cpuStat, -1, nil + } + + cpuID, err := strconv.ParseInt(cpu[3:], 10, 64) + if err != nil { + return CPUStat{}, -1, fmt.Errorf("couldn't parse %s (cpu/cpuid): %s", line, err) + } + + return cpuStat, cpuID, nil +} + +// Parse a softirq line. 
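The package-level NewStat above exposes whole-system counters from /proc/stat; a minimal sketch:

```go
// Sketch: a few system-wide counters read from /proc/stat.
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	stat, err := procfs.NewStat()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("boot time (unix):", stat.BootTime)
	fmt.Println("context switches:", stat.ContextSwitches)
	// CPU times are already divided by USER_HZ, i.e. expressed in seconds.
	fmt.Println("total user cpu seconds:", stat.CPUTotal.User)
}
```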
+func parseSoftIRQStat(line string) (SoftIRQStat, uint64, error) { + softIRQStat := SoftIRQStat{} + var total uint64 + var prefix string + + _, err := fmt.Sscanf(line, "%s %d %d %d %d %d %d %d %d %d %d %d", + &prefix, &total, + &softIRQStat.Hi, &softIRQStat.Timer, &softIRQStat.NetTx, &softIRQStat.NetRx, + &softIRQStat.Block, &softIRQStat.BlockIoPoll, + &softIRQStat.Tasklet, &softIRQStat.Sched, + &softIRQStat.Hrtimer, &softIRQStat.Rcu) + + if err != nil { + return SoftIRQStat{}, 0, fmt.Errorf("couldn't parse %s (softirq): %s", line, err) + } + + return softIRQStat, total, nil +} + +// NewStat returns an information about current kernel/system statistics. +func (fs FS) NewStat() (Stat, error) { + // See https://www.kernel.org/doc/Documentation/filesystems/proc.txt + + f, err := os.Open(fs.Path("stat")) + if err != nil { + return Stat{}, err + } + defer f.Close() + + stat := Stat{} + + scanner := bufio.NewScanner(f) + for scanner.Scan() { + line := scanner.Text() + parts := strings.Fields(scanner.Text()) + // require at least + if len(parts) < 2 { + continue + } + switch { + case parts[0] == "btime": + if stat.BootTime, err = strconv.ParseUint(parts[1], 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (btime): %s", parts[1], err) + } + case parts[0] == "intr": + if stat.IRQTotal, err = strconv.ParseUint(parts[1], 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (intr): %s", parts[1], err) + } + numberedIRQs := parts[2:] + stat.IRQ = make([]uint64, len(numberedIRQs)) + for i, count := range numberedIRQs { + if stat.IRQ[i], err = strconv.ParseUint(count, 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (intr%d): %s", count, i, err) + } + } + case parts[0] == "ctxt": + if stat.ContextSwitches, err = strconv.ParseUint(parts[1], 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (ctxt): %s", parts[1], err) + } + case parts[0] == "processes": + if stat.ProcessCreated, err = strconv.ParseUint(parts[1], 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (processes): %s", parts[1], err) + } + case parts[0] == "procs_running": + if stat.ProcessesRunning, err = strconv.ParseUint(parts[1], 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (procs_running): %s", parts[1], err) + } + case parts[0] == "procs_blocked": + if stat.ProcessesBlocked, err = strconv.ParseUint(parts[1], 10, 64); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s (procs_blocked): %s", parts[1], err) + } + case parts[0] == "softirq": + softIRQStats, total, err := parseSoftIRQStat(line) + if err != nil { + return Stat{}, err + } + stat.SoftIRQTotal = total + stat.SoftIRQ = softIRQStats + case strings.HasPrefix(parts[0], "cpu"): + cpuStat, cpuID, err := parseCPUStat(line) + if err != nil { + return Stat{}, err + } + if cpuID == -1 { + stat.CPUTotal = cpuStat + } else { + for int64(len(stat.CPU)) <= cpuID { + stat.CPU = append(stat.CPU, CPUStat{}) + } + stat.CPU[cpuID] = cpuStat + } + } + } + + if err := scanner.Err(); err != nil { + return Stat{}, fmt.Errorf("couldn't parse %s: %s", f.Name(), err) + } + + return stat, nil +} diff --git a/vendor/github.com/prometheus/procfs/xfrm.go b/vendor/github.com/prometheus/procfs/xfrm.go new file mode 100644 index 00000000..ffe9df50 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/xfrm.go @@ -0,0 +1,187 @@ +// Copyright 2017 Prometheus Team +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the 
License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "fmt" + "os" + "strconv" + "strings" +) + +// XfrmStat models the contents of /proc/net/xfrm_stat. +type XfrmStat struct { + // All errors which are not matched by other + XfrmInError int + // No buffer is left + XfrmInBufferError int + // Header Error + XfrmInHdrError int + // No state found + // i.e. either inbound SPI, address, or IPSEC protocol at SA is wrong + XfrmInNoStates int + // Transformation protocol specific error + // e.g. SA Key is wrong + XfrmInStateProtoError int + // Transformation mode specific error + XfrmInStateModeError int + // Sequence error + // e.g. sequence number is out of window + XfrmInStateSeqError int + // State is expired + XfrmInStateExpired int + // State has mismatch option + // e.g. UDP encapsulation type is mismatched + XfrmInStateMismatch int + // State is invalid + XfrmInStateInvalid int + // No matching template for states + // e.g. Inbound SAs are correct but SP rule is wrong + XfrmInTmplMismatch int + // No policy is found for states + // e.g. Inbound SAs are correct but no SP is found + XfrmInNoPols int + // Policy discards + XfrmInPolBlock int + // Policy error + XfrmInPolError int + // All errors which are not matched by others + XfrmOutError int + // Bundle generation error + XfrmOutBundleGenError int + // Bundle check error + XfrmOutBundleCheckError int + // No state was found + XfrmOutNoStates int + // Transformation protocol specific error + XfrmOutStateProtoError int + // Transportation mode specific error + XfrmOutStateModeError int + // Sequence error + // i.e sequence number overflow + XfrmOutStateSeqError int + // State is expired + XfrmOutStateExpired int + // Policy discads + XfrmOutPolBlock int + // Policy is dead + XfrmOutPolDead int + // Policy Error + XfrmOutPolError int + XfrmFwdHdrError int + XfrmOutStateInvalid int + XfrmAcquireError int +} + +// NewXfrmStat reads the xfrm_stat statistics. +func NewXfrmStat() (XfrmStat, error) { + fs, err := NewFS(DefaultMountPoint) + if err != nil { + return XfrmStat{}, err + } + + return fs.NewXfrmStat() +} + +// NewXfrmStat reads the xfrm_stat statistics from the 'proc' filesystem. 
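The package-level NewXfrmStat above reads /proc/net/xfrm_stat through the default mount point; a minimal sketch surfacing a couple of the counters defined in the struct above:

```go
// Sketch: print two IPsec/XFRM error counters.
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	x, err := procfs.NewXfrmStat()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("XfrmInNoStates:", x.XfrmInNoStates)
	fmt.Println("XfrmOutError:  ", x.XfrmOutError)
}
```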
+func (fs FS) NewXfrmStat() (XfrmStat, error) { + file, err := os.Open(fs.Path("net/xfrm_stat")) + if err != nil { + return XfrmStat{}, err + } + defer file.Close() + + var ( + x = XfrmStat{} + s = bufio.NewScanner(file) + ) + + for s.Scan() { + fields := strings.Fields(s.Text()) + + if len(fields) != 2 { + return XfrmStat{}, fmt.Errorf( + "couldnt parse %s line %s", file.Name(), s.Text()) + } + + name := fields[0] + value, err := strconv.Atoi(fields[1]) + if err != nil { + return XfrmStat{}, err + } + + switch name { + case "XfrmInError": + x.XfrmInError = value + case "XfrmInBufferError": + x.XfrmInBufferError = value + case "XfrmInHdrError": + x.XfrmInHdrError = value + case "XfrmInNoStates": + x.XfrmInNoStates = value + case "XfrmInStateProtoError": + x.XfrmInStateProtoError = value + case "XfrmInStateModeError": + x.XfrmInStateModeError = value + case "XfrmInStateSeqError": + x.XfrmInStateSeqError = value + case "XfrmInStateExpired": + x.XfrmInStateExpired = value + case "XfrmInStateInvalid": + x.XfrmInStateInvalid = value + case "XfrmInTmplMismatch": + x.XfrmInTmplMismatch = value + case "XfrmInNoPols": + x.XfrmInNoPols = value + case "XfrmInPolBlock": + x.XfrmInPolBlock = value + case "XfrmInPolError": + x.XfrmInPolError = value + case "XfrmOutError": + x.XfrmOutError = value + case "XfrmInStateMismatch": + x.XfrmInStateMismatch = value + case "XfrmOutBundleGenError": + x.XfrmOutBundleGenError = value + case "XfrmOutBundleCheckError": + x.XfrmOutBundleCheckError = value + case "XfrmOutNoStates": + x.XfrmOutNoStates = value + case "XfrmOutStateProtoError": + x.XfrmOutStateProtoError = value + case "XfrmOutStateModeError": + x.XfrmOutStateModeError = value + case "XfrmOutStateSeqError": + x.XfrmOutStateSeqError = value + case "XfrmOutStateExpired": + x.XfrmOutStateExpired = value + case "XfrmOutPolBlock": + x.XfrmOutPolBlock = value + case "XfrmOutPolDead": + x.XfrmOutPolDead = value + case "XfrmOutPolError": + x.XfrmOutPolError = value + case "XfrmFwdHdrError": + x.XfrmFwdHdrError = value + case "XfrmOutStateInvalid": + x.XfrmOutStateInvalid = value + case "XfrmAcquireError": + x.XfrmAcquireError = value + } + + } + + return x, s.Err() +} diff --git a/vendor/github.com/prometheus/procfs/xfs/parse.go b/vendor/github.com/prometheus/procfs/xfs/parse.go new file mode 100644 index 00000000..2bc0ef34 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/xfs/parse.go @@ -0,0 +1,330 @@ +// Copyright 2017 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package xfs + +import ( + "bufio" + "fmt" + "io" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// ParseStats parses a Stats from an input io.Reader, using the format +// found in /proc/fs/xfs/stat. +func ParseStats(r io.Reader) (*Stats, error) { + const ( + // Fields parsed into stats structures. 
+ fieldExtentAlloc = "extent_alloc" + fieldAbt = "abt" + fieldBlkMap = "blk_map" + fieldBmbt = "bmbt" + fieldDir = "dir" + fieldTrans = "trans" + fieldIg = "ig" + fieldLog = "log" + fieldRw = "rw" + fieldAttr = "attr" + fieldIcluster = "icluster" + fieldVnodes = "vnodes" + fieldBuf = "buf" + fieldXpc = "xpc" + + // Unimplemented at this time due to lack of documentation. + fieldPushAil = "push_ail" + fieldXstrat = "xstrat" + fieldAbtb2 = "abtb2" + fieldAbtc2 = "abtc2" + fieldBmbt2 = "bmbt2" + fieldIbt2 = "ibt2" + fieldFibt2 = "fibt2" + fieldQm = "qm" + fieldDebug = "debug" + ) + + var xfss Stats + + s := bufio.NewScanner(r) + for s.Scan() { + // Expect at least a string label and a single integer value, ex: + // - abt 0 + // - rw 1 2 + ss := strings.Fields(string(s.Bytes())) + if len(ss) < 2 { + continue + } + label := ss[0] + + // Extended precision counters are uint64 values. + if label == fieldXpc { + us, err := util.ParseUint64s(ss[1:]) + if err != nil { + return nil, err + } + + xfss.ExtendedPrecision, err = extendedPrecisionStats(us) + if err != nil { + return nil, err + } + + continue + } + + // All other counters are uint32 values. + us, err := util.ParseUint32s(ss[1:]) + if err != nil { + return nil, err + } + + switch label { + case fieldExtentAlloc: + xfss.ExtentAllocation, err = extentAllocationStats(us) + case fieldAbt: + xfss.AllocationBTree, err = btreeStats(us) + case fieldBlkMap: + xfss.BlockMapping, err = blockMappingStats(us) + case fieldBmbt: + xfss.BlockMapBTree, err = btreeStats(us) + case fieldDir: + xfss.DirectoryOperation, err = directoryOperationStats(us) + case fieldTrans: + xfss.Transaction, err = transactionStats(us) + case fieldIg: + xfss.InodeOperation, err = inodeOperationStats(us) + case fieldLog: + xfss.LogOperation, err = logOperationStats(us) + case fieldRw: + xfss.ReadWrite, err = readWriteStats(us) + case fieldAttr: + xfss.AttributeOperation, err = attributeOperationStats(us) + case fieldIcluster: + xfss.InodeClustering, err = inodeClusteringStats(us) + case fieldVnodes: + xfss.Vnode, err = vnodeStats(us) + case fieldBuf: + xfss.Buffer, err = bufferStats(us) + } + if err != nil { + return nil, err + } + } + + return &xfss, s.Err() +} + +// extentAllocationStats builds an ExtentAllocationStats from a slice of uint32s. +func extentAllocationStats(us []uint32) (ExtentAllocationStats, error) { + if l := len(us); l != 4 { + return ExtentAllocationStats{}, fmt.Errorf("incorrect number of values for XFS extent allocation stats: %d", l) + } + + return ExtentAllocationStats{ + ExtentsAllocated: us[0], + BlocksAllocated: us[1], + ExtentsFreed: us[2], + BlocksFreed: us[3], + }, nil +} + +// btreeStats builds a BTreeStats from a slice of uint32s. +func btreeStats(us []uint32) (BTreeStats, error) { + if l := len(us); l != 4 { + return BTreeStats{}, fmt.Errorf("incorrect number of values for XFS btree stats: %d", l) + } + + return BTreeStats{ + Lookups: us[0], + Compares: us[1], + RecordsInserted: us[2], + RecordsDeleted: us[3], + }, nil +} + +// BlockMappingStat builds a BlockMappingStats from a slice of uint32s. 
+func blockMappingStats(us []uint32) (BlockMappingStats, error) { + if l := len(us); l != 7 { + return BlockMappingStats{}, fmt.Errorf("incorrect number of values for XFS block mapping stats: %d", l) + } + + return BlockMappingStats{ + Reads: us[0], + Writes: us[1], + Unmaps: us[2], + ExtentListInsertions: us[3], + ExtentListDeletions: us[4], + ExtentListLookups: us[5], + ExtentListCompares: us[6], + }, nil +} + +// DirectoryOperationStats builds a DirectoryOperationStats from a slice of uint32s. +func directoryOperationStats(us []uint32) (DirectoryOperationStats, error) { + if l := len(us); l != 4 { + return DirectoryOperationStats{}, fmt.Errorf("incorrect number of values for XFS directory operation stats: %d", l) + } + + return DirectoryOperationStats{ + Lookups: us[0], + Creates: us[1], + Removes: us[2], + Getdents: us[3], + }, nil +} + +// TransactionStats builds a TransactionStats from a slice of uint32s. +func transactionStats(us []uint32) (TransactionStats, error) { + if l := len(us); l != 3 { + return TransactionStats{}, fmt.Errorf("incorrect number of values for XFS transaction stats: %d", l) + } + + return TransactionStats{ + Sync: us[0], + Async: us[1], + Empty: us[2], + }, nil +} + +// InodeOperationStats builds an InodeOperationStats from a slice of uint32s. +func inodeOperationStats(us []uint32) (InodeOperationStats, error) { + if l := len(us); l != 7 { + return InodeOperationStats{}, fmt.Errorf("incorrect number of values for XFS inode operation stats: %d", l) + } + + return InodeOperationStats{ + Attempts: us[0], + Found: us[1], + Recycle: us[2], + Missed: us[3], + Duplicate: us[4], + Reclaims: us[5], + AttributeChange: us[6], + }, nil +} + +// LogOperationStats builds a LogOperationStats from a slice of uint32s. +func logOperationStats(us []uint32) (LogOperationStats, error) { + if l := len(us); l != 5 { + return LogOperationStats{}, fmt.Errorf("incorrect number of values for XFS log operation stats: %d", l) + } + + return LogOperationStats{ + Writes: us[0], + Blocks: us[1], + NoInternalBuffers: us[2], + Force: us[3], + ForceSleep: us[4], + }, nil +} + +// ReadWriteStats builds a ReadWriteStats from a slice of uint32s. +func readWriteStats(us []uint32) (ReadWriteStats, error) { + if l := len(us); l != 2 { + return ReadWriteStats{}, fmt.Errorf("incorrect number of values for XFS read write stats: %d", l) + } + + return ReadWriteStats{ + Read: us[0], + Write: us[1], + }, nil +} + +// AttributeOperationStats builds an AttributeOperationStats from a slice of uint32s. +func attributeOperationStats(us []uint32) (AttributeOperationStats, error) { + if l := len(us); l != 4 { + return AttributeOperationStats{}, fmt.Errorf("incorrect number of values for XFS attribute operation stats: %d", l) + } + + return AttributeOperationStats{ + Get: us[0], + Set: us[1], + Remove: us[2], + List: us[3], + }, nil +} + +// InodeClusteringStats builds an InodeClusteringStats from a slice of uint32s. +func inodeClusteringStats(us []uint32) (InodeClusteringStats, error) { + if l := len(us); l != 3 { + return InodeClusteringStats{}, fmt.Errorf("incorrect number of values for XFS inode clustering stats: %d", l) + } + + return InodeClusteringStats{ + Iflush: us[0], + Flush: us[1], + FlushInode: us[2], + }, nil +} + +// VnodeStats builds a VnodeStats from a slice of uint32s. +func vnodeStats(us []uint32) (VnodeStats, error) { + // The attribute "Free" appears to not be available on older XFS + // stats versions. Therefore, 7 or 8 elements may appear in + // this slice. 
+ l := len(us) + if l != 7 && l != 8 { + return VnodeStats{}, fmt.Errorf("incorrect number of values for XFS vnode stats: %d", l) + } + + s := VnodeStats{ + Active: us[0], + Allocate: us[1], + Get: us[2], + Hold: us[3], + Release: us[4], + Reclaim: us[5], + Remove: us[6], + } + + // Skip adding free, unless it is present. The zero value will + // be used in place of an actual count. + if l == 7 { + return s, nil + } + + s.Free = us[7] + return s, nil +} + +// BufferStats builds a BufferStats from a slice of uint32s. +func bufferStats(us []uint32) (BufferStats, error) { + if l := len(us); l != 9 { + return BufferStats{}, fmt.Errorf("incorrect number of values for XFS buffer stats: %d", l) + } + + return BufferStats{ + Get: us[0], + Create: us[1], + GetLocked: us[2], + GetLockedWaited: us[3], + BusyLocked: us[4], + MissLocked: us[5], + PageRetries: us[6], + PageFound: us[7], + GetRead: us[8], + }, nil +} + +// ExtendedPrecisionStats builds an ExtendedPrecisionStats from a slice of uint32s. +func extendedPrecisionStats(us []uint64) (ExtendedPrecisionStats, error) { + if l := len(us); l != 3 { + return ExtendedPrecisionStats{}, fmt.Errorf("incorrect number of values for XFS extended precision stats: %d", l) + } + + return ExtendedPrecisionStats{ + FlushBytes: us[0], + WriteBytes: us[1], + ReadBytes: us[2], + }, nil +} diff --git a/vendor/github.com/prometheus/procfs/xfs/xfs.go b/vendor/github.com/prometheus/procfs/xfs/xfs.go new file mode 100644 index 00000000..d86794b7 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/xfs/xfs.go @@ -0,0 +1,163 @@ +// Copyright 2017 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package xfs provides access to statistics exposed by the XFS filesystem. +package xfs + +// Stats contains XFS filesystem runtime statistics, parsed from +// /proc/fs/xfs/stat. +// +// The names and meanings of each statistic were taken from +// http://xfs.org/index.php/Runtime_Stats and xfs_stats.h in the Linux +// kernel source. Most counters are uint32s (same data types used in +// xfs_stats.h), but some of the "extended precision stats" are uint64s. +type Stats struct { + // The name of the filesystem used to source these statistics. + // If empty, this indicates aggregated statistics for all XFS + // filesystems on the host. + Name string + + ExtentAllocation ExtentAllocationStats + AllocationBTree BTreeStats + BlockMapping BlockMappingStats + BlockMapBTree BTreeStats + DirectoryOperation DirectoryOperationStats + Transaction TransactionStats + InodeOperation InodeOperationStats + LogOperation LogOperationStats + ReadWrite ReadWriteStats + AttributeOperation AttributeOperationStats + InodeClustering InodeClusteringStats + Vnode VnodeStats + Buffer BufferStats + ExtendedPrecision ExtendedPrecisionStats +} + +// ExtentAllocationStats contains statistics regarding XFS extent allocations. 
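Like the NFS parsers, ParseStats earlier in this file accepts any io.Reader, so the XFS parser can be exercised against a captured /proc/fs/xfs/stat snippet; the values below are made up:

```go
// Sketch: parse two XFS stat lines with illustrative values.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/prometheus/procfs/xfs"
)

func main() {
	payload := "extent_alloc 1 872 2 924\nrw 4452 1921\n"
	stats, err := xfs.ParseStats(strings.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("blocks allocated:", stats.ExtentAllocation.BlocksAllocated)
	fmt.Println("read calls:      ", stats.ReadWrite.Read)
}
```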
+type ExtentAllocationStats struct { + ExtentsAllocated uint32 + BlocksAllocated uint32 + ExtentsFreed uint32 + BlocksFreed uint32 +} + +// BTreeStats contains statistics regarding an XFS internal B-tree. +type BTreeStats struct { + Lookups uint32 + Compares uint32 + RecordsInserted uint32 + RecordsDeleted uint32 +} + +// BlockMappingStats contains statistics regarding XFS block maps. +type BlockMappingStats struct { + Reads uint32 + Writes uint32 + Unmaps uint32 + ExtentListInsertions uint32 + ExtentListDeletions uint32 + ExtentListLookups uint32 + ExtentListCompares uint32 +} + +// DirectoryOperationStats contains statistics regarding XFS directory entries. +type DirectoryOperationStats struct { + Lookups uint32 + Creates uint32 + Removes uint32 + Getdents uint32 +} + +// TransactionStats contains statistics regarding XFS metadata transactions. +type TransactionStats struct { + Sync uint32 + Async uint32 + Empty uint32 +} + +// InodeOperationStats contains statistics regarding XFS inode operations. +type InodeOperationStats struct { + Attempts uint32 + Found uint32 + Recycle uint32 + Missed uint32 + Duplicate uint32 + Reclaims uint32 + AttributeChange uint32 +} + +// LogOperationStats contains statistics regarding the XFS log buffer. +type LogOperationStats struct { + Writes uint32 + Blocks uint32 + NoInternalBuffers uint32 + Force uint32 + ForceSleep uint32 +} + +// ReadWriteStats contains statistics regarding the number of read and write +// system calls for XFS filesystems. +type ReadWriteStats struct { + Read uint32 + Write uint32 +} + +// AttributeOperationStats contains statistics regarding manipulation of +// XFS extended file attributes. +type AttributeOperationStats struct { + Get uint32 + Set uint32 + Remove uint32 + List uint32 +} + +// InodeClusteringStats contains statistics regarding XFS inode clustering +// operations. +type InodeClusteringStats struct { + Iflush uint32 + Flush uint32 + FlushInode uint32 +} + +// VnodeStats contains statistics regarding XFS vnode operations. +type VnodeStats struct { + Active uint32 + Allocate uint32 + Get uint32 + Hold uint32 + Release uint32 + Reclaim uint32 + Remove uint32 + Free uint32 +} + +// BufferStats contains statistics regarding XFS read/write I/O buffers. +type BufferStats struct { + Get uint32 + Create uint32 + GetLocked uint32 + GetLockedWaited uint32 + BusyLocked uint32 + MissLocked uint32 + PageRetries uint32 + PageFound uint32 + GetRead uint32 +} + +// ExtendedPrecisionStats contains high precision counters used to track the +// total number of bytes read, written, or flushed, during XFS operations. +type ExtendedPrecisionStats struct { + FlushBytes uint64 + WriteBytes uint64 + ReadBytes uint64 +} diff --git a/vendor/github.com/satori/go.uuid/LICENSE b/vendor/github.com/satori/go.uuid/LICENSE new file mode 100644 index 00000000..926d5498 --- /dev/null +++ b/vendor/github.com/satori/go.uuid/LICENSE @@ -0,0 +1,20 @@ +Copyright (C) 2013-2018 by Maxim Bublis + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/satori/go.uuid/codec.go b/vendor/github.com/satori/go.uuid/codec.go new file mode 100644 index 00000000..656892c5 --- /dev/null +++ b/vendor/github.com/satori/go.uuid/codec.go @@ -0,0 +1,206 @@ +// Copyright (C) 2013-2018 by Maxim Bublis +// +// Permission is hereby granted, free of charge, to any person obtaining +// a copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to +// permit persons to whom the Software is furnished to do so, subject to +// the following conditions: +// +// The above copyright notice and this permission notice shall be +// included in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +package uuid + +import ( + "bytes" + "encoding/hex" + "fmt" +) + +// FromBytes returns UUID converted from raw byte slice input. +// It will return error if the slice isn't 16 bytes long. +func FromBytes(input []byte) (u UUID, err error) { + err = u.UnmarshalBinary(input) + return +} + +// FromBytesOrNil returns UUID converted from raw byte slice input. +// Same behavior as FromBytes, but returns a Nil UUID on error. +func FromBytesOrNil(input []byte) UUID { + uuid, err := FromBytes(input) + if err != nil { + return Nil + } + return uuid +} + +// FromString returns UUID parsed from string input. +// Input is expected in a form accepted by UnmarshalText. +func FromString(input string) (u UUID, err error) { + err = u.UnmarshalText([]byte(input)) + return +} + +// FromStringOrNil returns UUID parsed from string input. +// Same behavior as FromString, but returns a Nil UUID on error. +func FromStringOrNil(input string) UUID { + uuid, err := FromString(input) + if err != nil { + return Nil + } + return uuid +} + +// MarshalText implements the encoding.TextMarshaler interface. +// The encoding is the same as returned by String. +func (u UUID) MarshalText() (text []byte, err error) { + text = []byte(u.String()) + return +} + +// UnmarshalText implements the encoding.TextUnmarshaler interface. 
+// Following formats are supported: +// "6ba7b810-9dad-11d1-80b4-00c04fd430c8", +// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", +// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" +// "6ba7b8109dad11d180b400c04fd430c8" +// ABNF for supported UUID text representation follows: +// uuid := canonical | hashlike | braced | urn +// plain := canonical | hashlike +// canonical := 4hexoct '-' 2hexoct '-' 2hexoct '-' 6hexoct +// hashlike := 12hexoct +// braced := '{' plain '}' +// urn := URN ':' UUID-NID ':' plain +// URN := 'urn' +// UUID-NID := 'uuid' +// 12hexoct := 6hexoct 6hexoct +// 6hexoct := 4hexoct 2hexoct +// 4hexoct := 2hexoct 2hexoct +// 2hexoct := hexoct hexoct +// hexoct := hexdig hexdig +// hexdig := '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | +// 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | +// 'A' | 'B' | 'C' | 'D' | 'E' | 'F' +func (u *UUID) UnmarshalText(text []byte) (err error) { + switch len(text) { + case 32: + return u.decodeHashLike(text) + case 36: + return u.decodeCanonical(text) + case 38: + return u.decodeBraced(text) + case 41: + fallthrough + case 45: + return u.decodeURN(text) + default: + return fmt.Errorf("uuid: incorrect UUID length: %s", text) + } +} + +// decodeCanonical decodes UUID string in format +// "6ba7b810-9dad-11d1-80b4-00c04fd430c8". +func (u *UUID) decodeCanonical(t []byte) (err error) { + if t[8] != '-' || t[13] != '-' || t[18] != '-' || t[23] != '-' { + return fmt.Errorf("uuid: incorrect UUID format %s", t) + } + + src := t[:] + dst := u[:] + + for i, byteGroup := range byteGroups { + if i > 0 { + src = src[1:] // skip dash + } + _, err = hex.Decode(dst[:byteGroup/2], src[:byteGroup]) + if err != nil { + return + } + src = src[byteGroup:] + dst = dst[byteGroup/2:] + } + + return +} + +// decodeHashLike decodes UUID string in format +// "6ba7b8109dad11d180b400c04fd430c8". +func (u *UUID) decodeHashLike(t []byte) (err error) { + src := t[:] + dst := u[:] + + if _, err = hex.Decode(dst, src); err != nil { + return err + } + return +} + +// decodeBraced decodes UUID string in format +// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}" or in format +// "{6ba7b8109dad11d180b400c04fd430c8}". +func (u *UUID) decodeBraced(t []byte) (err error) { + l := len(t) + + if t[0] != '{' || t[l-1] != '}' { + return fmt.Errorf("uuid: incorrect UUID format %s", t) + } + + return u.decodePlain(t[1 : l-1]) +} + +// decodeURN decodes UUID string in format +// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" or in format +// "urn:uuid:6ba7b8109dad11d180b400c04fd430c8". +func (u *UUID) decodeURN(t []byte) (err error) { + total := len(t) + + urn_uuid_prefix := t[:9] + + if !bytes.Equal(urn_uuid_prefix, urnPrefix) { + return fmt.Errorf("uuid: incorrect UUID format: %s", t) + } + + return u.decodePlain(t[9:total]) +} + +// decodePlain decodes UUID string in canonical format +// "6ba7b810-9dad-11d1-80b4-00c04fd430c8" or in hash-like format +// "6ba7b8109dad11d180b400c04fd430c8". +func (u *UUID) decodePlain(t []byte) (err error) { + switch len(t) { + case 32: + return u.decodeHashLike(t) + case 36: + return u.decodeCanonical(t) + default: + return fmt.Errorf("uuid: incorrrect UUID length: %s", t) + } +} + +// MarshalBinary implements the encoding.BinaryMarshaler interface. +func (u UUID) MarshalBinary() (data []byte, err error) { + data = u.Bytes() + return +} + +// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. +// It will return error if the slice isn't 16 bytes long. 
+func (u *UUID) UnmarshalBinary(data []byte) (err error) { + if len(data) != Size { + err = fmt.Errorf("uuid: UUID must be exactly 16 bytes long, got %d bytes", len(data)) + return + } + copy(u[:], data) + + return +} diff --git a/vendor/github.com/satori/go.uuid/generator.go b/vendor/github.com/satori/go.uuid/generator.go new file mode 100644 index 00000000..3f2f1da2 --- /dev/null +++ b/vendor/github.com/satori/go.uuid/generator.go @@ -0,0 +1,239 @@ +// Copyright (C) 2013-2018 by Maxim Bublis +// +// Permission is hereby granted, free of charge, to any person obtaining +// a copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to +// permit persons to whom the Software is furnished to do so, subject to +// the following conditions: +// +// The above copyright notice and this permission notice shall be +// included in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +package uuid + +import ( + "crypto/md5" + "crypto/rand" + "crypto/sha1" + "encoding/binary" + "hash" + "net" + "os" + "sync" + "time" +) + +// Difference in 100-nanosecond intervals between +// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970). +const epochStart = 122192928000000000 + +var ( + global = newDefaultGenerator() + + epochFunc = unixTimeFunc + posixUID = uint32(os.Getuid()) + posixGID = uint32(os.Getgid()) +) + +// NewV1 returns UUID based on current timestamp and MAC address. +func NewV1() UUID { + return global.NewV1() +} + +// NewV2 returns DCE Security UUID based on POSIX UID/GID. +func NewV2(domain byte) UUID { + return global.NewV2(domain) +} + +// NewV3 returns UUID based on MD5 hash of namespace UUID and name. +func NewV3(ns UUID, name string) UUID { + return global.NewV3(ns, name) +} + +// NewV4 returns random generated UUID. +func NewV4() UUID { + return global.NewV4() +} + +// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name. +func NewV5(ns UUID, name string) UUID { + return global.NewV5(ns, name) +} + +// Generator provides interface for generating UUIDs. +type Generator interface { + NewV1() UUID + NewV2(domain byte) UUID + NewV3(ns UUID, name string) UUID + NewV4() UUID + NewV5(ns UUID, name string) UUID +} + +// Default generator implementation. +type generator struct { + storageOnce sync.Once + storageMutex sync.Mutex + + lastTime uint64 + clockSequence uint16 + hardwareAddr [6]byte +} + +func newDefaultGenerator() Generator { + return &generator{} +} + +// NewV1 returns UUID based on current timestamp and MAC address. 
+func (g *generator) NewV1() UUID { + u := UUID{} + + timeNow, clockSeq, hardwareAddr := g.getStorage() + + binary.BigEndian.PutUint32(u[0:], uint32(timeNow)) + binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) + binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) + binary.BigEndian.PutUint16(u[8:], clockSeq) + + copy(u[10:], hardwareAddr) + + u.SetVersion(V1) + u.SetVariant(VariantRFC4122) + + return u +} + +// NewV2 returns DCE Security UUID based on POSIX UID/GID. +func (g *generator) NewV2(domain byte) UUID { + u := UUID{} + + timeNow, clockSeq, hardwareAddr := g.getStorage() + + switch domain { + case DomainPerson: + binary.BigEndian.PutUint32(u[0:], posixUID) + case DomainGroup: + binary.BigEndian.PutUint32(u[0:], posixGID) + } + + binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) + binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) + binary.BigEndian.PutUint16(u[8:], clockSeq) + u[9] = domain + + copy(u[10:], hardwareAddr) + + u.SetVersion(V2) + u.SetVariant(VariantRFC4122) + + return u +} + +// NewV3 returns UUID based on MD5 hash of namespace UUID and name. +func (g *generator) NewV3(ns UUID, name string) UUID { + u := newFromHash(md5.New(), ns, name) + u.SetVersion(V3) + u.SetVariant(VariantRFC4122) + + return u +} + +// NewV4 returns random generated UUID. +func (g *generator) NewV4() UUID { + u := UUID{} + g.safeRandom(u[:]) + u.SetVersion(V4) + u.SetVariant(VariantRFC4122) + + return u +} + +// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name. +func (g *generator) NewV5(ns UUID, name string) UUID { + u := newFromHash(sha1.New(), ns, name) + u.SetVersion(V5) + u.SetVariant(VariantRFC4122) + + return u +} + +func (g *generator) initStorage() { + g.initClockSequence() + g.initHardwareAddr() +} + +func (g *generator) initClockSequence() { + buf := make([]byte, 2) + g.safeRandom(buf) + g.clockSequence = binary.BigEndian.Uint16(buf) +} + +func (g *generator) initHardwareAddr() { + interfaces, err := net.Interfaces() + if err == nil { + for _, iface := range interfaces { + if len(iface.HardwareAddr) >= 6 { + copy(g.hardwareAddr[:], iface.HardwareAddr) + return + } + } + } + + // Initialize hardwareAddr randomly in case + // of real network interfaces absence + g.safeRandom(g.hardwareAddr[:]) + + // Set multicast bit as recommended in RFC 4122 + g.hardwareAddr[0] |= 0x01 +} + +func (g *generator) safeRandom(dest []byte) { + if _, err := rand.Read(dest); err != nil { + panic(err) + } +} + +// Returns UUID v1/v2 storage state. +// Returns epoch timestamp, clock sequence, and hardware address. +func (g *generator) getStorage() (uint64, uint16, []byte) { + g.storageOnce.Do(g.initStorage) + + g.storageMutex.Lock() + defer g.storageMutex.Unlock() + + timeNow := epochFunc() + // Clock changed backwards since last UUID generation. + // Should increase clock sequence. + if timeNow <= g.lastTime { + g.clockSequence++ + } + g.lastTime = timeNow + + return timeNow, g.clockSequence, g.hardwareAddr[:] +} + +// Returns difference in 100-nanosecond intervals between +// UUID epoch (October 15, 1582) and current time. +// This is default epoch calculation function. +func unixTimeFunc() uint64 { + return epochStart + uint64(time.Now().UnixNano()/100) +} + +// Returns UUID based on hashing of namespace UUID and name. 
+func newFromHash(h hash.Hash, ns UUID, name string) UUID { + u := UUID{} + h.Write(ns[:]) + h.Write([]byte(name)) + copy(u[:], h.Sum(nil)) + + return u +} diff --git a/vendor/github.com/satori/go.uuid/sql.go b/vendor/github.com/satori/go.uuid/sql.go new file mode 100644 index 00000000..56759d39 --- /dev/null +++ b/vendor/github.com/satori/go.uuid/sql.go @@ -0,0 +1,78 @@ +// Copyright (C) 2013-2018 by Maxim Bublis +// +// Permission is hereby granted, free of charge, to any person obtaining +// a copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to +// permit persons to whom the Software is furnished to do so, subject to +// the following conditions: +// +// The above copyright notice and this permission notice shall be +// included in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +package uuid + +import ( + "database/sql/driver" + "fmt" +) + +// Value implements the driver.Valuer interface. +func (u UUID) Value() (driver.Value, error) { + return u.String(), nil +} + +// Scan implements the sql.Scanner interface. +// A 16-byte slice is handled by UnmarshalBinary, while +// a longer byte slice or a string is handled by UnmarshalText. +func (u *UUID) Scan(src interface{}) error { + switch src := src.(type) { + case []byte: + if len(src) == Size { + return u.UnmarshalBinary(src) + } + return u.UnmarshalText(src) + + case string: + return u.UnmarshalText([]byte(src)) + } + + return fmt.Errorf("uuid: cannot convert %T to UUID", src) +} + +// NullUUID can be used with the standard sql package to represent a +// UUID value that can be NULL in the database +type NullUUID struct { + UUID UUID + Valid bool +} + +// Value implements the driver.Valuer interface. +func (u NullUUID) Value() (driver.Value, error) { + if !u.Valid { + return nil, nil + } + // Delegate to UUID Value function + return u.UUID.Value() +} + +// Scan implements the sql.Scanner interface. 
+func (u *NullUUID) Scan(src interface{}) error { + if src == nil { + u.UUID, u.Valid = Nil, false + return nil + } + + // Delegate to UUID Scan function + u.Valid = true + return u.UUID.Scan(src) +} diff --git a/vendor/github.com/satori/go.uuid/uuid.go b/vendor/github.com/satori/go.uuid/uuid.go new file mode 100644 index 00000000..a2b8e2ca --- /dev/null +++ b/vendor/github.com/satori/go.uuid/uuid.go @@ -0,0 +1,161 @@ +// Copyright (C) 2013-2018 by Maxim Bublis +// +// Permission is hereby granted, free of charge, to any person obtaining +// a copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to +// permit persons to whom the Software is furnished to do so, subject to +// the following conditions: +// +// The above copyright notice and this permission notice shall be +// included in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +// Package uuid provides implementation of Universally Unique Identifier (UUID). +// Supported versions are 1, 3, 4 and 5 (as specified in RFC 4122) and +// version 2 (as specified in DCE 1.1). +package uuid + +import ( + "bytes" + "encoding/hex" +) + +// Size of a UUID in bytes. +const Size = 16 + +// UUID representation compliant with specification +// described in RFC 4122. +type UUID [Size]byte + +// UUID versions +const ( + _ byte = iota + V1 + V2 + V3 + V4 + V5 +) + +// UUID layout variants. +const ( + VariantNCS byte = iota + VariantRFC4122 + VariantMicrosoft + VariantFuture +) + +// UUID DCE domains. +const ( + DomainPerson = iota + DomainGroup + DomainOrg +) + +// String parse helpers. +var ( + urnPrefix = []byte("urn:uuid:") + byteGroups = []int{8, 4, 4, 4, 12} +) + +// Nil is special form of UUID that is specified to have all +// 128 bits set to zero. +var Nil = UUID{} + +// Predefined namespace UUIDs. +var ( + NamespaceDNS = Must(FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8")) + NamespaceURL = Must(FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8")) + NamespaceOID = Must(FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8")) + NamespaceX500 = Must(FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8")) +) + +// Equal returns true if u1 and u2 equals, otherwise returns false. +func Equal(u1 UUID, u2 UUID) bool { + return bytes.Equal(u1[:], u2[:]) +} + +// Version returns algorithm version used to generate UUID. +func (u UUID) Version() byte { + return u[6] >> 4 +} + +// Variant returns UUID layout variant. +func (u UUID) Variant() byte { + switch { + case (u[8] >> 7) == 0x00: + return VariantNCS + case (u[8] >> 6) == 0x02: + return VariantRFC4122 + case (u[8] >> 5) == 0x06: + return VariantMicrosoft + case (u[8] >> 5) == 0x07: + fallthrough + default: + return VariantFuture + } +} + +// Bytes returns bytes slice representation of UUID. 
+func (u UUID) Bytes() []byte { + return u[:] +} + +// Returns canonical string representation of UUID: +// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. +func (u UUID) String() string { + buf := make([]byte, 36) + + hex.Encode(buf[0:8], u[0:4]) + buf[8] = '-' + hex.Encode(buf[9:13], u[4:6]) + buf[13] = '-' + hex.Encode(buf[14:18], u[6:8]) + buf[18] = '-' + hex.Encode(buf[19:23], u[8:10]) + buf[23] = '-' + hex.Encode(buf[24:], u[10:]) + + return string(buf) +} + +// SetVersion sets version bits. +func (u *UUID) SetVersion(v byte) { + u[6] = (u[6] & 0x0f) | (v << 4) +} + +// SetVariant sets variant bits. +func (u *UUID) SetVariant(v byte) { + switch v { + case VariantNCS: + u[8] = (u[8]&(0xff>>1) | (0x00 << 7)) + case VariantRFC4122: + u[8] = (u[8]&(0xff>>2) | (0x02 << 6)) + case VariantMicrosoft: + u[8] = (u[8]&(0xff>>3) | (0x06 << 5)) + case VariantFuture: + fallthrough + default: + u[8] = (u[8]&(0xff>>3) | (0x07 << 5)) + } +} + +// Must is a helper that wraps a call to a function returning (UUID, error) +// and panics if the error is non-nil. It is intended for use in variable +// initializations such as +// var packageUUID = uuid.Must(uuid.FromString("123e4567-e89b-12d3-a456-426655440000")); +func Must(u UUID, err error) UUID { + if err != nil { + panic(err) + } + return u +} diff --git a/vendor/github.com/stretchr/testify/LICENSE b/vendor/github.com/stretchr/testify/LICENSE new file mode 100644 index 00000000..473b670a --- /dev/null +++ b/vendor/github.com/stretchr/testify/LICENSE @@ -0,0 +1,22 @@ +Copyright (c) 2012 - 2013 Mat Ryer and Tyler Bunnell + +Please consider promoting this project if you find it useful. + +Permission is hereby granted, free of charge, to any person +obtaining a copy of this software and associated documentation +files (the "Software"), to deal in the Software without restriction, +including without limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of the Software, +and to permit persons to whom the Software is furnished to do so, +subject to the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES +OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT +OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE +OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/stretchr/testify/assert/assertion_format.go b/vendor/github.com/stretchr/testify/assert/assertion_format.go new file mode 100644 index 00000000..aa1c2b95 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/assertion_format.go @@ -0,0 +1,484 @@ +/* +* CODE GENERATED AUTOMATICALLY WITH github.com/stretchr/testify/_codegen +* THIS FILE MUST NOT BE EDITED BY HAND + */ + +package assert + +import ( + http "net/http" + url "net/url" + time "time" +) + +// Conditionf uses a Comparison to assert a complex condition. +func Conditionf(t TestingT, comp Comparison, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Condition(t, comp, append([]interface{}{msg}, args...)...) +} + +// Containsf asserts that the specified string, list(array, slice...) 
or map contains the +// specified substring or element. +// +// assert.Containsf(t, "Hello World", "World", "error message %s", "formatted") +// assert.Containsf(t, ["Hello", "World"], "World", "error message %s", "formatted") +// assert.Containsf(t, {"Hello": "World"}, "Hello", "error message %s", "formatted") +func Containsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Contains(t, s, contains, append([]interface{}{msg}, args...)...) +} + +// DirExistsf checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func DirExistsf(t TestingT, path string, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return DirExists(t, path, append([]interface{}{msg}, args...)...) +} + +// ElementsMatchf asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// assert.ElementsMatchf(t, [1, 3, 2, 3], [1, 3, 3, 2], "error message %s", "formatted") +func ElementsMatchf(t TestingT, listA interface{}, listB interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return ElementsMatch(t, listA, listB, append([]interface{}{msg}, args...)...) +} + +// Emptyf asserts that the specified object is empty. I.e. nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// assert.Emptyf(t, obj, "error message %s", "formatted") +func Emptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Empty(t, object, append([]interface{}{msg}, args...)...) +} + +// Equalf asserts that two objects are equal. +// +// assert.Equalf(t, 123, 123, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. +func Equalf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Equal(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// EqualErrorf asserts that a function returned an error (i.e. not `nil`) +// and that it is equal to the provided error. +// +// actualObj, err := SomeFunction() +// assert.EqualErrorf(t, err, expectedErrorString, "error message %s", "formatted") +func EqualErrorf(t TestingT, theError error, errString string, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return EqualError(t, theError, errString, append([]interface{}{msg}, args...)...) +} + +// EqualValuesf asserts that two objects are equal or convertable to the same types +// and equal. +// +// assert.EqualValuesf(t, uint32(123, "error message %s", "formatted"), int32(123)) +func EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return EqualValues(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// Errorf asserts that a function returned an error (i.e. not `nil`). 
+// +// actualObj, err := SomeFunction() +// if assert.Errorf(t, err, "error message %s", "formatted") { +// assert.Equal(t, expectedErrorf, err) +// } +func Errorf(t TestingT, err error, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Error(t, err, append([]interface{}{msg}, args...)...) +} + +// Exactlyf asserts that two objects are equal in value and type. +// +// assert.Exactlyf(t, int32(123, "error message %s", "formatted"), int64(123)) +func Exactlyf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Exactly(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// Failf reports a failure through +func Failf(t TestingT, failureMessage string, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Fail(t, failureMessage, append([]interface{}{msg}, args...)...) +} + +// FailNowf fails test +func FailNowf(t TestingT, failureMessage string, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return FailNow(t, failureMessage, append([]interface{}{msg}, args...)...) +} + +// Falsef asserts that the specified value is false. +// +// assert.Falsef(t, myBool, "error message %s", "formatted") +func Falsef(t TestingT, value bool, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return False(t, value, append([]interface{}{msg}, args...)...) +} + +// FileExistsf checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func FileExistsf(t TestingT, path string, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return FileExists(t, path, append([]interface{}{msg}, args...)...) +} + +// HTTPBodyContainsf asserts that a specified handler returns a +// body that contains a string. +// +// assert.HTTPBodyContainsf(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPBodyContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return HTTPBodyContains(t, handler, method, url, values, str, append([]interface{}{msg}, args...)...) +} + +// HTTPBodyNotContainsf asserts that a specified handler returns a +// body that does not contain a string. +// +// assert.HTTPBodyNotContainsf(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPBodyNotContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return HTTPBodyNotContains(t, handler, method, url, values, str, append([]interface{}{msg}, args...)...) +} + +// HTTPErrorf asserts that a specified handler returns an error status code. +// +// assert.HTTPErrorf(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). 
+func HTTPErrorf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return HTTPError(t, handler, method, url, values, append([]interface{}{msg}, args...)...) +} + +// HTTPRedirectf asserts that a specified handler returns a redirect status code. +// +// assert.HTTPRedirectf(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func HTTPRedirectf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return HTTPRedirect(t, handler, method, url, values, append([]interface{}{msg}, args...)...) +} + +// HTTPSuccessf asserts that a specified handler returns a success status code. +// +// assert.HTTPSuccessf(t, myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPSuccessf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return HTTPSuccess(t, handler, method, url, values, append([]interface{}{msg}, args...)...) +} + +// Implementsf asserts that an object is implemented by the specified interface. +// +// assert.Implementsf(t, (*MyInterface, "error message %s", "formatted")(nil), new(MyObject)) +func Implementsf(t TestingT, interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Implements(t, interfaceObject, object, append([]interface{}{msg}, args...)...) +} + +// InDeltaf asserts that the two numerals are within delta of each other. +// +// assert.InDeltaf(t, math.Pi, (22 / 7.0, "error message %s", "formatted"), 0.01) +func InDeltaf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return InDelta(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// InDeltaMapValuesf is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func InDeltaMapValuesf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return InDeltaMapValues(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// InDeltaSlicef is the same as InDelta, except it compares two slices. +func InDeltaSlicef(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return InDeltaSlice(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// InEpsilonf asserts that expected and actual have a relative error less than epsilon +func InEpsilonf(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return InEpsilon(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...) +} + +// InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices. 
+func InEpsilonSlicef(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return InEpsilonSlice(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...) +} + +// IsTypef asserts that the specified objects are of the same type. +func IsTypef(t TestingT, expectedType interface{}, object interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return IsType(t, expectedType, object, append([]interface{}{msg}, args...)...) +} + +// JSONEqf asserts that two JSON strings are equivalent. +// +// assert.JSONEqf(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted") +func JSONEqf(t TestingT, expected string, actual string, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return JSONEq(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// Lenf asserts that the specified object has specific length. +// Lenf also fails if the object has a type that len() not accept. +// +// assert.Lenf(t, mySlice, 3, "error message %s", "formatted") +func Lenf(t TestingT, object interface{}, length int, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Len(t, object, length, append([]interface{}{msg}, args...)...) +} + +// Nilf asserts that the specified object is nil. +// +// assert.Nilf(t, err, "error message %s", "formatted") +func Nilf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Nil(t, object, append([]interface{}{msg}, args...)...) +} + +// NoErrorf asserts that a function returned no error (i.e. `nil`). +// +// actualObj, err := SomeFunction() +// if assert.NoErrorf(t, err, "error message %s", "formatted") { +// assert.Equal(t, expectedObj, actualObj) +// } +func NoErrorf(t TestingT, err error, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NoError(t, err, append([]interface{}{msg}, args...)...) +} + +// NotContainsf asserts that the specified string, list(array, slice...) or map does NOT contain the +// specified substring or element. +// +// assert.NotContainsf(t, "Hello World", "Earth", "error message %s", "formatted") +// assert.NotContainsf(t, ["Hello", "World"], "Earth", "error message %s", "formatted") +// assert.NotContainsf(t, {"Hello": "World"}, "Earth", "error message %s", "formatted") +func NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotContains(t, s, contains, append([]interface{}{msg}, args...)...) +} + +// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// if assert.NotEmptyf(t, obj, "error message %s", "formatted") { +// assert.Equal(t, "two", obj[1]) +// } +func NotEmptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotEmpty(t, object, append([]interface{}{msg}, args...)...) +} + +// NotEqualf asserts that the specified values are NOT equal. +// +// assert.NotEqualf(t, obj1, obj2, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). 
+func NotEqualf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotEqual(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// NotNilf asserts that the specified object is not nil. +// +// assert.NotNilf(t, err, "error message %s", "formatted") +func NotNilf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotNil(t, object, append([]interface{}{msg}, args...)...) +} + +// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic. +// +// assert.NotPanicsf(t, func(){ RemainCalm() }, "error message %s", "formatted") +func NotPanicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotPanics(t, f, append([]interface{}{msg}, args...)...) +} + +// NotRegexpf asserts that a specified regexp does not match a string. +// +// assert.NotRegexpf(t, regexp.MustCompile("starts", "error message %s", "formatted"), "it's starting") +// assert.NotRegexpf(t, "^start", "it's not starting", "error message %s", "formatted") +func NotRegexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotRegexp(t, rx, str, append([]interface{}{msg}, args...)...) +} + +// NotSubsetf asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// assert.NotSubsetf(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]", "error message %s", "formatted") +func NotSubsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotSubset(t, list, subset, append([]interface{}{msg}, args...)...) +} + +// NotZerof asserts that i is not the zero value for its type. +func NotZerof(t TestingT, i interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return NotZero(t, i, append([]interface{}{msg}, args...)...) +} + +// Panicsf asserts that the code inside the specified PanicTestFunc panics. +// +// assert.Panicsf(t, func(){ GoCrazy() }, "error message %s", "formatted") +func Panicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Panics(t, f, append([]interface{}{msg}, args...)...) +} + +// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// assert.PanicsWithValuef(t, "crazy error", func(){ GoCrazy() }, "error message %s", "formatted") +func PanicsWithValuef(t TestingT, expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return PanicsWithValue(t, expected, f, append([]interface{}{msg}, args...)...) +} + +// Regexpf asserts that a specified regexp matches a string. +// +// assert.Regexpf(t, regexp.MustCompile("start", "error message %s", "formatted"), "it's starting") +// assert.Regexpf(t, "start...$", "it's not starting", "error message %s", "formatted") +func Regexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Regexp(t, rx, str, append([]interface{}{msg}, args...)...) 
+} + +// Subsetf asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// assert.Subsetf(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]", "error message %s", "formatted") +func Subsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Subset(t, list, subset, append([]interface{}{msg}, args...)...) +} + +// Truef asserts that the specified value is true. +// +// assert.Truef(t, myBool, "error message %s", "formatted") +func Truef(t TestingT, value bool, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return True(t, value, append([]interface{}{msg}, args...)...) +} + +// WithinDurationf asserts that the two times are within duration delta of each other. +// +// assert.WithinDurationf(t, time.Now(), time.Now(), 10*time.Second, "error message %s", "formatted") +func WithinDurationf(t TestingT, expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return WithinDuration(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// Zerof asserts that i is the zero value for its type. +func Zerof(t TestingT, i interface{}, msg string, args ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + return Zero(t, i, append([]interface{}{msg}, args...)...) +} diff --git a/vendor/github.com/stretchr/testify/assert/assertion_forward.go b/vendor/github.com/stretchr/testify/assert/assertion_forward.go new file mode 100644 index 00000000..de39f794 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/assertion_forward.go @@ -0,0 +1,956 @@ +/* +* CODE GENERATED AUTOMATICALLY WITH github.com/stretchr/testify/_codegen +* THIS FILE MUST NOT BE EDITED BY HAND + */ + +package assert + +import ( + http "net/http" + url "net/url" + time "time" +) + +// Condition uses a Comparison to assert a complex condition. +func (a *Assertions) Condition(comp Comparison, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Condition(a.t, comp, msgAndArgs...) +} + +// Conditionf uses a Comparison to assert a complex condition. +func (a *Assertions) Conditionf(comp Comparison, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Conditionf(a.t, comp, msg, args...) +} + +// Contains asserts that the specified string, list(array, slice...) or map contains the +// specified substring or element. +// +// a.Contains("Hello World", "World") +// a.Contains(["Hello", "World"], "World") +// a.Contains({"Hello": "World"}, "Hello") +func (a *Assertions) Contains(s interface{}, contains interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Contains(a.t, s, contains, msgAndArgs...) +} + +// Containsf asserts that the specified string, list(array, slice...) or map contains the +// specified substring or element. +// +// a.Containsf("Hello World", "World", "error message %s", "formatted") +// a.Containsf(["Hello", "World"], "World", "error message %s", "formatted") +// a.Containsf({"Hello": "World"}, "Hello", "error message %s", "formatted") +func (a *Assertions) Containsf(s interface{}, contains interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Containsf(a.t, s, contains, msg, args...) 
+} + +// DirExists checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func (a *Assertions) DirExists(path string, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return DirExists(a.t, path, msgAndArgs...) +} + +// DirExistsf checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func (a *Assertions) DirExistsf(path string, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return DirExistsf(a.t, path, msg, args...) +} + +// ElementsMatch asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// a.ElementsMatch([1, 3, 2, 3], [1, 3, 3, 2]) +func (a *Assertions) ElementsMatch(listA interface{}, listB interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return ElementsMatch(a.t, listA, listB, msgAndArgs...) +} + +// ElementsMatchf asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// a.ElementsMatchf([1, 3, 2, 3], [1, 3, 3, 2], "error message %s", "formatted") +func (a *Assertions) ElementsMatchf(listA interface{}, listB interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return ElementsMatchf(a.t, listA, listB, msg, args...) +} + +// Empty asserts that the specified object is empty. I.e. nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// a.Empty(obj) +func (a *Assertions) Empty(object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Empty(a.t, object, msgAndArgs...) +} + +// Emptyf asserts that the specified object is empty. I.e. nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// a.Emptyf(obj, "error message %s", "formatted") +func (a *Assertions) Emptyf(object interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Emptyf(a.t, object, msg, args...) +} + +// Equal asserts that two objects are equal. +// +// a.Equal(123, 123) +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. +func (a *Assertions) Equal(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Equal(a.t, expected, actual, msgAndArgs...) +} + +// EqualError asserts that a function returned an error (i.e. not `nil`) +// and that it is equal to the provided error. +// +// actualObj, err := SomeFunction() +// a.EqualError(err, expectedErrorString) +func (a *Assertions) EqualError(theError error, errString string, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return EqualError(a.t, theError, errString, msgAndArgs...) +} + +// EqualErrorf asserts that a function returned an error (i.e. 
not `nil`) +// and that it is equal to the provided error. +// +// actualObj, err := SomeFunction() +// a.EqualErrorf(err, expectedErrorString, "error message %s", "formatted") +func (a *Assertions) EqualErrorf(theError error, errString string, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return EqualErrorf(a.t, theError, errString, msg, args...) +} + +// EqualValues asserts that two objects are equal or convertable to the same types +// and equal. +// +// a.EqualValues(uint32(123), int32(123)) +func (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return EqualValues(a.t, expected, actual, msgAndArgs...) +} + +// EqualValuesf asserts that two objects are equal or convertable to the same types +// and equal. +// +// a.EqualValuesf(uint32(123, "error message %s", "formatted"), int32(123)) +func (a *Assertions) EqualValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return EqualValuesf(a.t, expected, actual, msg, args...) +} + +// Equalf asserts that two objects are equal. +// +// a.Equalf(123, 123, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. +func (a *Assertions) Equalf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Equalf(a.t, expected, actual, msg, args...) +} + +// Error asserts that a function returned an error (i.e. not `nil`). +// +// actualObj, err := SomeFunction() +// if a.Error(err) { +// assert.Equal(t, expectedError, err) +// } +func (a *Assertions) Error(err error, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Error(a.t, err, msgAndArgs...) +} + +// Errorf asserts that a function returned an error (i.e. not `nil`). +// +// actualObj, err := SomeFunction() +// if a.Errorf(err, "error message %s", "formatted") { +// assert.Equal(t, expectedErrorf, err) +// } +func (a *Assertions) Errorf(err error, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Errorf(a.t, err, msg, args...) +} + +// Exactly asserts that two objects are equal in value and type. +// +// a.Exactly(int32(123), int64(123)) +func (a *Assertions) Exactly(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Exactly(a.t, expected, actual, msgAndArgs...) +} + +// Exactlyf asserts that two objects are equal in value and type. +// +// a.Exactlyf(int32(123, "error message %s", "formatted"), int64(123)) +func (a *Assertions) Exactlyf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Exactlyf(a.t, expected, actual, msg, args...) +} + +// Fail reports a failure through +func (a *Assertions) Fail(failureMessage string, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Fail(a.t, failureMessage, msgAndArgs...) 
+} + +// FailNow fails test +func (a *Assertions) FailNow(failureMessage string, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return FailNow(a.t, failureMessage, msgAndArgs...) +} + +// FailNowf fails test +func (a *Assertions) FailNowf(failureMessage string, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return FailNowf(a.t, failureMessage, msg, args...) +} + +// Failf reports a failure through +func (a *Assertions) Failf(failureMessage string, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Failf(a.t, failureMessage, msg, args...) +} + +// False asserts that the specified value is false. +// +// a.False(myBool) +func (a *Assertions) False(value bool, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return False(a.t, value, msgAndArgs...) +} + +// Falsef asserts that the specified value is false. +// +// a.Falsef(myBool, "error message %s", "formatted") +func (a *Assertions) Falsef(value bool, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Falsef(a.t, value, msg, args...) +} + +// FileExists checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func (a *Assertions) FileExists(path string, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return FileExists(a.t, path, msgAndArgs...) +} + +// FileExistsf checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func (a *Assertions) FileExistsf(path string, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return FileExistsf(a.t, path, msg, args...) +} + +// HTTPBodyContains asserts that a specified handler returns a +// body that contains a string. +// +// a.HTTPBodyContains(myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPBodyContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPBodyContains(a.t, handler, method, url, values, str, msgAndArgs...) +} + +// HTTPBodyContainsf asserts that a specified handler returns a +// body that contains a string. +// +// a.HTTPBodyContainsf(myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPBodyContainsf(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPBodyContainsf(a.t, handler, method, url, values, str, msg, args...) +} + +// HTTPBodyNotContains asserts that a specified handler returns a +// body that does not contain a string. +// +// a.HTTPBodyNotContains(myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky") +// +// Returns whether the assertion was successful (true) or not (false). 
+func (a *Assertions) HTTPBodyNotContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPBodyNotContains(a.t, handler, method, url, values, str, msgAndArgs...) +} + +// HTTPBodyNotContainsf asserts that a specified handler returns a +// body that does not contain a string. +// +// a.HTTPBodyNotContainsf(myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPBodyNotContainsf(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPBodyNotContainsf(a.t, handler, method, url, values, str, msg, args...) +} + +// HTTPError asserts that a specified handler returns an error status code. +// +// a.HTTPError(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPError(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPError(a.t, handler, method, url, values, msgAndArgs...) +} + +// HTTPErrorf asserts that a specified handler returns an error status code. +// +// a.HTTPErrorf(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func (a *Assertions) HTTPErrorf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPErrorf(a.t, handler, method, url, values, msg, args...) +} + +// HTTPRedirect asserts that a specified handler returns a redirect status code. +// +// a.HTTPRedirect(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPRedirect(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPRedirect(a.t, handler, method, url, values, msgAndArgs...) +} + +// HTTPRedirectf asserts that a specified handler returns a redirect status code. +// +// a.HTTPRedirectf(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func (a *Assertions) HTTPRedirectf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPRedirectf(a.t, handler, method, url, values, msg, args...) +} + +// HTTPSuccess asserts that a specified handler returns a success status code. +// +// a.HTTPSuccess(myHandler, "POST", "http://www.google.com", nil) +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPSuccess(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPSuccess(a.t, handler, method, url, values, msgAndArgs...) 
+} + +// HTTPSuccessf asserts that a specified handler returns a success status code. +// +// a.HTTPSuccessf(myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPSuccessf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return HTTPSuccessf(a.t, handler, method, url, values, msg, args...) +} + +// Implements asserts that an object is implemented by the specified interface. +// +// a.Implements((*MyInterface)(nil), new(MyObject)) +func (a *Assertions) Implements(interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Implements(a.t, interfaceObject, object, msgAndArgs...) +} + +// Implementsf asserts that an object is implemented by the specified interface. +// +// a.Implementsf((*MyInterface, "error message %s", "formatted")(nil), new(MyObject)) +func (a *Assertions) Implementsf(interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Implementsf(a.t, interfaceObject, object, msg, args...) +} + +// InDelta asserts that the two numerals are within delta of each other. +// +// a.InDelta(math.Pi, (22 / 7.0), 0.01) +func (a *Assertions) InDelta(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InDelta(a.t, expected, actual, delta, msgAndArgs...) +} + +// InDeltaMapValues is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func (a *Assertions) InDeltaMapValues(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InDeltaMapValues(a.t, expected, actual, delta, msgAndArgs...) +} + +// InDeltaMapValuesf is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func (a *Assertions) InDeltaMapValuesf(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InDeltaMapValuesf(a.t, expected, actual, delta, msg, args...) +} + +// InDeltaSlice is the same as InDelta, except it compares two slices. +func (a *Assertions) InDeltaSlice(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InDeltaSlice(a.t, expected, actual, delta, msgAndArgs...) +} + +// InDeltaSlicef is the same as InDelta, except it compares two slices. +func (a *Assertions) InDeltaSlicef(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InDeltaSlicef(a.t, expected, actual, delta, msg, args...) +} + +// InDeltaf asserts that the two numerals are within delta of each other. +// +// a.InDeltaf(math.Pi, (22 / 7.0, "error message %s", "formatted"), 0.01) +func (a *Assertions) InDeltaf(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InDeltaf(a.t, expected, actual, delta, msg, args...) 
+} + +// InEpsilon asserts that expected and actual have a relative error less than epsilon +func (a *Assertions) InEpsilon(expected interface{}, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InEpsilon(a.t, expected, actual, epsilon, msgAndArgs...) +} + +// InEpsilonSlice is the same as InEpsilon, except it compares each value from two slices. +func (a *Assertions) InEpsilonSlice(expected interface{}, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InEpsilonSlice(a.t, expected, actual, epsilon, msgAndArgs...) +} + +// InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices. +func (a *Assertions) InEpsilonSlicef(expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InEpsilonSlicef(a.t, expected, actual, epsilon, msg, args...) +} + +// InEpsilonf asserts that expected and actual have a relative error less than epsilon +func (a *Assertions) InEpsilonf(expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return InEpsilonf(a.t, expected, actual, epsilon, msg, args...) +} + +// IsType asserts that the specified objects are of the same type. +func (a *Assertions) IsType(expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return IsType(a.t, expectedType, object, msgAndArgs...) +} + +// IsTypef asserts that the specified objects are of the same type. +func (a *Assertions) IsTypef(expectedType interface{}, object interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return IsTypef(a.t, expectedType, object, msg, args...) +} + +// JSONEq asserts that two JSON strings are equivalent. +// +// a.JSONEq(`{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`) +func (a *Assertions) JSONEq(expected string, actual string, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return JSONEq(a.t, expected, actual, msgAndArgs...) +} + +// JSONEqf asserts that two JSON strings are equivalent. +// +// a.JSONEqf(`{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted") +func (a *Assertions) JSONEqf(expected string, actual string, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return JSONEqf(a.t, expected, actual, msg, args...) +} + +// Len asserts that the specified object has specific length. +// Len also fails if the object has a type that len() not accept. +// +// a.Len(mySlice, 3) +func (a *Assertions) Len(object interface{}, length int, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Len(a.t, object, length, msgAndArgs...) +} + +// Lenf asserts that the specified object has specific length. +// Lenf also fails if the object has a type that len() not accept. +// +// a.Lenf(mySlice, 3, "error message %s", "formatted") +func (a *Assertions) Lenf(object interface{}, length int, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Lenf(a.t, object, length, msg, args...) +} + +// Nil asserts that the specified object is nil. 
+// +// a.Nil(err) +func (a *Assertions) Nil(object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Nil(a.t, object, msgAndArgs...) +} + +// Nilf asserts that the specified object is nil. +// +// a.Nilf(err, "error message %s", "formatted") +func (a *Assertions) Nilf(object interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Nilf(a.t, object, msg, args...) +} + +// NoError asserts that a function returned no error (i.e. `nil`). +// +// actualObj, err := SomeFunction() +// if a.NoError(err) { +// assert.Equal(t, expectedObj, actualObj) +// } +func (a *Assertions) NoError(err error, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NoError(a.t, err, msgAndArgs...) +} + +// NoErrorf asserts that a function returned no error (i.e. `nil`). +// +// actualObj, err := SomeFunction() +// if a.NoErrorf(err, "error message %s", "formatted") { +// assert.Equal(t, expectedObj, actualObj) +// } +func (a *Assertions) NoErrorf(err error, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NoErrorf(a.t, err, msg, args...) +} + +// NotContains asserts that the specified string, list(array, slice...) or map does NOT contain the +// specified substring or element. +// +// a.NotContains("Hello World", "Earth") +// a.NotContains(["Hello", "World"], "Earth") +// a.NotContains({"Hello": "World"}, "Earth") +func (a *Assertions) NotContains(s interface{}, contains interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotContains(a.t, s, contains, msgAndArgs...) +} + +// NotContainsf asserts that the specified string, list(array, slice...) or map does NOT contain the +// specified substring or element. +// +// a.NotContainsf("Hello World", "Earth", "error message %s", "formatted") +// a.NotContainsf(["Hello", "World"], "Earth", "error message %s", "formatted") +// a.NotContainsf({"Hello": "World"}, "Earth", "error message %s", "formatted") +func (a *Assertions) NotContainsf(s interface{}, contains interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotContainsf(a.t, s, contains, msg, args...) +} + +// NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// if a.NotEmpty(obj) { +// assert.Equal(t, "two", obj[1]) +// } +func (a *Assertions) NotEmpty(object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotEmpty(a.t, object, msgAndArgs...) +} + +// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// if a.NotEmptyf(obj, "error message %s", "formatted") { +// assert.Equal(t, "two", obj[1]) +// } +func (a *Assertions) NotEmptyf(object interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotEmptyf(a.t, object, msg, args...) +} + +// NotEqual asserts that the specified values are NOT equal. +// +// a.NotEqual(obj1, obj2) +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). 
+func (a *Assertions) NotEqual(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotEqual(a.t, expected, actual, msgAndArgs...) +} + +// NotEqualf asserts that the specified values are NOT equal. +// +// a.NotEqualf(obj1, obj2, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). +func (a *Assertions) NotEqualf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotEqualf(a.t, expected, actual, msg, args...) +} + +// NotNil asserts that the specified object is not nil. +// +// a.NotNil(err) +func (a *Assertions) NotNil(object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotNil(a.t, object, msgAndArgs...) +} + +// NotNilf asserts that the specified object is not nil. +// +// a.NotNilf(err, "error message %s", "formatted") +func (a *Assertions) NotNilf(object interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotNilf(a.t, object, msg, args...) +} + +// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic. +// +// a.NotPanics(func(){ RemainCalm() }) +func (a *Assertions) NotPanics(f PanicTestFunc, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotPanics(a.t, f, msgAndArgs...) +} + +// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic. +// +// a.NotPanicsf(func(){ RemainCalm() }, "error message %s", "formatted") +func (a *Assertions) NotPanicsf(f PanicTestFunc, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotPanicsf(a.t, f, msg, args...) +} + +// NotRegexp asserts that a specified regexp does not match a string. +// +// a.NotRegexp(regexp.MustCompile("starts"), "it's starting") +// a.NotRegexp("^start", "it's not starting") +func (a *Assertions) NotRegexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotRegexp(a.t, rx, str, msgAndArgs...) +} + +// NotRegexpf asserts that a specified regexp does not match a string. +// +// a.NotRegexpf(regexp.MustCompile("starts", "error message %s", "formatted"), "it's starting") +// a.NotRegexpf("^start", "it's not starting", "error message %s", "formatted") +func (a *Assertions) NotRegexpf(rx interface{}, str interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotRegexpf(a.t, rx, str, msg, args...) +} + +// NotSubset asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// a.NotSubset([1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]") +func (a *Assertions) NotSubset(list interface{}, subset interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotSubset(a.t, list, subset, msgAndArgs...) +} + +// NotSubsetf asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). 
+// +// a.NotSubsetf([1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]", "error message %s", "formatted") +func (a *Assertions) NotSubsetf(list interface{}, subset interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotSubsetf(a.t, list, subset, msg, args...) +} + +// NotZero asserts that i is not the zero value for its type. +func (a *Assertions) NotZero(i interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotZero(a.t, i, msgAndArgs...) +} + +// NotZerof asserts that i is not the zero value for its type. +func (a *Assertions) NotZerof(i interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return NotZerof(a.t, i, msg, args...) +} + +// Panics asserts that the code inside the specified PanicTestFunc panics. +// +// a.Panics(func(){ GoCrazy() }) +func (a *Assertions) Panics(f PanicTestFunc, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Panics(a.t, f, msgAndArgs...) +} + +// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// a.PanicsWithValue("crazy error", func(){ GoCrazy() }) +func (a *Assertions) PanicsWithValue(expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return PanicsWithValue(a.t, expected, f, msgAndArgs...) +} + +// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// a.PanicsWithValuef("crazy error", func(){ GoCrazy() }, "error message %s", "formatted") +func (a *Assertions) PanicsWithValuef(expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return PanicsWithValuef(a.t, expected, f, msg, args...) +} + +// Panicsf asserts that the code inside the specified PanicTestFunc panics. +// +// a.Panicsf(func(){ GoCrazy() }, "error message %s", "formatted") +func (a *Assertions) Panicsf(f PanicTestFunc, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Panicsf(a.t, f, msg, args...) +} + +// Regexp asserts that a specified regexp matches a string. +// +// a.Regexp(regexp.MustCompile("start"), "it's starting") +// a.Regexp("start...$", "it's not starting") +func (a *Assertions) Regexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Regexp(a.t, rx, str, msgAndArgs...) +} + +// Regexpf asserts that a specified regexp matches a string. +// +// a.Regexpf(regexp.MustCompile("start", "error message %s", "formatted"), "it's starting") +// a.Regexpf("start...$", "it's not starting", "error message %s", "formatted") +func (a *Assertions) Regexpf(rx interface{}, str interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Regexpf(a.t, rx, str, msg, args...) +} + +// Subset asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). 
+// +// a.Subset([1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]") +func (a *Assertions) Subset(list interface{}, subset interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Subset(a.t, list, subset, msgAndArgs...) +} + +// Subsetf asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// a.Subsetf([1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]", "error message %s", "formatted") +func (a *Assertions) Subsetf(list interface{}, subset interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Subsetf(a.t, list, subset, msg, args...) +} + +// True asserts that the specified value is true. +// +// a.True(myBool) +func (a *Assertions) True(value bool, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return True(a.t, value, msgAndArgs...) +} + +// Truef asserts that the specified value is true. +// +// a.Truef(myBool, "error message %s", "formatted") +func (a *Assertions) Truef(value bool, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Truef(a.t, value, msg, args...) +} + +// WithinDuration asserts that the two times are within duration delta of each other. +// +// a.WithinDuration(time.Now(), time.Now(), 10*time.Second) +func (a *Assertions) WithinDuration(expected time.Time, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return WithinDuration(a.t, expected, actual, delta, msgAndArgs...) +} + +// WithinDurationf asserts that the two times are within duration delta of each other. +// +// a.WithinDurationf(time.Now(), time.Now(), 10*time.Second, "error message %s", "formatted") +func (a *Assertions) WithinDurationf(expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return WithinDurationf(a.t, expected, actual, delta, msg, args...) +} + +// Zero asserts that i is the zero value for its type. +func (a *Assertions) Zero(i interface{}, msgAndArgs ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Zero(a.t, i, msgAndArgs...) +} + +// Zerof asserts that i is the zero value for its type. +func (a *Assertions) Zerof(i interface{}, msg string, args ...interface{}) bool { + if h, ok := a.t.(tHelper); ok { + h.Helper() + } + return Zerof(a.t, i, msg, args...) +} diff --git a/vendor/github.com/stretchr/testify/assert/assertions.go b/vendor/github.com/stretchr/testify/assert/assertions.go new file mode 100644 index 00000000..5bdec56c --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/assertions.go @@ -0,0 +1,1394 @@ +package assert + +import ( + "bufio" + "bytes" + "encoding/json" + "errors" + "fmt" + "math" + "os" + "reflect" + "regexp" + "runtime" + "strings" + "time" + "unicode" + "unicode/utf8" + + "github.com/davecgh/go-spew/spew" + "github.com/pmezard/go-difflib/difflib" +) + +//go:generate go run ../_codegen/main.go -output-package=assert -template=assertion_format.go.tmpl + +// TestingT is an interface wrapper around *testing.T +type TestingT interface { + Errorf(format string, args ...interface{}) +} + +// ComparisonAssertionFunc is a common function prototype when comparing two values. Can be useful +// for table driven tests. 
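The assertion-function types introduced here (`ComparisonAssertionFunc` and the related `ValueAssertionFunc`, `BoolAssertionFunc`, and `ErrorAssertionFunc` that follow) exist so that the assertion itself can be selected per test case, as the comments note. A sketch of that table-driven pattern, assuming a hypothetical `greet` function under test:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// greet is a hypothetical function under test.
func greet(name string) string { return "hello " + name }

func TestGreet(t *testing.T) {
	tests := []struct {
		name      string
		input     string
		expected  string
		assertion assert.ComparisonAssertionFunc // Equal, NotEqual, etc. share this signature
	}{
		{"match", "world", "hello world", assert.Equal},
		{"mismatch", "there", "hello world", assert.NotEqual},
	}
	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			// The chosen assertion runs with the case's expected/actual pair.
			tt.assertion(t, tt.expected, greet(tt.input))
		})
	}
}
```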
+type ComparisonAssertionFunc func(TestingT, interface{}, interface{}, ...interface{}) bool + +// ValueAssertionFunc is a common function prototype when validating a single value. Can be useful +// for table driven tests. +type ValueAssertionFunc func(TestingT, interface{}, ...interface{}) bool + +// BoolAssertionFunc is a common function prototype when validating a bool value. Can be useful +// for table driven tests. +type BoolAssertionFunc func(TestingT, bool, ...interface{}) bool + +// ValuesAssertionFunc is a common function prototype when validating an error value. Can be useful +// for table driven tests. +type ErrorAssertionFunc func(TestingT, error, ...interface{}) bool + +// Comparison a custom function that returns true on success and false on failure +type Comparison func() (success bool) + +/* + Helper functions +*/ + +// ObjectsAreEqual determines if two objects are considered equal. +// +// This function does no assertion of any kind. +func ObjectsAreEqual(expected, actual interface{}) bool { + if expected == nil || actual == nil { + return expected == actual + } + + exp, ok := expected.([]byte) + if !ok { + return reflect.DeepEqual(expected, actual) + } + + act, ok := actual.([]byte) + if !ok { + return false + } + if exp == nil || act == nil { + return exp == nil && act == nil + } + return bytes.Equal(exp, act) +} + +// ObjectsAreEqualValues gets whether two objects are equal, or if their +// values are equal. +func ObjectsAreEqualValues(expected, actual interface{}) bool { + if ObjectsAreEqual(expected, actual) { + return true + } + + actualType := reflect.TypeOf(actual) + if actualType == nil { + return false + } + expectedValue := reflect.ValueOf(expected) + if expectedValue.IsValid() && expectedValue.Type().ConvertibleTo(actualType) { + // Attempt comparison after type conversion + return reflect.DeepEqual(expectedValue.Convert(actualType).Interface(), actual) + } + + return false +} + +/* CallerInfo is necessary because the assert functions use the testing object +internally, causing it to print the file:line of the assert method, rather than where +the problem actually occurred in calling code.*/ + +// CallerInfo returns an array of strings containing the file and line number +// of each stack frame leading from the current test to the assert call that +// failed. +func CallerInfo() []string { + + pc := uintptr(0) + file := "" + line := 0 + ok := false + name := "" + + callers := []string{} + for i := 0; ; i++ { + pc, file, line, ok = runtime.Caller(i) + if !ok { + // The breaks below failed to terminate the loop, and we ran off the + // end of the call stack. + break + } + + // This is a huge edge case, but it will panic if this is the case, see #180 + if file == "" { + break + } + + f := runtime.FuncForPC(pc) + if f == nil { + break + } + name = f.Name() + + // testing.tRunner is the standard library function that calls + // tests. Subtests are called directly by tRunner, without going through + // the Test/Benchmark/Example function that contains the t.Run calls, so + // with subtests we should break when we hit tRunner, without adding it + // to the list of callers. 
+ if name == "testing.tRunner" { + break + } + + parts := strings.Split(file, "/") + file = parts[len(parts)-1] + if len(parts) > 1 { + dir := parts[len(parts)-2] + if (dir != "assert" && dir != "mock" && dir != "require") || file == "mock_test.go" { + callers = append(callers, fmt.Sprintf("%s:%d", file, line)) + } + } + + // Drop the package + segments := strings.Split(name, ".") + name = segments[len(segments)-1] + if isTest(name, "Test") || + isTest(name, "Benchmark") || + isTest(name, "Example") { + break + } + } + + return callers +} + +// Stolen from the `go test` tool. +// isTest tells whether name looks like a test (or benchmark, according to prefix). +// It is a Test (say) if there is a character after Test that is not a lower-case letter. +// We don't want TesticularCancer. +func isTest(name, prefix string) bool { + if !strings.HasPrefix(name, prefix) { + return false + } + if len(name) == len(prefix) { // "Test" is ok + return true + } + rune, _ := utf8.DecodeRuneInString(name[len(prefix):]) + return !unicode.IsLower(rune) +} + +func messageFromMsgAndArgs(msgAndArgs ...interface{}) string { + if len(msgAndArgs) == 0 || msgAndArgs == nil { + return "" + } + if len(msgAndArgs) == 1 { + return msgAndArgs[0].(string) + } + if len(msgAndArgs) > 1 { + return fmt.Sprintf(msgAndArgs[0].(string), msgAndArgs[1:]...) + } + return "" +} + +// Aligns the provided message so that all lines after the first line start at the same location as the first line. +// Assumes that the first line starts at the correct location (after carriage return, tab, label, spacer and tab). +// The longestLabelLen parameter specifies the length of the longest label in the output (required becaues this is the +// basis on which the alignment occurs). +func indentMessageLines(message string, longestLabelLen int) string { + outBuf := new(bytes.Buffer) + + for i, scanner := 0, bufio.NewScanner(strings.NewReader(message)); scanner.Scan(); i++ { + // no need to align first line because it starts at the correct location (after the label) + if i != 0 { + // append alignLen+1 spaces to align with "{{longestLabel}}:" before adding tab + outBuf.WriteString("\n\t" + strings.Repeat(" ", longestLabelLen+1) + "\t") + } + outBuf.WriteString(scanner.Text()) + } + + return outBuf.String() +} + +type failNower interface { + FailNow() +} + +// FailNow fails test +func FailNow(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + Fail(t, failureMessage, msgAndArgs...) + + // We cannot extend TestingT with FailNow() and + // maintain backwards compatibility, so we fallback + // to panicking when FailNow is not available in + // TestingT. + // See issue #263 + + if t, ok := t.(failNower); ok { + t.FailNow() + } else { + panic("test failed and t is missing `FailNow()`") + } + return false +} + +// Fail reports a failure through +func Fail(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + content := []labeledContent{ + {"Error Trace", strings.Join(CallerInfo(), "\n\t\t\t")}, + {"Error", failureMessage}, + } + + // Add test name if the Go version supports it + if n, ok := t.(interface { + Name() string + }); ok { + content = append(content, labeledContent{"Test", n.Name()}) + } + + message := messageFromMsgAndArgs(msgAndArgs...) 
+ if len(message) > 0 { + content = append(content, labeledContent{"Messages", message}) + } + + t.Errorf("\n%s", ""+labeledOutput(content...)) + + return false +} + +type labeledContent struct { + label string + content string +} + +// labeledOutput returns a string consisting of the provided labeledContent. Each labeled output is appended in the following manner: +// +// \t{{label}}:{{align_spaces}}\t{{content}}\n +// +// The initial carriage return is required to undo/erase any padding added by testing.T.Errorf. The "\t{{label}}:" is for the label. +// If a label is shorter than the longest label provided, padding spaces are added to make all the labels match in length. Once this +// alignment is achieved, "\t{{content}}\n" is added for the output. +// +// If the content of the labeledOutput contains line breaks, the subsequent lines are aligned so that they start at the same location as the first line. +func labeledOutput(content ...labeledContent) string { + longestLabel := 0 + for _, v := range content { + if len(v.label) > longestLabel { + longestLabel = len(v.label) + } + } + var output string + for _, v := range content { + output += "\t" + v.label + ":" + strings.Repeat(" ", longestLabel-len(v.label)) + "\t" + indentMessageLines(v.content, longestLabel) + "\n" + } + return output +} + +// Implements asserts that an object is implemented by the specified interface. +// +// assert.Implements(t, (*MyInterface)(nil), new(MyObject)) +func Implements(t TestingT, interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + interfaceType := reflect.TypeOf(interfaceObject).Elem() + + if object == nil { + return Fail(t, fmt.Sprintf("Cannot check if nil implements %v", interfaceType), msgAndArgs...) + } + if !reflect.TypeOf(object).Implements(interfaceType) { + return Fail(t, fmt.Sprintf("%T must implement %v", object, interfaceType), msgAndArgs...) + } + + return true +} + +// IsType asserts that the specified objects are of the same type. +func IsType(t TestingT, expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + if !ObjectsAreEqual(reflect.TypeOf(object), reflect.TypeOf(expectedType)) { + return Fail(t, fmt.Sprintf("Object expected to be of type %v, but was %v", reflect.TypeOf(expectedType), reflect.TypeOf(object)), msgAndArgs...) + } + + return true +} + +// Equal asserts that two objects are equal. +// +// assert.Equal(t, 123, 123) +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. +func Equal(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if err := validateEqualArgs(expected, actual); err != nil { + return Fail(t, fmt.Sprintf("Invalid operation: %#v == %#v (%s)", + expected, actual, err), msgAndArgs...) + } + + if !ObjectsAreEqual(expected, actual) { + diff := diff(expected, actual) + expected, actual = formatUnequalValues(expected, actual) + return Fail(t, fmt.Sprintf("Not equal: \n"+ + "expected: %s\n"+ + "actual : %s%s", expected, actual, diff), msgAndArgs...) + } + + return true + +} + +// formatUnequalValues takes two values of arbitrary types and returns string +// representations appropriate to be presented to the user. 
+// +// If the values are not of like type, the returned strings will be prefixed +// with the type name, and the value will be enclosed in parenthesis similar +// to a type conversion in the Go grammar. +func formatUnequalValues(expected, actual interface{}) (e string, a string) { + if reflect.TypeOf(expected) != reflect.TypeOf(actual) { + return fmt.Sprintf("%T(%#v)", expected, expected), + fmt.Sprintf("%T(%#v)", actual, actual) + } + + return fmt.Sprintf("%#v", expected), + fmt.Sprintf("%#v", actual) +} + +// EqualValues asserts that two objects are equal or convertable to the same types +// and equal. +// +// assert.EqualValues(t, uint32(123), int32(123)) +func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + if !ObjectsAreEqualValues(expected, actual) { + diff := diff(expected, actual) + expected, actual = formatUnequalValues(expected, actual) + return Fail(t, fmt.Sprintf("Not equal: \n"+ + "expected: %s\n"+ + "actual : %s%s", expected, actual, diff), msgAndArgs...) + } + + return true + +} + +// Exactly asserts that two objects are equal in value and type. +// +// assert.Exactly(t, int32(123), int64(123)) +func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + aType := reflect.TypeOf(expected) + bType := reflect.TypeOf(actual) + + if aType != bType { + return Fail(t, fmt.Sprintf("Types expected to match exactly\n\t%v != %v", aType, bType), msgAndArgs...) + } + + return Equal(t, expected, actual, msgAndArgs...) + +} + +// NotNil asserts that the specified object is not nil. +// +// assert.NotNil(t, err) +func NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if !isNil(object) { + return true + } + return Fail(t, "Expected value not to be nil.", msgAndArgs...) +} + +// isNil checks if a specified object is nil or not, without Failing. +func isNil(object interface{}) bool { + if object == nil { + return true + } + + value := reflect.ValueOf(object) + kind := value.Kind() + if kind >= reflect.Chan && kind <= reflect.Slice && value.IsNil() { + return true + } + + return false +} + +// Nil asserts that the specified object is nil. +// +// assert.Nil(t, err) +func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if isNil(object) { + return true + } + return Fail(t, fmt.Sprintf("Expected nil, but got: %#v", object), msgAndArgs...) +} + +// isEmpty gets whether the specified object is considered empty or not. +func isEmpty(object interface{}) bool { + + // get nil case out of the way + if object == nil { + return true + } + + objValue := reflect.ValueOf(object) + + switch objValue.Kind() { + // collection types are empty when they have no element + case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice: + return objValue.Len() == 0 + // pointers are empty if nil or if the value they point to is empty + case reflect.Ptr: + if objValue.IsNil() { + return true + } + deref := objValue.Elem().Interface() + return isEmpty(deref) + // for all other types, compare against the zero value + default: + zero := reflect.Zero(objValue.Type()) + return reflect.DeepEqual(object, zero.Interface()) + } +} + +// Empty asserts that the specified object is empty. I.e. nil, "", false, 0 or either +// a slice or a channel with len == 0. 
+// +// assert.Empty(t, obj) +func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + pass := isEmpty(object) + if !pass { + Fail(t, fmt.Sprintf("Should be empty, but was %v", object), msgAndArgs...) + } + + return pass + +} + +// NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// if assert.NotEmpty(t, obj) { +// assert.Equal(t, "two", obj[1]) +// } +func NotEmpty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + pass := !isEmpty(object) + if !pass { + Fail(t, fmt.Sprintf("Should NOT be empty, but was %v", object), msgAndArgs...) + } + + return pass + +} + +// getLen try to get length of object. +// return (false, 0) if impossible. +func getLen(x interface{}) (ok bool, length int) { + v := reflect.ValueOf(x) + defer func() { + if e := recover(); e != nil { + ok = false + } + }() + return true, v.Len() +} + +// Len asserts that the specified object has specific length. +// Len also fails if the object has a type that len() not accept. +// +// assert.Len(t, mySlice, 3) +func Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + ok, l := getLen(object) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", object), msgAndArgs...) + } + + if l != length { + return Fail(t, fmt.Sprintf("\"%s\" should have %d item(s), but has %d", object, length, l), msgAndArgs...) + } + return true +} + +// True asserts that the specified value is true. +// +// assert.True(t, myBool) +func True(t TestingT, value bool, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if h, ok := t.(interface { + Helper() + }); ok { + h.Helper() + } + + if value != true { + return Fail(t, "Should be true", msgAndArgs...) + } + + return true + +} + +// False asserts that the specified value is false. +// +// assert.False(t, myBool) +func False(t TestingT, value bool, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + if value != false { + return Fail(t, "Should be false", msgAndArgs...) + } + + return true + +} + +// NotEqual asserts that the specified values are NOT equal. +// +// assert.NotEqual(t, obj1, obj2) +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). +func NotEqual(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if err := validateEqualArgs(expected, actual); err != nil { + return Fail(t, fmt.Sprintf("Invalid operation: %#v != %#v (%s)", + expected, actual, err), msgAndArgs...) + } + + if ObjectsAreEqual(expected, actual) { + return Fail(t, fmt.Sprintf("Should not be: %#v\n", actual), msgAndArgs...) + } + + return true + +} + +// containsElement try loop over the list check if the list includes the element. +// return (false, false) if impossible. +// return (true, false) if element was not found. +// return (true, true) if element was found. 
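The `includeElement` helper defined below implements the (ok, found) protocol described in the comment above for strings, maps, and slices; `Contains`, `NotContains`, and `Subset` are built on it. A short illustration of the resulting behavior with hypothetical values — note that for maps only the keys are searched:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestContainsFlavours(t *testing.T) {
	assert.Contains(t, "Hello World", "World")                        // substring match
	assert.Contains(t, []string{"Hello", "World"}, "World")           // slice element match
	assert.Contains(t, map[string]string{"Hello": "World"}, "Hello")  // map KEY match
	assert.NotContains(t, map[string]string{"Hello": "World"}, "World") // map values are not searched

	assert.Len(t, []int{1, 2, 3}, 3)
	assert.Empty(t, "")     // "" is considered empty
	assert.NotEmpty(t, " ") // a single space is not
}
```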
+func includeElement(list interface{}, element interface{}) (ok, found bool) { + + listValue := reflect.ValueOf(list) + elementValue := reflect.ValueOf(element) + defer func() { + if e := recover(); e != nil { + ok = false + found = false + } + }() + + if reflect.TypeOf(list).Kind() == reflect.String { + return true, strings.Contains(listValue.String(), elementValue.String()) + } + + if reflect.TypeOf(list).Kind() == reflect.Map { + mapKeys := listValue.MapKeys() + for i := 0; i < len(mapKeys); i++ { + if ObjectsAreEqual(mapKeys[i].Interface(), element) { + return true, true + } + } + return true, false + } + + for i := 0; i < listValue.Len(); i++ { + if ObjectsAreEqual(listValue.Index(i).Interface(), element) { + return true, true + } + } + return true, false + +} + +// Contains asserts that the specified string, list(array, slice...) or map contains the +// specified substring or element. +// +// assert.Contains(t, "Hello World", "World") +// assert.Contains(t, ["Hello", "World"], "World") +// assert.Contains(t, {"Hello": "World"}, "Hello") +func Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + ok, found := includeElement(s, contains) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", s), msgAndArgs...) + } + if !found { + return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", s, contains), msgAndArgs...) + } + + return true + +} + +// NotContains asserts that the specified string, list(array, slice...) or map does NOT contain the +// specified substring or element. +// +// assert.NotContains(t, "Hello World", "Earth") +// assert.NotContains(t, ["Hello", "World"], "Earth") +// assert.NotContains(t, {"Hello": "World"}, "Earth") +func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + ok, found := includeElement(s, contains) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", s), msgAndArgs...) + } + if found { + return Fail(t, fmt.Sprintf("\"%s\" should not contain \"%s\"", s, contains), msgAndArgs...) + } + + return true + +} + +// Subset asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// assert.Subset(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]") +func Subset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if subset == nil { + return true // we consider nil to be equal to the nil set + } + + subsetValue := reflect.ValueOf(subset) + defer func() { + if e := recover(); e != nil { + ok = false + } + }() + + listKind := reflect.TypeOf(list).Kind() + subsetKind := reflect.TypeOf(subset).Kind() + + if listKind != reflect.Array && listKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...) + } + + if subsetKind != reflect.Array && subsetKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...) + } + + for i := 0; i < subsetValue.Len(); i++ { + element := subsetValue.Index(i).Interface() + ok, found := includeElement(list, element) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...) + } + if !found { + return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", list, element), msgAndArgs...) 
+ } + } + + return true +} + +// NotSubset asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// assert.NotSubset(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]") +func NotSubset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if subset == nil { + return Fail(t, fmt.Sprintf("nil is the empty set which is a subset of every set"), msgAndArgs...) + } + + subsetValue := reflect.ValueOf(subset) + defer func() { + if e := recover(); e != nil { + ok = false + } + }() + + listKind := reflect.TypeOf(list).Kind() + subsetKind := reflect.TypeOf(subset).Kind() + + if listKind != reflect.Array && listKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...) + } + + if subsetKind != reflect.Array && subsetKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...) + } + + for i := 0; i < subsetValue.Len(); i++ { + element := subsetValue.Index(i).Interface() + ok, found := includeElement(list, element) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...) + } + if !found { + return true + } + } + + return Fail(t, fmt.Sprintf("%q is a subset of %q", subset, list), msgAndArgs...) +} + +// ElementsMatch asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// assert.ElementsMatch(t, [1, 3, 2, 3], [1, 3, 3, 2]) +func ElementsMatch(t TestingT, listA, listB interface{}, msgAndArgs ...interface{}) (ok bool) { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if isEmpty(listA) && isEmpty(listB) { + return true + } + + aKind := reflect.TypeOf(listA).Kind() + bKind := reflect.TypeOf(listB).Kind() + + if aKind != reflect.Array && aKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", listA, aKind), msgAndArgs...) + } + + if bKind != reflect.Array && bKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", listB, bKind), msgAndArgs...) + } + + aValue := reflect.ValueOf(listA) + bValue := reflect.ValueOf(listB) + + aLen := aValue.Len() + bLen := bValue.Len() + + if aLen != bLen { + return Fail(t, fmt.Sprintf("lengths don't match: %d != %d", aLen, bLen), msgAndArgs...) + } + + // Mark indexes in bValue that we already used + visited := make([]bool, bLen) + for i := 0; i < aLen; i++ { + element := aValue.Index(i).Interface() + found := false + for j := 0; j < bLen; j++ { + if visited[j] { + continue + } + if ObjectsAreEqual(bValue.Index(j).Interface(), element) { + visited[j] = true + found = true + break + } + } + if !found { + return Fail(t, fmt.Sprintf("element %s appears more times in %s than in %s", element, aValue, bValue), msgAndArgs...) + } + } + + return true +} + +// Condition uses a Comparison to assert a complex condition. +func Condition(t TestingT, comp Comparison, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + result := comp() + if !result { + Fail(t, "Condition failed!", msgAndArgs...) 
+ } + return result +} + +// PanicTestFunc defines a func that should be passed to the assert.Panics and assert.NotPanics +// methods, and represents a simple func that takes no arguments, and returns nothing. +type PanicTestFunc func() + +// didPanic returns true if the function passed to it panics. Otherwise, it returns false. +func didPanic(f PanicTestFunc) (bool, interface{}) { + + didPanic := false + var message interface{} + func() { + + defer func() { + if message = recover(); message != nil { + didPanic = true + } + }() + + // call the target function + f() + + }() + + return didPanic, message + +} + +// Panics asserts that the code inside the specified PanicTestFunc panics. +// +// assert.Panics(t, func(){ GoCrazy() }) +func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + if funcDidPanic, panicValue := didPanic(f); !funcDidPanic { + return Fail(t, fmt.Sprintf("func %#v should panic\n\tPanic value:\t%#v", f, panicValue), msgAndArgs...) + } + + return true +} + +// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// assert.PanicsWithValue(t, "crazy error", func(){ GoCrazy() }) +func PanicsWithValue(t TestingT, expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + funcDidPanic, panicValue := didPanic(f) + if !funcDidPanic { + return Fail(t, fmt.Sprintf("func %#v should panic\n\tPanic value:\t%#v", f, panicValue), msgAndArgs...) + } + if panicValue != expected { + return Fail(t, fmt.Sprintf("func %#v should panic with value:\t%#v\n\tPanic value:\t%#v", f, expected, panicValue), msgAndArgs...) + } + + return true +} + +// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic. +// +// assert.NotPanics(t, func(){ RemainCalm() }) +func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + if funcDidPanic, panicValue := didPanic(f); funcDidPanic { + return Fail(t, fmt.Sprintf("func %#v should not panic\n\tPanic value:\t%v", f, panicValue), msgAndArgs...) + } + + return true +} + +// WithinDuration asserts that the two times are within duration delta of each other. +// +// assert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second) +func WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + dt := expected.Sub(actual) + if dt < -delta || dt > delta { + return Fail(t, fmt.Sprintf("Max difference between %v and %v allowed is %v, but difference was %v", expected, actual, delta, dt), msgAndArgs...) + } + + return true +} + +func toFloat(x interface{}) (float64, bool) { + var xf float64 + xok := true + + switch xn := x.(type) { + case uint8: + xf = float64(xn) + case uint16: + xf = float64(xn) + case uint32: + xf = float64(xn) + case uint64: + xf = float64(xn) + case int: + xf = float64(xn) + case int8: + xf = float64(xn) + case int16: + xf = float64(xn) + case int32: + xf = float64(xn) + case int64: + xf = float64(xn) + case float32: + xf = float64(xn) + case float64: + xf = float64(xn) + case time.Duration: + xf = float64(xn) + default: + xok = false + } + + return xf, xok +} + +// InDelta asserts that the two numerals are within delta of each other. 
+// +// assert.InDelta(t, math.Pi, (22 / 7.0), 0.01) +func InDelta(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + af, aok := toFloat(expected) + bf, bok := toFloat(actual) + + if !aok || !bok { + return Fail(t, fmt.Sprintf("Parameters must be numerical"), msgAndArgs...) + } + + if math.IsNaN(af) { + return Fail(t, fmt.Sprintf("Expected must not be NaN"), msgAndArgs...) + } + + if math.IsNaN(bf) { + return Fail(t, fmt.Sprintf("Expected %v with delta %v, but was NaN", expected, delta), msgAndArgs...) + } + + dt := af - bf + if dt < -delta || dt > delta { + return Fail(t, fmt.Sprintf("Max difference between %v and %v allowed is %v, but difference was %v", expected, actual, delta, dt), msgAndArgs...) + } + + return true +} + +// InDeltaSlice is the same as InDelta, except it compares two slices. +func InDeltaSlice(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if expected == nil || actual == nil || + reflect.TypeOf(actual).Kind() != reflect.Slice || + reflect.TypeOf(expected).Kind() != reflect.Slice { + return Fail(t, fmt.Sprintf("Parameters must be slice"), msgAndArgs...) + } + + actualSlice := reflect.ValueOf(actual) + expectedSlice := reflect.ValueOf(expected) + + for i := 0; i < actualSlice.Len(); i++ { + result := InDelta(t, actualSlice.Index(i).Interface(), expectedSlice.Index(i).Interface(), delta, msgAndArgs...) + if !result { + return result + } + } + + return true +} + +// InDeltaMapValues is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func InDeltaMapValues(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if expected == nil || actual == nil || + reflect.TypeOf(actual).Kind() != reflect.Map || + reflect.TypeOf(expected).Kind() != reflect.Map { + return Fail(t, "Arguments must be maps", msgAndArgs...) + } + + expectedMap := reflect.ValueOf(expected) + actualMap := reflect.ValueOf(actual) + + if expectedMap.Len() != actualMap.Len() { + return Fail(t, "Arguments must have the same number of keys", msgAndArgs...) + } + + for _, k := range expectedMap.MapKeys() { + ev := expectedMap.MapIndex(k) + av := actualMap.MapIndex(k) + + if !ev.IsValid() { + return Fail(t, fmt.Sprintf("missing key %q in expected map", k), msgAndArgs...) + } + + if !av.IsValid() { + return Fail(t, fmt.Sprintf("missing key %q in actual map", k), msgAndArgs...) 
+ } + + if !InDelta( + t, + ev.Interface(), + av.Interface(), + delta, + msgAndArgs..., + ) { + return false + } + } + + return true +} + +func calcRelativeError(expected, actual interface{}) (float64, error) { + af, aok := toFloat(expected) + if !aok { + return 0, fmt.Errorf("expected value %q cannot be converted to float", expected) + } + if af == 0 { + return 0, fmt.Errorf("expected value must have a value other than zero to calculate the relative error") + } + bf, bok := toFloat(actual) + if !bok { + return 0, fmt.Errorf("actual value %q cannot be converted to float", actual) + } + + return math.Abs(af-bf) / math.Abs(af), nil +} + +// InEpsilon asserts that expected and actual have a relative error less than epsilon +func InEpsilon(t TestingT, expected, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + actualEpsilon, err := calcRelativeError(expected, actual) + if err != nil { + return Fail(t, err.Error(), msgAndArgs...) + } + if actualEpsilon > epsilon { + return Fail(t, fmt.Sprintf("Relative error is too high: %#v (expected)\n"+ + " < %#v (actual)", epsilon, actualEpsilon), msgAndArgs...) + } + + return true +} + +// InEpsilonSlice is the same as InEpsilon, except it compares each value from two slices. +func InEpsilonSlice(t TestingT, expected, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if expected == nil || actual == nil || + reflect.TypeOf(actual).Kind() != reflect.Slice || + reflect.TypeOf(expected).Kind() != reflect.Slice { + return Fail(t, fmt.Sprintf("Parameters must be slice"), msgAndArgs...) + } + + actualSlice := reflect.ValueOf(actual) + expectedSlice := reflect.ValueOf(expected) + + for i := 0; i < actualSlice.Len(); i++ { + result := InEpsilon(t, actualSlice.Index(i).Interface(), expectedSlice.Index(i).Interface(), epsilon) + if !result { + return result + } + } + + return true +} + +/* + Errors +*/ + +// NoError asserts that a function returned no error (i.e. `nil`). +// +// actualObj, err := SomeFunction() +// if assert.NoError(t, err) { +// assert.Equal(t, expectedObj, actualObj) +// } +func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if err != nil { + return Fail(t, fmt.Sprintf("Received unexpected error:\n%+v", err), msgAndArgs...) + } + + return true +} + +// Error asserts that a function returned an error (i.e. not `nil`). +// +// actualObj, err := SomeFunction() +// if assert.Error(t, err) { +// assert.Equal(t, expectedError, err) +// } +func Error(t TestingT, err error, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + if err == nil { + return Fail(t, "An error is expected but got nil.", msgAndArgs...) + } + + return true +} + +// EqualError asserts that a function returned an error (i.e. not `nil`) +// and that it is equal to the provided error. +// +// actualObj, err := SomeFunction() +// assert.EqualError(t, err, expectedErrorString) +func EqualError(t TestingT, theError error, errString string, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if !Error(t, theError, msgAndArgs...) 
{ + return false + } + expected := errString + actual := theError.Error() + // don't need to use deep equals here, we know they are both strings + if expected != actual { + return Fail(t, fmt.Sprintf("Error message not equal:\n"+ + "expected: %q\n"+ + "actual : %q", expected, actual), msgAndArgs...) + } + return true +} + +// matchRegexp return true if a specified regexp matches a string. +func matchRegexp(rx interface{}, str interface{}) bool { + + var r *regexp.Regexp + if rr, ok := rx.(*regexp.Regexp); ok { + r = rr + } else { + r = regexp.MustCompile(fmt.Sprint(rx)) + } + + return (r.FindStringIndex(fmt.Sprint(str)) != nil) + +} + +// Regexp asserts that a specified regexp matches a string. +// +// assert.Regexp(t, regexp.MustCompile("start"), "it's starting") +// assert.Regexp(t, "start...$", "it's not starting") +func Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + + match := matchRegexp(rx, str) + + if !match { + Fail(t, fmt.Sprintf("Expect \"%v\" to match \"%v\"", str, rx), msgAndArgs...) + } + + return match +} + +// NotRegexp asserts that a specified regexp does not match a string. +// +// assert.NotRegexp(t, regexp.MustCompile("starts"), "it's starting") +// assert.NotRegexp(t, "^start", "it's not starting") +func NotRegexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + match := matchRegexp(rx, str) + + if match { + Fail(t, fmt.Sprintf("Expect \"%v\" to NOT match \"%v\"", str, rx), msgAndArgs...) + } + + return !match + +} + +// Zero asserts that i is the zero value for its type. +func Zero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if i != nil && !reflect.DeepEqual(i, reflect.Zero(reflect.TypeOf(i)).Interface()) { + return Fail(t, fmt.Sprintf("Should be zero, but was %v", i), msgAndArgs...) + } + return true +} + +// NotZero asserts that i is not the zero value for its type. +func NotZero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + if i == nil || reflect.DeepEqual(i, reflect.Zero(reflect.TypeOf(i)).Interface()) { + return Fail(t, fmt.Sprintf("Should not be zero, but was %v", i), msgAndArgs...) + } + return true +} + +// FileExists checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func FileExists(t TestingT, path string, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + info, err := os.Lstat(path) + if err != nil { + if os.IsNotExist(err) { + return Fail(t, fmt.Sprintf("unable to find file %q", path), msgAndArgs...) + } + return Fail(t, fmt.Sprintf("error when running os.Lstat(%q): %s", path, err), msgAndArgs...) + } + if info.IsDir() { + return Fail(t, fmt.Sprintf("%q is a directory", path), msgAndArgs...) + } + return true +} + +// DirExists checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func DirExists(t TestingT, path string, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + info, err := os.Lstat(path) + if err != nil { + if os.IsNotExist(err) { + return Fail(t, fmt.Sprintf("unable to find file %q", path), msgAndArgs...) 
+ } + return Fail(t, fmt.Sprintf("error when running os.Lstat(%q): %s", path, err), msgAndArgs...) + } + if !info.IsDir() { + return Fail(t, fmt.Sprintf("%q is a file", path), msgAndArgs...) + } + return true +} + +// JSONEq asserts that two JSON strings are equivalent. +// +// assert.JSONEq(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`) +func JSONEq(t TestingT, expected string, actual string, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + var expectedJSONAsInterface, actualJSONAsInterface interface{} + + if err := json.Unmarshal([]byte(expected), &expectedJSONAsInterface); err != nil { + return Fail(t, fmt.Sprintf("Expected value ('%s') is not valid json.\nJSON parsing error: '%s'", expected, err.Error()), msgAndArgs...) + } + + if err := json.Unmarshal([]byte(actual), &actualJSONAsInterface); err != nil { + return Fail(t, fmt.Sprintf("Input ('%s') needs to be valid json.\nJSON parsing error: '%s'", actual, err.Error()), msgAndArgs...) + } + + return Equal(t, expectedJSONAsInterface, actualJSONAsInterface, msgAndArgs...) +} + +func typeAndKind(v interface{}) (reflect.Type, reflect.Kind) { + t := reflect.TypeOf(v) + k := t.Kind() + + if k == reflect.Ptr { + t = t.Elem() + k = t.Kind() + } + return t, k +} + +// diff returns a diff of both values as long as both are of the same type and +// are a struct, map, slice or array. Otherwise it returns an empty string. +func diff(expected interface{}, actual interface{}) string { + if expected == nil || actual == nil { + return "" + } + + et, ek := typeAndKind(expected) + at, _ := typeAndKind(actual) + + if et != at { + return "" + } + + if ek != reflect.Struct && ek != reflect.Map && ek != reflect.Slice && ek != reflect.Array && ek != reflect.String { + return "" + } + + var e, a string + if ek != reflect.String { + e = spewConfig.Sdump(expected) + a = spewConfig.Sdump(actual) + } else { + e = expected.(string) + a = actual.(string) + } + + diff, _ := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{ + A: difflib.SplitLines(e), + B: difflib.SplitLines(a), + FromFile: "Expected", + FromDate: "", + ToFile: "Actual", + ToDate: "", + Context: 1, + }) + + return "\n\nDiff:\n" + diff +} + +// validateEqualArgs checks whether provided arguments can be safely used in the +// Equal/NotEqual functions. +func validateEqualArgs(expected, actual interface{}) error { + if isFunction(expected) || isFunction(actual) { + return errors.New("cannot take func type as argument") + } + return nil +} + +func isFunction(arg interface{}) bool { + if arg == nil { + return false + } + return reflect.TypeOf(arg).Kind() == reflect.Func +} + +var spewConfig = spew.ConfigState{ + Indent: " ", + DisablePointerAddresses: true, + DisableCapacities: true, + SortKeys: true, +} + +type tHelper interface { + Helper() +} diff --git a/vendor/github.com/stretchr/testify/assert/doc.go b/vendor/github.com/stretchr/testify/assert/doc.go new file mode 100644 index 00000000..c9dccc4d --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/doc.go @@ -0,0 +1,45 @@ +// Package assert provides a set of comprehensive testing tools for use with the normal Go testing system. 
+// +// Example Usage +// +// The following is a complete example using assert in a standard test function: +// import ( +// "testing" +// "github.com/stretchr/testify/assert" +// ) +// +// func TestSomething(t *testing.T) { +// +// var a string = "Hello" +// var b string = "Hello" +// +// assert.Equal(t, a, b, "The two words should be the same.") +// +// } +// +// if you assert many times, use the format below: +// +// import ( +// "testing" +// "github.com/stretchr/testify/assert" +// ) +// +// func TestSomething(t *testing.T) { +// assert := assert.New(t) +// +// var a string = "Hello" +// var b string = "Hello" +// +// assert.Equal(a, b, "The two words should be the same.") +// } +// +// Assertions +// +// Assertions allow you to easily write test code, and are global funcs in the `assert` package. +// All assertion functions take, as the first argument, the `*testing.T` object provided by the +// testing framework. This allows the assertion funcs to write the failings and other details to +// the correct place. +// +// Every assertion function also takes an optional string message as the final argument, +// allowing custom error messages to be appended to the message the assertion method outputs. +package assert diff --git a/vendor/github.com/stretchr/testify/assert/errors.go b/vendor/github.com/stretchr/testify/assert/errors.go new file mode 100644 index 00000000..ac9dc9d1 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/errors.go @@ -0,0 +1,10 @@ +package assert + +import ( + "errors" +) + +// AnError is an error instance useful for testing. If the code does not care +// about error specifics, and only needs to return the error for example, this +// error should be used to make the test code more readable. +var AnError = errors.New("assert.AnError general error for testing") diff --git a/vendor/github.com/stretchr/testify/assert/forward_assertions.go b/vendor/github.com/stretchr/testify/assert/forward_assertions.go new file mode 100644 index 00000000..9ad56851 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/forward_assertions.go @@ -0,0 +1,16 @@ +package assert + +// Assertions provides assertion methods around the +// TestingT interface. +type Assertions struct { + t TestingT +} + +// New makes a new Assertions object for the specified TestingT. +func New(t TestingT) *Assertions { + return &Assertions{ + t: t, + } +} + +//go:generate go run ../_codegen/main.go -output-package=assert -template=assertion_forward.go.tmpl -include-format-funcs diff --git a/vendor/github.com/stretchr/testify/assert/http_assertions.go b/vendor/github.com/stretchr/testify/assert/http_assertions.go new file mode 100644 index 00000000..df46fa77 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/http_assertions.go @@ -0,0 +1,143 @@ +package assert + +import ( + "fmt" + "net/http" + "net/http/httptest" + "net/url" + "strings" +) + +// httpCode is a helper that returns HTTP code of the response. It returns -1 and +// an error if building a new request fails. +func httpCode(handler http.HandlerFunc, method, url string, values url.Values) (int, error) { + w := httptest.NewRecorder() + req, err := http.NewRequest(method, url, nil) + if err != nil { + return -1, err + } + req.URL.RawQuery = values.Encode() + handler(w, req) + return w.Code, nil +} + +// HTTPSuccess asserts that a specified handler returns a success status code. 
+// +// assert.HTTPSuccess(t, myHandler, "POST", "http://www.google.com", nil) +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + code, err := httpCode(handler, method, url, values) + if err != nil { + Fail(t, fmt.Sprintf("Failed to build test request, got error: %s", err)) + return false + } + + isSuccessCode := code >= http.StatusOK && code <= http.StatusPartialContent + if !isSuccessCode { + Fail(t, fmt.Sprintf("Expected HTTP success status code for %q but received %d", url+"?"+values.Encode(), code)) + } + + return isSuccessCode +} + +// HTTPRedirect asserts that a specified handler returns a redirect status code. +// +// assert.HTTPRedirect(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + code, err := httpCode(handler, method, url, values) + if err != nil { + Fail(t, fmt.Sprintf("Failed to build test request, got error: %s", err)) + return false + } + + isRedirectCode := code >= http.StatusMultipleChoices && code <= http.StatusTemporaryRedirect + if !isRedirectCode { + Fail(t, fmt.Sprintf("Expected HTTP redirect status code for %q but received %d", url+"?"+values.Encode(), code)) + } + + return isRedirectCode +} + +// HTTPError asserts that a specified handler returns an error status code. +// +// assert.HTTPError(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPError(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + code, err := httpCode(handler, method, url, values) + if err != nil { + Fail(t, fmt.Sprintf("Failed to build test request, got error: %s", err)) + return false + } + + isErrorCode := code >= http.StatusBadRequest + if !isErrorCode { + Fail(t, fmt.Sprintf("Expected HTTP error status code for %q but received %d", url+"?"+values.Encode(), code)) + } + + return isErrorCode +} + +// HTTPBody is a helper that returns HTTP body of the response. It returns +// empty string if building a new request fails. +func HTTPBody(handler http.HandlerFunc, method, url string, values url.Values) string { + w := httptest.NewRecorder() + req, err := http.NewRequest(method, url+"?"+values.Encode(), nil) + if err != nil { + return "" + } + handler(w, req) + return w.Body.String() +} + +// HTTPBodyContains asserts that a specified handler returns a +// body that contains a string. +// +// assert.HTTPBodyContains(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky") +// +// Returns whether the assertion was successful (true) or not (false). 
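+//
+// A fuller sketch, with a hypothetical handler defined inline (helloHandler and
+// params below are illustrative, not part of this package; net/http, net/url
+// and fmt are assumed to be imported):
+//
+//	helloHandler := func(w http.ResponseWriter, r *http.Request) {
+//		fmt.Fprintf(w, "hello, %s", r.URL.Query().Get("name"))
+//	}
+//	params := url.Values{"name": []string{"World"}}
+//	assert.HTTPSuccess(t, helloHandler, "GET", "/hello", params)
+//	assert.HTTPBodyContains(t, helloHandler, "GET", "/hello", params, "World")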
+func HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + body := HTTPBody(handler, method, url, values) + + contains := strings.Contains(body, fmt.Sprint(str)) + if !contains { + Fail(t, fmt.Sprintf("Expected response body for \"%s\" to contain \"%s\" but found \"%s\"", url+"?"+values.Encode(), str, body)) + } + + return contains +} + +// HTTPBodyNotContains asserts that a specified handler returns a +// body that does not contain a string. +// +// assert.HTTPBodyNotContains(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPBodyNotContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { + if h, ok := t.(tHelper); ok { + h.Helper() + } + body := HTTPBody(handler, method, url, values) + + contains := strings.Contains(body, fmt.Sprint(str)) + if contains { + Fail(t, fmt.Sprintf("Expected response body for \"%s\" to NOT contain \"%s\" but found \"%s\"", url+"?"+values.Encode(), str, body)) + } + + return !contains +} diff --git a/vendor/golang.org/x/crypto/AUTHORS b/vendor/golang.org/x/crypto/AUTHORS new file mode 100644 index 00000000..2b00ddba --- /dev/null +++ b/vendor/golang.org/x/crypto/AUTHORS @@ -0,0 +1,3 @@ +# This source code refers to The Go Authors for copyright purposes. +# The master list of authors is in the main Go distribution, +# visible at https://tip.golang.org/AUTHORS. diff --git a/vendor/golang.org/x/crypto/CONTRIBUTORS b/vendor/golang.org/x/crypto/CONTRIBUTORS new file mode 100644 index 00000000..1fbd3e97 --- /dev/null +++ b/vendor/golang.org/x/crypto/CONTRIBUTORS @@ -0,0 +1,3 @@ +# This source code was written by the Go contributors. +# The master list of contributors is in the main Go distribution, +# visible at https://tip.golang.org/CONTRIBUTORS. diff --git a/vendor/golang.org/x/crypto/LICENSE b/vendor/golang.org/x/crypto/LICENSE new file mode 100644 index 00000000..6a66aea5 --- /dev/null +++ b/vendor/golang.org/x/crypto/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/crypto/PATENTS b/vendor/golang.org/x/crypto/PATENTS new file mode 100644 index 00000000..73309904 --- /dev/null +++ b/vendor/golang.org/x/crypto/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/crypto/bcrypt/base64.go b/vendor/golang.org/x/crypto/bcrypt/base64.go new file mode 100644 index 00000000..fc311609 --- /dev/null +++ b/vendor/golang.org/x/crypto/bcrypt/base64.go @@ -0,0 +1,35 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package bcrypt + +import "encoding/base64" + +const alphabet = "./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789" + +var bcEncoding = base64.NewEncoding(alphabet) + +func base64Encode(src []byte) []byte { + n := bcEncoding.EncodedLen(len(src)) + dst := make([]byte, n) + bcEncoding.Encode(dst, src) + for dst[n-1] == '=' { + n-- + } + return dst[:n] +} + +func base64Decode(src []byte) ([]byte, error) { + numOfEquals := 4 - (len(src) % 4) + for i := 0; i < numOfEquals; i++ { + src = append(src, '=') + } + + dst := make([]byte, bcEncoding.DecodedLen(len(src))) + n, err := bcEncoding.Decode(dst, src) + if err != nil { + return nil, err + } + return dst[:n], nil +} diff --git a/vendor/golang.org/x/crypto/bcrypt/bcrypt.go b/vendor/golang.org/x/crypto/bcrypt/bcrypt.go new file mode 100644 index 00000000..aeb73f81 --- /dev/null +++ b/vendor/golang.org/x/crypto/bcrypt/bcrypt.go @@ -0,0 +1,295 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// Package bcrypt implements Provos and Mazières's bcrypt adaptive hashing +// algorithm. See http://www.usenix.org/event/usenix99/provos/provos.pdf +package bcrypt // import "golang.org/x/crypto/bcrypt" + +// The code is a port of Provos and Mazières's C implementation. +import ( + "crypto/rand" + "crypto/subtle" + "errors" + "fmt" + "io" + "strconv" + + "golang.org/x/crypto/blowfish" +) + +const ( + MinCost int = 4 // the minimum allowable cost as passed in to GenerateFromPassword + MaxCost int = 31 // the maximum allowable cost as passed in to GenerateFromPassword + DefaultCost int = 10 // the cost that will actually be set if a cost below MinCost is passed into GenerateFromPassword +) + +// The error returned from CompareHashAndPassword when a password and hash do +// not match. +var ErrMismatchedHashAndPassword = errors.New("crypto/bcrypt: hashedPassword is not the hash of the given password") + +// The error returned from CompareHashAndPassword when a hash is too short to +// be a bcrypt hash. +var ErrHashTooShort = errors.New("crypto/bcrypt: hashedSecret too short to be a bcrypted password") + +// The error returned from CompareHashAndPassword when a hash was created with +// a bcrypt algorithm newer than this implementation. +type HashVersionTooNewError byte + +func (hv HashVersionTooNewError) Error() string { + return fmt.Sprintf("crypto/bcrypt: bcrypt algorithm version '%c' requested is newer than current version '%c'", byte(hv), majorVersion) +} + +// The error returned from CompareHashAndPassword when a hash starts with something other than '$' +type InvalidHashPrefixError byte + +func (ih InvalidHashPrefixError) Error() string { + return fmt.Sprintf("crypto/bcrypt: bcrypt hashes must start with '$', but hashedSecret started with '%c'", byte(ih)) +} + +type InvalidCostError int + +func (ic InvalidCostError) Error() string { + return fmt.Sprintf("crypto/bcrypt: cost %d is outside allowed range (%d,%d)", int(ic), int(MinCost), int(MaxCost)) +} + +const ( + majorVersion = '2' + minorVersion = 'a' + maxSaltSize = 16 + maxCryptedHashSize = 23 + encodedSaltSize = 22 + encodedHashSize = 31 + minHashSize = 59 +) + +// magicCipherData is an IV for the 64 Blowfish encryption calls in +// bcrypt(). It's the string "OrpheanBeholderScryDoubt" in big-endian bytes. +var magicCipherData = []byte{ + 0x4f, 0x72, 0x70, 0x68, + 0x65, 0x61, 0x6e, 0x42, + 0x65, 0x68, 0x6f, 0x6c, + 0x64, 0x65, 0x72, 0x53, + 0x63, 0x72, 0x79, 0x44, + 0x6f, 0x75, 0x62, 0x74, +} + +type hashed struct { + hash []byte + salt []byte + cost int // allowed range is MinCost to MaxCost + major byte + minor byte +} + +// GenerateFromPassword returns the bcrypt hash of the password at the given +// cost. If the cost given is less than MinCost, the cost will be set to +// DefaultCost, instead. Use CompareHashAndPassword, as defined in this package, +// to compare the returned hashed password with its cleartext version. +func GenerateFromPassword(password []byte, cost int) ([]byte, error) { + p, err := newFromPassword(password, cost) + if err != nil { + return nil, err + } + return p.Hash(), nil +} + +// CompareHashAndPassword compares a bcrypt hashed password with its possible +// plaintext equivalent. Returns nil on success, or an error on failure. 
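+//
+// A minimal end-to-end sketch (the password literal is illustrative only):
+//
+//	hash, err := bcrypt.GenerateFromPassword([]byte("s3cr3t"), bcrypt.DefaultCost)
+//	if err != nil {
+//		// handle the error
+//	}
+//	// Later, when checking a login attempt:
+//	if err := bcrypt.CompareHashAndPassword(hash, []byte("s3cr3t")); err != nil {
+//		// mismatch (ErrMismatchedHashAndPassword) or a malformed stored hash
+//	}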
+func CompareHashAndPassword(hashedPassword, password []byte) error { + p, err := newFromHash(hashedPassword) + if err != nil { + return err + } + + otherHash, err := bcrypt(password, p.cost, p.salt) + if err != nil { + return err + } + + otherP := &hashed{otherHash, p.salt, p.cost, p.major, p.minor} + if subtle.ConstantTimeCompare(p.Hash(), otherP.Hash()) == 1 { + return nil + } + + return ErrMismatchedHashAndPassword +} + +// Cost returns the hashing cost used to create the given hashed +// password. When, in the future, the hashing cost of a password system needs +// to be increased in order to adjust for greater computational power, this +// function allows one to establish which passwords need to be updated. +func Cost(hashedPassword []byte) (int, error) { + p, err := newFromHash(hashedPassword) + if err != nil { + return 0, err + } + return p.cost, nil +} + +func newFromPassword(password []byte, cost int) (*hashed, error) { + if cost < MinCost { + cost = DefaultCost + } + p := new(hashed) + p.major = majorVersion + p.minor = minorVersion + + err := checkCost(cost) + if err != nil { + return nil, err + } + p.cost = cost + + unencodedSalt := make([]byte, maxSaltSize) + _, err = io.ReadFull(rand.Reader, unencodedSalt) + if err != nil { + return nil, err + } + + p.salt = base64Encode(unencodedSalt) + hash, err := bcrypt(password, p.cost, p.salt) + if err != nil { + return nil, err + } + p.hash = hash + return p, err +} + +func newFromHash(hashedSecret []byte) (*hashed, error) { + if len(hashedSecret) < minHashSize { + return nil, ErrHashTooShort + } + p := new(hashed) + n, err := p.decodeVersion(hashedSecret) + if err != nil { + return nil, err + } + hashedSecret = hashedSecret[n:] + n, err = p.decodeCost(hashedSecret) + if err != nil { + return nil, err + } + hashedSecret = hashedSecret[n:] + + // The "+2" is here because we'll have to append at most 2 '=' to the salt + // when base64 decoding it in expensiveBlowfishSetup(). + p.salt = make([]byte, encodedSaltSize, encodedSaltSize+2) + copy(p.salt, hashedSecret[:encodedSaltSize]) + + hashedSecret = hashedSecret[encodedSaltSize:] + p.hash = make([]byte, len(hashedSecret)) + copy(p.hash, hashedSecret) + + return p, nil +} + +func bcrypt(password []byte, cost int, salt []byte) ([]byte, error) { + cipherData := make([]byte, len(magicCipherData)) + copy(cipherData, magicCipherData) + + c, err := expensiveBlowfishSetup(password, uint32(cost), salt) + if err != nil { + return nil, err + } + + for i := 0; i < 24; i += 8 { + for j := 0; j < 64; j++ { + c.Encrypt(cipherData[i:i+8], cipherData[i:i+8]) + } + } + + // Bug compatibility with C bcrypt implementations. We only encode 23 of + // the 24 bytes encrypted. + hsh := base64Encode(cipherData[:maxCryptedHashSize]) + return hsh, nil +} + +func expensiveBlowfishSetup(key []byte, cost uint32, salt []byte) (*blowfish.Cipher, error) { + csalt, err := base64Decode(salt) + if err != nil { + return nil, err + } + + // Bug compatibility with C bcrypt implementations. They use the trailing + // NULL in the key string during expansion. + // We copy the key to prevent changing the underlying array. 
+ ckey := append(key[:len(key):len(key)], 0) + + c, err := blowfish.NewSaltedCipher(ckey, csalt) + if err != nil { + return nil, err + } + + var i, rounds uint64 + rounds = 1 << cost + for i = 0; i < rounds; i++ { + blowfish.ExpandKey(ckey, c) + blowfish.ExpandKey(csalt, c) + } + + return c, nil +} + +func (p *hashed) Hash() []byte { + arr := make([]byte, 60) + arr[0] = '$' + arr[1] = p.major + n := 2 + if p.minor != 0 { + arr[2] = p.minor + n = 3 + } + arr[n] = '$' + n++ + copy(arr[n:], []byte(fmt.Sprintf("%02d", p.cost))) + n += 2 + arr[n] = '$' + n++ + copy(arr[n:], p.salt) + n += encodedSaltSize + copy(arr[n:], p.hash) + n += encodedHashSize + return arr[:n] +} + +func (p *hashed) decodeVersion(sbytes []byte) (int, error) { + if sbytes[0] != '$' { + return -1, InvalidHashPrefixError(sbytes[0]) + } + if sbytes[1] > majorVersion { + return -1, HashVersionTooNewError(sbytes[1]) + } + p.major = sbytes[1] + n := 3 + if sbytes[2] != '$' { + p.minor = sbytes[2] + n++ + } + return n, nil +} + +// sbytes should begin where decodeVersion left off. +func (p *hashed) decodeCost(sbytes []byte) (int, error) { + cost, err := strconv.Atoi(string(sbytes[0:2])) + if err != nil { + return -1, err + } + err = checkCost(cost) + if err != nil { + return -1, err + } + p.cost = cost + return 3, nil +} + +func (p *hashed) String() string { + return fmt.Sprintf("&{hash: %#v, salt: %#v, cost: %d, major: %c, minor: %c}", string(p.hash), p.salt, p.cost, p.major, p.minor) +} + +func checkCost(cost int) error { + if cost < MinCost || cost > MaxCost { + return InvalidCostError(cost) + } + return nil +} diff --git a/vendor/golang.org/x/crypto/blowfish/block.go b/vendor/golang.org/x/crypto/blowfish/block.go new file mode 100644 index 00000000..9d80f195 --- /dev/null +++ b/vendor/golang.org/x/crypto/blowfish/block.go @@ -0,0 +1,159 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package blowfish + +// getNextWord returns the next big-endian uint32 value from the byte slice +// at the given position in a circular manner, updating the position. +func getNextWord(b []byte, pos *int) uint32 { + var w uint32 + j := *pos + for i := 0; i < 4; i++ { + w = w<<8 | uint32(b[j]) + j++ + if j >= len(b) { + j = 0 + } + } + *pos = j + return w +} + +// ExpandKey performs a key expansion on the given *Cipher. Specifically, it +// performs the Blowfish algorithm's key schedule which sets up the *Cipher's +// pi and substitution tables for calls to Encrypt. This is used, primarily, +// by the bcrypt package to reuse the Blowfish key schedule during its +// set up. It's unlikely that you need to use this directly. +func ExpandKey(key []byte, c *Cipher) { + j := 0 + for i := 0; i < 18; i++ { + // Using inlined getNextWord for performance. 
+ var d uint32 + for k := 0; k < 4; k++ { + d = d<<8 | uint32(key[j]) + j++ + if j >= len(key) { + j = 0 + } + } + c.p[i] ^= d + } + + var l, r uint32 + for i := 0; i < 18; i += 2 { + l, r = encryptBlock(l, r, c) + c.p[i], c.p[i+1] = l, r + } + + for i := 0; i < 256; i += 2 { + l, r = encryptBlock(l, r, c) + c.s0[i], c.s0[i+1] = l, r + } + for i := 0; i < 256; i += 2 { + l, r = encryptBlock(l, r, c) + c.s1[i], c.s1[i+1] = l, r + } + for i := 0; i < 256; i += 2 { + l, r = encryptBlock(l, r, c) + c.s2[i], c.s2[i+1] = l, r + } + for i := 0; i < 256; i += 2 { + l, r = encryptBlock(l, r, c) + c.s3[i], c.s3[i+1] = l, r + } +} + +// This is similar to ExpandKey, but folds the salt during the key +// schedule. While ExpandKey is essentially expandKeyWithSalt with an all-zero +// salt passed in, reusing ExpandKey turns out to be a place of inefficiency +// and specializing it here is useful. +func expandKeyWithSalt(key []byte, salt []byte, c *Cipher) { + j := 0 + for i := 0; i < 18; i++ { + c.p[i] ^= getNextWord(key, &j) + } + + j = 0 + var l, r uint32 + for i := 0; i < 18; i += 2 { + l ^= getNextWord(salt, &j) + r ^= getNextWord(salt, &j) + l, r = encryptBlock(l, r, c) + c.p[i], c.p[i+1] = l, r + } + + for i := 0; i < 256; i += 2 { + l ^= getNextWord(salt, &j) + r ^= getNextWord(salt, &j) + l, r = encryptBlock(l, r, c) + c.s0[i], c.s0[i+1] = l, r + } + + for i := 0; i < 256; i += 2 { + l ^= getNextWord(salt, &j) + r ^= getNextWord(salt, &j) + l, r = encryptBlock(l, r, c) + c.s1[i], c.s1[i+1] = l, r + } + + for i := 0; i < 256; i += 2 { + l ^= getNextWord(salt, &j) + r ^= getNextWord(salt, &j) + l, r = encryptBlock(l, r, c) + c.s2[i], c.s2[i+1] = l, r + } + + for i := 0; i < 256; i += 2 { + l ^= getNextWord(salt, &j) + r ^= getNextWord(salt, &j) + l, r = encryptBlock(l, r, c) + c.s3[i], c.s3[i+1] = l, r + } +} + +func encryptBlock(l, r uint32, c *Cipher) (uint32, uint32) { + xl, xr := l, r + xl ^= c.p[0] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[1] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[2] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[3] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[4] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[5] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[6] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[7] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[8] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[9] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[10] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[11] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[12] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[13] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[14] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[15] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[16] + xr ^= c.p[17] + return xr, xl +} + +func 
decryptBlock(l, r uint32, c *Cipher) (uint32, uint32) { + xl, xr := l, r + xl ^= c.p[17] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[16] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[15] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[14] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[13] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[12] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[11] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[10] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[9] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[8] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[7] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[6] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[5] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[4] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[3] + xr ^= ((c.s0[byte(xl>>24)] + c.s1[byte(xl>>16)]) ^ c.s2[byte(xl>>8)]) + c.s3[byte(xl)] ^ c.p[2] + xl ^= ((c.s0[byte(xr>>24)] + c.s1[byte(xr>>16)]) ^ c.s2[byte(xr>>8)]) + c.s3[byte(xr)] ^ c.p[1] + xr ^= c.p[0] + return xr, xl +} diff --git a/vendor/golang.org/x/crypto/blowfish/cipher.go b/vendor/golang.org/x/crypto/blowfish/cipher.go new file mode 100644 index 00000000..2641dadd --- /dev/null +++ b/vendor/golang.org/x/crypto/blowfish/cipher.go @@ -0,0 +1,91 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package blowfish implements Bruce Schneier's Blowfish encryption algorithm. +package blowfish // import "golang.org/x/crypto/blowfish" + +// The code is a port of Bruce Schneier's C implementation. +// See https://www.schneier.com/blowfish.html. + +import "strconv" + +// The Blowfish block size in bytes. +const BlockSize = 8 + +// A Cipher is an instance of Blowfish encryption using a particular key. +type Cipher struct { + p [18]uint32 + s0, s1, s2, s3 [256]uint32 +} + +type KeySizeError int + +func (k KeySizeError) Error() string { + return "crypto/blowfish: invalid key size " + strconv.Itoa(int(k)) +} + +// NewCipher creates and returns a Cipher. +// The key argument should be the Blowfish key, from 1 to 56 bytes. +func NewCipher(key []byte) (*Cipher, error) { + var result Cipher + if k := len(key); k < 1 || k > 56 { + return nil, KeySizeError(k) + } + initCipher(&result) + ExpandKey(key, &result) + return &result, nil +} + +// NewSaltedCipher creates a returns a Cipher that folds a salt into its key +// schedule. For most purposes, NewCipher, instead of NewSaltedCipher, is +// sufficient and desirable. For bcrypt compatibility, the key can be over 56 +// bytes. 
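+//
+// For the common, unsalted case, a minimal single-block sketch looks like the
+// following (the key and plaintext literals are illustrative only; data longer
+// than one block should go through a mode from crypto/cipher, as noted on
+// Encrypt below):
+//
+//	c, err := blowfish.NewCipher([]byte("illustrative key"))
+//	if err != nil {
+//		// handle the error
+//	}
+//	src := []byte("8 bytes!") // exactly one BlockSize-sized block
+//	dst := make([]byte, blowfish.BlockSize)
+//	c.Encrypt(dst, src)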
+func NewSaltedCipher(key, salt []byte) (*Cipher, error) { + if len(salt) == 0 { + return NewCipher(key) + } + var result Cipher + if k := len(key); k < 1 { + return nil, KeySizeError(k) + } + initCipher(&result) + expandKeyWithSalt(key, salt, &result) + return &result, nil +} + +// BlockSize returns the Blowfish block size, 8 bytes. +// It is necessary to satisfy the Block interface in the +// package "crypto/cipher". +func (c *Cipher) BlockSize() int { return BlockSize } + +// Encrypt encrypts the 8-byte buffer src using the key k +// and stores the result in dst. +// Note that for amounts of data larger than a block, +// it is not safe to just call Encrypt on successive blocks; +// instead, use an encryption mode like CBC (see crypto/cipher/cbc.go). +func (c *Cipher) Encrypt(dst, src []byte) { + l := uint32(src[0])<<24 | uint32(src[1])<<16 | uint32(src[2])<<8 | uint32(src[3]) + r := uint32(src[4])<<24 | uint32(src[5])<<16 | uint32(src[6])<<8 | uint32(src[7]) + l, r = encryptBlock(l, r, c) + dst[0], dst[1], dst[2], dst[3] = byte(l>>24), byte(l>>16), byte(l>>8), byte(l) + dst[4], dst[5], dst[6], dst[7] = byte(r>>24), byte(r>>16), byte(r>>8), byte(r) +} + +// Decrypt decrypts the 8-byte buffer src using the key k +// and stores the result in dst. +func (c *Cipher) Decrypt(dst, src []byte) { + l := uint32(src[0])<<24 | uint32(src[1])<<16 | uint32(src[2])<<8 | uint32(src[3]) + r := uint32(src[4])<<24 | uint32(src[5])<<16 | uint32(src[6])<<8 | uint32(src[7]) + l, r = decryptBlock(l, r, c) + dst[0], dst[1], dst[2], dst[3] = byte(l>>24), byte(l>>16), byte(l>>8), byte(l) + dst[4], dst[5], dst[6], dst[7] = byte(r>>24), byte(r>>16), byte(r>>8), byte(r) +} + +func initCipher(c *Cipher) { + copy(c.p[0:], p[0:]) + copy(c.s0[0:], s0[0:]) + copy(c.s1[0:], s1[0:]) + copy(c.s2[0:], s2[0:]) + copy(c.s3[0:], s3[0:]) +} diff --git a/vendor/golang.org/x/crypto/blowfish/const.go b/vendor/golang.org/x/crypto/blowfish/const.go new file mode 100644 index 00000000..d0407759 --- /dev/null +++ b/vendor/golang.org/x/crypto/blowfish/const.go @@ -0,0 +1,199 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// The startup permutation array and substitution boxes. +// They are the hexadecimal digits of PI; see: +// https://www.schneier.com/code/constants.txt. 
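+// For instance, the first four entries of the p array below, 0x243f6a88,
+// 0x85a308d3, 0x13198a2e and 0x03707344, are the first 128 fractional bits of
+// pi, whose hexadecimal expansion begins 3.243f6a8885a308d313198a2e03707344...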
+ +package blowfish + +var s0 = [256]uint32{ + 0xd1310ba6, 0x98dfb5ac, 0x2ffd72db, 0xd01adfb7, 0xb8e1afed, 0x6a267e96, + 0xba7c9045, 0xf12c7f99, 0x24a19947, 0xb3916cf7, 0x0801f2e2, 0x858efc16, + 0x636920d8, 0x71574e69, 0xa458fea3, 0xf4933d7e, 0x0d95748f, 0x728eb658, + 0x718bcd58, 0x82154aee, 0x7b54a41d, 0xc25a59b5, 0x9c30d539, 0x2af26013, + 0xc5d1b023, 0x286085f0, 0xca417918, 0xb8db38ef, 0x8e79dcb0, 0x603a180e, + 0x6c9e0e8b, 0xb01e8a3e, 0xd71577c1, 0xbd314b27, 0x78af2fda, 0x55605c60, + 0xe65525f3, 0xaa55ab94, 0x57489862, 0x63e81440, 0x55ca396a, 0x2aab10b6, + 0xb4cc5c34, 0x1141e8ce, 0xa15486af, 0x7c72e993, 0xb3ee1411, 0x636fbc2a, + 0x2ba9c55d, 0x741831f6, 0xce5c3e16, 0x9b87931e, 0xafd6ba33, 0x6c24cf5c, + 0x7a325381, 0x28958677, 0x3b8f4898, 0x6b4bb9af, 0xc4bfe81b, 0x66282193, + 0x61d809cc, 0xfb21a991, 0x487cac60, 0x5dec8032, 0xef845d5d, 0xe98575b1, + 0xdc262302, 0xeb651b88, 0x23893e81, 0xd396acc5, 0x0f6d6ff3, 0x83f44239, + 0x2e0b4482, 0xa4842004, 0x69c8f04a, 0x9e1f9b5e, 0x21c66842, 0xf6e96c9a, + 0x670c9c61, 0xabd388f0, 0x6a51a0d2, 0xd8542f68, 0x960fa728, 0xab5133a3, + 0x6eef0b6c, 0x137a3be4, 0xba3bf050, 0x7efb2a98, 0xa1f1651d, 0x39af0176, + 0x66ca593e, 0x82430e88, 0x8cee8619, 0x456f9fb4, 0x7d84a5c3, 0x3b8b5ebe, + 0xe06f75d8, 0x85c12073, 0x401a449f, 0x56c16aa6, 0x4ed3aa62, 0x363f7706, + 0x1bfedf72, 0x429b023d, 0x37d0d724, 0xd00a1248, 0xdb0fead3, 0x49f1c09b, + 0x075372c9, 0x80991b7b, 0x25d479d8, 0xf6e8def7, 0xe3fe501a, 0xb6794c3b, + 0x976ce0bd, 0x04c006ba, 0xc1a94fb6, 0x409f60c4, 0x5e5c9ec2, 0x196a2463, + 0x68fb6faf, 0x3e6c53b5, 0x1339b2eb, 0x3b52ec6f, 0x6dfc511f, 0x9b30952c, + 0xcc814544, 0xaf5ebd09, 0xbee3d004, 0xde334afd, 0x660f2807, 0x192e4bb3, + 0xc0cba857, 0x45c8740f, 0xd20b5f39, 0xb9d3fbdb, 0x5579c0bd, 0x1a60320a, + 0xd6a100c6, 0x402c7279, 0x679f25fe, 0xfb1fa3cc, 0x8ea5e9f8, 0xdb3222f8, + 0x3c7516df, 0xfd616b15, 0x2f501ec8, 0xad0552ab, 0x323db5fa, 0xfd238760, + 0x53317b48, 0x3e00df82, 0x9e5c57bb, 0xca6f8ca0, 0x1a87562e, 0xdf1769db, + 0xd542a8f6, 0x287effc3, 0xac6732c6, 0x8c4f5573, 0x695b27b0, 0xbbca58c8, + 0xe1ffa35d, 0xb8f011a0, 0x10fa3d98, 0xfd2183b8, 0x4afcb56c, 0x2dd1d35b, + 0x9a53e479, 0xb6f84565, 0xd28e49bc, 0x4bfb9790, 0xe1ddf2da, 0xa4cb7e33, + 0x62fb1341, 0xcee4c6e8, 0xef20cada, 0x36774c01, 0xd07e9efe, 0x2bf11fb4, + 0x95dbda4d, 0xae909198, 0xeaad8e71, 0x6b93d5a0, 0xd08ed1d0, 0xafc725e0, + 0x8e3c5b2f, 0x8e7594b7, 0x8ff6e2fb, 0xf2122b64, 0x8888b812, 0x900df01c, + 0x4fad5ea0, 0x688fc31c, 0xd1cff191, 0xb3a8c1ad, 0x2f2f2218, 0xbe0e1777, + 0xea752dfe, 0x8b021fa1, 0xe5a0cc0f, 0xb56f74e8, 0x18acf3d6, 0xce89e299, + 0xb4a84fe0, 0xfd13e0b7, 0x7cc43b81, 0xd2ada8d9, 0x165fa266, 0x80957705, + 0x93cc7314, 0x211a1477, 0xe6ad2065, 0x77b5fa86, 0xc75442f5, 0xfb9d35cf, + 0xebcdaf0c, 0x7b3e89a0, 0xd6411bd3, 0xae1e7e49, 0x00250e2d, 0x2071b35e, + 0x226800bb, 0x57b8e0af, 0x2464369b, 0xf009b91e, 0x5563911d, 0x59dfa6aa, + 0x78c14389, 0xd95a537f, 0x207d5ba2, 0x02e5b9c5, 0x83260376, 0x6295cfa9, + 0x11c81968, 0x4e734a41, 0xb3472dca, 0x7b14a94a, 0x1b510052, 0x9a532915, + 0xd60f573f, 0xbc9bc6e4, 0x2b60a476, 0x81e67400, 0x08ba6fb5, 0x571be91f, + 0xf296ec6b, 0x2a0dd915, 0xb6636521, 0xe7b9f9b6, 0xff34052e, 0xc5855664, + 0x53b02d5d, 0xa99f8fa1, 0x08ba4799, 0x6e85076a, +} + +var s1 = [256]uint32{ + 0x4b7a70e9, 0xb5b32944, 0xdb75092e, 0xc4192623, 0xad6ea6b0, 0x49a7df7d, + 0x9cee60b8, 0x8fedb266, 0xecaa8c71, 0x699a17ff, 0x5664526c, 0xc2b19ee1, + 0x193602a5, 0x75094c29, 0xa0591340, 0xe4183a3e, 0x3f54989a, 0x5b429d65, + 0x6b8fe4d6, 0x99f73fd6, 0xa1d29c07, 0xefe830f5, 0x4d2d38e6, 0xf0255dc1, + 0x4cdd2086, 0x8470eb26, 
0x6382e9c6, 0x021ecc5e, 0x09686b3f, 0x3ebaefc9, + 0x3c971814, 0x6b6a70a1, 0x687f3584, 0x52a0e286, 0xb79c5305, 0xaa500737, + 0x3e07841c, 0x7fdeae5c, 0x8e7d44ec, 0x5716f2b8, 0xb03ada37, 0xf0500c0d, + 0xf01c1f04, 0x0200b3ff, 0xae0cf51a, 0x3cb574b2, 0x25837a58, 0xdc0921bd, + 0xd19113f9, 0x7ca92ff6, 0x94324773, 0x22f54701, 0x3ae5e581, 0x37c2dadc, + 0xc8b57634, 0x9af3dda7, 0xa9446146, 0x0fd0030e, 0xecc8c73e, 0xa4751e41, + 0xe238cd99, 0x3bea0e2f, 0x3280bba1, 0x183eb331, 0x4e548b38, 0x4f6db908, + 0x6f420d03, 0xf60a04bf, 0x2cb81290, 0x24977c79, 0x5679b072, 0xbcaf89af, + 0xde9a771f, 0xd9930810, 0xb38bae12, 0xdccf3f2e, 0x5512721f, 0x2e6b7124, + 0x501adde6, 0x9f84cd87, 0x7a584718, 0x7408da17, 0xbc9f9abc, 0xe94b7d8c, + 0xec7aec3a, 0xdb851dfa, 0x63094366, 0xc464c3d2, 0xef1c1847, 0x3215d908, + 0xdd433b37, 0x24c2ba16, 0x12a14d43, 0x2a65c451, 0x50940002, 0x133ae4dd, + 0x71dff89e, 0x10314e55, 0x81ac77d6, 0x5f11199b, 0x043556f1, 0xd7a3c76b, + 0x3c11183b, 0x5924a509, 0xf28fe6ed, 0x97f1fbfa, 0x9ebabf2c, 0x1e153c6e, + 0x86e34570, 0xeae96fb1, 0x860e5e0a, 0x5a3e2ab3, 0x771fe71c, 0x4e3d06fa, + 0x2965dcb9, 0x99e71d0f, 0x803e89d6, 0x5266c825, 0x2e4cc978, 0x9c10b36a, + 0xc6150eba, 0x94e2ea78, 0xa5fc3c53, 0x1e0a2df4, 0xf2f74ea7, 0x361d2b3d, + 0x1939260f, 0x19c27960, 0x5223a708, 0xf71312b6, 0xebadfe6e, 0xeac31f66, + 0xe3bc4595, 0xa67bc883, 0xb17f37d1, 0x018cff28, 0xc332ddef, 0xbe6c5aa5, + 0x65582185, 0x68ab9802, 0xeecea50f, 0xdb2f953b, 0x2aef7dad, 0x5b6e2f84, + 0x1521b628, 0x29076170, 0xecdd4775, 0x619f1510, 0x13cca830, 0xeb61bd96, + 0x0334fe1e, 0xaa0363cf, 0xb5735c90, 0x4c70a239, 0xd59e9e0b, 0xcbaade14, + 0xeecc86bc, 0x60622ca7, 0x9cab5cab, 0xb2f3846e, 0x648b1eaf, 0x19bdf0ca, + 0xa02369b9, 0x655abb50, 0x40685a32, 0x3c2ab4b3, 0x319ee9d5, 0xc021b8f7, + 0x9b540b19, 0x875fa099, 0x95f7997e, 0x623d7da8, 0xf837889a, 0x97e32d77, + 0x11ed935f, 0x16681281, 0x0e358829, 0xc7e61fd6, 0x96dedfa1, 0x7858ba99, + 0x57f584a5, 0x1b227263, 0x9b83c3ff, 0x1ac24696, 0xcdb30aeb, 0x532e3054, + 0x8fd948e4, 0x6dbc3128, 0x58ebf2ef, 0x34c6ffea, 0xfe28ed61, 0xee7c3c73, + 0x5d4a14d9, 0xe864b7e3, 0x42105d14, 0x203e13e0, 0x45eee2b6, 0xa3aaabea, + 0xdb6c4f15, 0xfacb4fd0, 0xc742f442, 0xef6abbb5, 0x654f3b1d, 0x41cd2105, + 0xd81e799e, 0x86854dc7, 0xe44b476a, 0x3d816250, 0xcf62a1f2, 0x5b8d2646, + 0xfc8883a0, 0xc1c7b6a3, 0x7f1524c3, 0x69cb7492, 0x47848a0b, 0x5692b285, + 0x095bbf00, 0xad19489d, 0x1462b174, 0x23820e00, 0x58428d2a, 0x0c55f5ea, + 0x1dadf43e, 0x233f7061, 0x3372f092, 0x8d937e41, 0xd65fecf1, 0x6c223bdb, + 0x7cde3759, 0xcbee7460, 0x4085f2a7, 0xce77326e, 0xa6078084, 0x19f8509e, + 0xe8efd855, 0x61d99735, 0xa969a7aa, 0xc50c06c2, 0x5a04abfc, 0x800bcadc, + 0x9e447a2e, 0xc3453484, 0xfdd56705, 0x0e1e9ec9, 0xdb73dbd3, 0x105588cd, + 0x675fda79, 0xe3674340, 0xc5c43465, 0x713e38d8, 0x3d28f89e, 0xf16dff20, + 0x153e21e7, 0x8fb03d4a, 0xe6e39f2b, 0xdb83adf7, +} + +var s2 = [256]uint32{ + 0xe93d5a68, 0x948140f7, 0xf64c261c, 0x94692934, 0x411520f7, 0x7602d4f7, + 0xbcf46b2e, 0xd4a20068, 0xd4082471, 0x3320f46a, 0x43b7d4b7, 0x500061af, + 0x1e39f62e, 0x97244546, 0x14214f74, 0xbf8b8840, 0x4d95fc1d, 0x96b591af, + 0x70f4ddd3, 0x66a02f45, 0xbfbc09ec, 0x03bd9785, 0x7fac6dd0, 0x31cb8504, + 0x96eb27b3, 0x55fd3941, 0xda2547e6, 0xabca0a9a, 0x28507825, 0x530429f4, + 0x0a2c86da, 0xe9b66dfb, 0x68dc1462, 0xd7486900, 0x680ec0a4, 0x27a18dee, + 0x4f3ffea2, 0xe887ad8c, 0xb58ce006, 0x7af4d6b6, 0xaace1e7c, 0xd3375fec, + 0xce78a399, 0x406b2a42, 0x20fe9e35, 0xd9f385b9, 0xee39d7ab, 0x3b124e8b, + 0x1dc9faf7, 0x4b6d1856, 0x26a36631, 0xeae397b2, 0x3a6efa74, 0xdd5b4332, + 0x6841e7f7, 
0xca7820fb, 0xfb0af54e, 0xd8feb397, 0x454056ac, 0xba489527, + 0x55533a3a, 0x20838d87, 0xfe6ba9b7, 0xd096954b, 0x55a867bc, 0xa1159a58, + 0xcca92963, 0x99e1db33, 0xa62a4a56, 0x3f3125f9, 0x5ef47e1c, 0x9029317c, + 0xfdf8e802, 0x04272f70, 0x80bb155c, 0x05282ce3, 0x95c11548, 0xe4c66d22, + 0x48c1133f, 0xc70f86dc, 0x07f9c9ee, 0x41041f0f, 0x404779a4, 0x5d886e17, + 0x325f51eb, 0xd59bc0d1, 0xf2bcc18f, 0x41113564, 0x257b7834, 0x602a9c60, + 0xdff8e8a3, 0x1f636c1b, 0x0e12b4c2, 0x02e1329e, 0xaf664fd1, 0xcad18115, + 0x6b2395e0, 0x333e92e1, 0x3b240b62, 0xeebeb922, 0x85b2a20e, 0xe6ba0d99, + 0xde720c8c, 0x2da2f728, 0xd0127845, 0x95b794fd, 0x647d0862, 0xe7ccf5f0, + 0x5449a36f, 0x877d48fa, 0xc39dfd27, 0xf33e8d1e, 0x0a476341, 0x992eff74, + 0x3a6f6eab, 0xf4f8fd37, 0xa812dc60, 0xa1ebddf8, 0x991be14c, 0xdb6e6b0d, + 0xc67b5510, 0x6d672c37, 0x2765d43b, 0xdcd0e804, 0xf1290dc7, 0xcc00ffa3, + 0xb5390f92, 0x690fed0b, 0x667b9ffb, 0xcedb7d9c, 0xa091cf0b, 0xd9155ea3, + 0xbb132f88, 0x515bad24, 0x7b9479bf, 0x763bd6eb, 0x37392eb3, 0xcc115979, + 0x8026e297, 0xf42e312d, 0x6842ada7, 0xc66a2b3b, 0x12754ccc, 0x782ef11c, + 0x6a124237, 0xb79251e7, 0x06a1bbe6, 0x4bfb6350, 0x1a6b1018, 0x11caedfa, + 0x3d25bdd8, 0xe2e1c3c9, 0x44421659, 0x0a121386, 0xd90cec6e, 0xd5abea2a, + 0x64af674e, 0xda86a85f, 0xbebfe988, 0x64e4c3fe, 0x9dbc8057, 0xf0f7c086, + 0x60787bf8, 0x6003604d, 0xd1fd8346, 0xf6381fb0, 0x7745ae04, 0xd736fccc, + 0x83426b33, 0xf01eab71, 0xb0804187, 0x3c005e5f, 0x77a057be, 0xbde8ae24, + 0x55464299, 0xbf582e61, 0x4e58f48f, 0xf2ddfda2, 0xf474ef38, 0x8789bdc2, + 0x5366f9c3, 0xc8b38e74, 0xb475f255, 0x46fcd9b9, 0x7aeb2661, 0x8b1ddf84, + 0x846a0e79, 0x915f95e2, 0x466e598e, 0x20b45770, 0x8cd55591, 0xc902de4c, + 0xb90bace1, 0xbb8205d0, 0x11a86248, 0x7574a99e, 0xb77f19b6, 0xe0a9dc09, + 0x662d09a1, 0xc4324633, 0xe85a1f02, 0x09f0be8c, 0x4a99a025, 0x1d6efe10, + 0x1ab93d1d, 0x0ba5a4df, 0xa186f20f, 0x2868f169, 0xdcb7da83, 0x573906fe, + 0xa1e2ce9b, 0x4fcd7f52, 0x50115e01, 0xa70683fa, 0xa002b5c4, 0x0de6d027, + 0x9af88c27, 0x773f8641, 0xc3604c06, 0x61a806b5, 0xf0177a28, 0xc0f586e0, + 0x006058aa, 0x30dc7d62, 0x11e69ed7, 0x2338ea63, 0x53c2dd94, 0xc2c21634, + 0xbbcbee56, 0x90bcb6de, 0xebfc7da1, 0xce591d76, 0x6f05e409, 0x4b7c0188, + 0x39720a3d, 0x7c927c24, 0x86e3725f, 0x724d9db9, 0x1ac15bb4, 0xd39eb8fc, + 0xed545578, 0x08fca5b5, 0xd83d7cd3, 0x4dad0fc4, 0x1e50ef5e, 0xb161e6f8, + 0xa28514d9, 0x6c51133c, 0x6fd5c7e7, 0x56e14ec4, 0x362abfce, 0xddc6c837, + 0xd79a3234, 0x92638212, 0x670efa8e, 0x406000e0, +} + +var s3 = [256]uint32{ + 0x3a39ce37, 0xd3faf5cf, 0xabc27737, 0x5ac52d1b, 0x5cb0679e, 0x4fa33742, + 0xd3822740, 0x99bc9bbe, 0xd5118e9d, 0xbf0f7315, 0xd62d1c7e, 0xc700c47b, + 0xb78c1b6b, 0x21a19045, 0xb26eb1be, 0x6a366eb4, 0x5748ab2f, 0xbc946e79, + 0xc6a376d2, 0x6549c2c8, 0x530ff8ee, 0x468dde7d, 0xd5730a1d, 0x4cd04dc6, + 0x2939bbdb, 0xa9ba4650, 0xac9526e8, 0xbe5ee304, 0xa1fad5f0, 0x6a2d519a, + 0x63ef8ce2, 0x9a86ee22, 0xc089c2b8, 0x43242ef6, 0xa51e03aa, 0x9cf2d0a4, + 0x83c061ba, 0x9be96a4d, 0x8fe51550, 0xba645bd6, 0x2826a2f9, 0xa73a3ae1, + 0x4ba99586, 0xef5562e9, 0xc72fefd3, 0xf752f7da, 0x3f046f69, 0x77fa0a59, + 0x80e4a915, 0x87b08601, 0x9b09e6ad, 0x3b3ee593, 0xe990fd5a, 0x9e34d797, + 0x2cf0b7d9, 0x022b8b51, 0x96d5ac3a, 0x017da67d, 0xd1cf3ed6, 0x7c7d2d28, + 0x1f9f25cf, 0xadf2b89b, 0x5ad6b472, 0x5a88f54c, 0xe029ac71, 0xe019a5e6, + 0x47b0acfd, 0xed93fa9b, 0xe8d3c48d, 0x283b57cc, 0xf8d56629, 0x79132e28, + 0x785f0191, 0xed756055, 0xf7960e44, 0xe3d35e8c, 0x15056dd4, 0x88f46dba, + 0x03a16125, 0x0564f0bd, 0xc3eb9e15, 0x3c9057a2, 0x97271aec, 0xa93a072a, + 
0x1b3f6d9b, 0x1e6321f5, 0xf59c66fb, 0x26dcf319, 0x7533d928, 0xb155fdf5, + 0x03563482, 0x8aba3cbb, 0x28517711, 0xc20ad9f8, 0xabcc5167, 0xccad925f, + 0x4de81751, 0x3830dc8e, 0x379d5862, 0x9320f991, 0xea7a90c2, 0xfb3e7bce, + 0x5121ce64, 0x774fbe32, 0xa8b6e37e, 0xc3293d46, 0x48de5369, 0x6413e680, + 0xa2ae0810, 0xdd6db224, 0x69852dfd, 0x09072166, 0xb39a460a, 0x6445c0dd, + 0x586cdecf, 0x1c20c8ae, 0x5bbef7dd, 0x1b588d40, 0xccd2017f, 0x6bb4e3bb, + 0xdda26a7e, 0x3a59ff45, 0x3e350a44, 0xbcb4cdd5, 0x72eacea8, 0xfa6484bb, + 0x8d6612ae, 0xbf3c6f47, 0xd29be463, 0x542f5d9e, 0xaec2771b, 0xf64e6370, + 0x740e0d8d, 0xe75b1357, 0xf8721671, 0xaf537d5d, 0x4040cb08, 0x4eb4e2cc, + 0x34d2466a, 0x0115af84, 0xe1b00428, 0x95983a1d, 0x06b89fb4, 0xce6ea048, + 0x6f3f3b82, 0x3520ab82, 0x011a1d4b, 0x277227f8, 0x611560b1, 0xe7933fdc, + 0xbb3a792b, 0x344525bd, 0xa08839e1, 0x51ce794b, 0x2f32c9b7, 0xa01fbac9, + 0xe01cc87e, 0xbcc7d1f6, 0xcf0111c3, 0xa1e8aac7, 0x1a908749, 0xd44fbd9a, + 0xd0dadecb, 0xd50ada38, 0x0339c32a, 0xc6913667, 0x8df9317c, 0xe0b12b4f, + 0xf79e59b7, 0x43f5bb3a, 0xf2d519ff, 0x27d9459c, 0xbf97222c, 0x15e6fc2a, + 0x0f91fc71, 0x9b941525, 0xfae59361, 0xceb69ceb, 0xc2a86459, 0x12baa8d1, + 0xb6c1075e, 0xe3056a0c, 0x10d25065, 0xcb03a442, 0xe0ec6e0e, 0x1698db3b, + 0x4c98a0be, 0x3278e964, 0x9f1f9532, 0xe0d392df, 0xd3a0342b, 0x8971f21e, + 0x1b0a7441, 0x4ba3348c, 0xc5be7120, 0xc37632d8, 0xdf359f8d, 0x9b992f2e, + 0xe60b6f47, 0x0fe3f11d, 0xe54cda54, 0x1edad891, 0xce6279cf, 0xcd3e7e6f, + 0x1618b166, 0xfd2c1d05, 0x848fd2c5, 0xf6fb2299, 0xf523f357, 0xa6327623, + 0x93a83531, 0x56cccd02, 0xacf08162, 0x5a75ebb5, 0x6e163697, 0x88d273cc, + 0xde966292, 0x81b949d0, 0x4c50901b, 0x71c65614, 0xe6c6c7bd, 0x327a140a, + 0x45e1d006, 0xc3f27b9a, 0xc9aa53fd, 0x62a80f00, 0xbb25bfe2, 0x35bdd2f6, + 0x71126905, 0xb2040222, 0xb6cbcf7c, 0xcd769c2b, 0x53113ec0, 0x1640e3d3, + 0x38abbd60, 0x2547adf0, 0xba38209c, 0xf746ce76, 0x77afa1c5, 0x20756060, + 0x85cbfe4e, 0x8ae88dd8, 0x7aaaf9b0, 0x4cf9aa7e, 0x1948c25c, 0x02fb8a8c, + 0x01c36ae4, 0xd6ebe1f9, 0x90d4f869, 0xa65cdea0, 0x3f09252d, 0xc208e69f, + 0xb74e6132, 0xce77e25b, 0x578fdfe3, 0x3ac372e6, +} + +var p = [18]uint32{ + 0x243f6a88, 0x85a308d3, 0x13198a2e, 0x03707344, 0xa4093822, 0x299f31d0, + 0x082efa98, 0xec4e6c89, 0x452821e6, 0x38d01377, 0xbe5466cf, 0x34e90c6c, + 0xc0ac29b7, 0xc97c50dd, 0x3f84d5b5, 0xb5470917, 0x9216d5d9, 0x8979fb1b, +} diff --git a/vendor/golang.org/x/net/AUTHORS b/vendor/golang.org/x/net/AUTHORS new file mode 100644 index 00000000..15167cd7 --- /dev/null +++ b/vendor/golang.org/x/net/AUTHORS @@ -0,0 +1,3 @@ +# This source code refers to The Go Authors for copyright purposes. +# The master list of authors is in the main Go distribution, +# visible at http://tip.golang.org/AUTHORS. diff --git a/vendor/golang.org/x/net/CONTRIBUTORS b/vendor/golang.org/x/net/CONTRIBUTORS new file mode 100644 index 00000000..1c4577e9 --- /dev/null +++ b/vendor/golang.org/x/net/CONTRIBUTORS @@ -0,0 +1,3 @@ +# This source code was written by the Go contributors. +# The master list of contributors is in the main Go distribution, +# visible at http://tip.golang.org/CONTRIBUTORS. diff --git a/vendor/golang.org/x/net/LICENSE b/vendor/golang.org/x/net/LICENSE new file mode 100644 index 00000000..6a66aea5 --- /dev/null +++ b/vendor/golang.org/x/net/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/net/PATENTS b/vendor/golang.org/x/net/PATENTS new file mode 100644 index 00000000..73309904 --- /dev/null +++ b/vendor/golang.org/x/net/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/net/context/context.go b/vendor/golang.org/x/net/context/context.go new file mode 100644 index 00000000..a3c021d3 --- /dev/null +++ b/vendor/golang.org/x/net/context/context.go @@ -0,0 +1,56 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package context defines the Context type, which carries deadlines, +// cancelation signals, and other request-scoped values across API boundaries +// and between processes. 
+// As of Go 1.7 this package is available in the standard library under the +// name context. https://golang.org/pkg/context. +// +// Incoming requests to a server should create a Context, and outgoing calls to +// servers should accept a Context. The chain of function calls between must +// propagate the Context, optionally replacing it with a modified copy created +// using WithDeadline, WithTimeout, WithCancel, or WithValue. +// +// Programs that use Contexts should follow these rules to keep interfaces +// consistent across packages and enable static analysis tools to check context +// propagation: +// +// Do not store Contexts inside a struct type; instead, pass a Context +// explicitly to each function that needs it. The Context should be the first +// parameter, typically named ctx: +// +// func DoSomething(ctx context.Context, arg Arg) error { +// // ... use ctx ... +// } +// +// Do not pass a nil Context, even if a function permits it. Pass context.TODO +// if you are unsure about which Context to use. +// +// Use context Values only for request-scoped data that transits processes and +// APIs, not for passing optional parameters to functions. +// +// The same Context may be passed to functions running in different goroutines; +// Contexts are safe for simultaneous use by multiple goroutines. +// +// See http://blog.golang.org/context for example code for a server that uses +// Contexts. +package context // import "golang.org/x/net/context" + +// Background returns a non-nil, empty Context. It is never canceled, has no +// values, and has no deadline. It is typically used by the main function, +// initialization, and tests, and as the top-level Context for incoming +// requests. +func Background() Context { + return background +} + +// TODO returns a non-nil, empty Context. Code should use context.TODO when +// it's unclear which Context to use or it is not yet available (because the +// surrounding function has not yet been extended to accept a Context +// parameter). TODO is recognized by static analysis tools that determine +// whether Contexts are propagated correctly in a program. +func TODO() Context { + return todo +} diff --git a/vendor/golang.org/x/net/context/go17.go b/vendor/golang.org/x/net/context/go17.go new file mode 100644 index 00000000..d20f52b7 --- /dev/null +++ b/vendor/golang.org/x/net/context/go17.go @@ -0,0 +1,72 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.7 + +package context + +import ( + "context" // standard library's context, as of Go 1.7 + "time" +) + +var ( + todo = context.TODO() + background = context.Background() +) + +// Canceled is the error returned by Context.Err when the context is canceled. +var Canceled = context.Canceled + +// DeadlineExceeded is the error returned by Context.Err when the context's +// deadline passes. +var DeadlineExceeded = context.DeadlineExceeded + +// WithCancel returns a copy of parent with a new Done channel. The returned +// context's Done channel is closed when the returned cancel function is called +// or when the parent context's Done channel is closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. 
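+//
+// A minimal usage sketch (doWork is a hypothetical caller-defined function that
+// returns once ctx.Done() is closed):
+//
+//	ctx, cancel := context.WithCancel(context.Background())
+//	defer cancel()
+//	go doWork(ctx)
+//	// ... when the work is no longer needed, cancel() (or the deferred call
+//	// above) unblocks ctx.Done() and doWork returns.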
+func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { + ctx, f := context.WithCancel(parent) + return ctx, CancelFunc(f) +} + +// WithDeadline returns a copy of the parent context with the deadline adjusted +// to be no later than d. If the parent's deadline is already earlier than d, +// WithDeadline(parent, d) is semantically equivalent to parent. The returned +// context's Done channel is closed when the deadline expires, when the returned +// cancel function is called, or when the parent context's Done channel is +// closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. +func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { + ctx, f := context.WithDeadline(parent, deadline) + return ctx, CancelFunc(f) +} + +// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete: +// +// func slowOperationWithTimeout(ctx context.Context) (Result, error) { +// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) +// defer cancel() // releases resources if slowOperation completes before timeout elapses +// return slowOperation(ctx) +// } +func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { + return WithDeadline(parent, time.Now().Add(timeout)) +} + +// WithValue returns a copy of parent in which the value associated with key is +// val. +// +// Use context Values only for request-scoped data that transits processes and +// APIs, not for passing optional parameters to functions. +func WithValue(parent Context, key interface{}, val interface{}) Context { + return context.WithValue(parent, key, val) +} diff --git a/vendor/golang.org/x/net/context/go19.go b/vendor/golang.org/x/net/context/go19.go new file mode 100644 index 00000000..d88bd1db --- /dev/null +++ b/vendor/golang.org/x/net/context/go19.go @@ -0,0 +1,20 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.9 + +package context + +import "context" // standard library's context, as of Go 1.7 + +// A Context carries a deadline, a cancelation signal, and other values across +// API boundaries. +// +// Context's methods may be called by multiple goroutines simultaneously. +type Context = context.Context + +// A CancelFunc tells an operation to abandon its work. +// A CancelFunc does not wait for the work to stop. +// After the first call, subsequent calls to a CancelFunc do nothing. +type CancelFunc = context.CancelFunc diff --git a/vendor/golang.org/x/net/context/pre_go17.go b/vendor/golang.org/x/net/context/pre_go17.go new file mode 100644 index 00000000..0f35592d --- /dev/null +++ b/vendor/golang.org/x/net/context/pre_go17.go @@ -0,0 +1,300 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.7 + +package context + +import ( + "errors" + "fmt" + "sync" + "time" +) + +// An emptyCtx is never canceled, has no values, and has no deadline. It is not +// struct{}, since vars of this type must have distinct addresses. 
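+// (Pointers to two distinct zero-size variables may compare equal in Go, which
+// would make background and todo indistinguishable in the String method below.)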
+type emptyCtx int + +func (*emptyCtx) Deadline() (deadline time.Time, ok bool) { + return +} + +func (*emptyCtx) Done() <-chan struct{} { + return nil +} + +func (*emptyCtx) Err() error { + return nil +} + +func (*emptyCtx) Value(key interface{}) interface{} { + return nil +} + +func (e *emptyCtx) String() string { + switch e { + case background: + return "context.Background" + case todo: + return "context.TODO" + } + return "unknown empty Context" +} + +var ( + background = new(emptyCtx) + todo = new(emptyCtx) +) + +// Canceled is the error returned by Context.Err when the context is canceled. +var Canceled = errors.New("context canceled") + +// DeadlineExceeded is the error returned by Context.Err when the context's +// deadline passes. +var DeadlineExceeded = errors.New("context deadline exceeded") + +// WithCancel returns a copy of parent with a new Done channel. The returned +// context's Done channel is closed when the returned cancel function is called +// or when the parent context's Done channel is closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. +func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { + c := newCancelCtx(parent) + propagateCancel(parent, c) + return c, func() { c.cancel(true, Canceled) } +} + +// newCancelCtx returns an initialized cancelCtx. +func newCancelCtx(parent Context) *cancelCtx { + return &cancelCtx{ + Context: parent, + done: make(chan struct{}), + } +} + +// propagateCancel arranges for child to be canceled when parent is. +func propagateCancel(parent Context, child canceler) { + if parent.Done() == nil { + return // parent is never canceled + } + if p, ok := parentCancelCtx(parent); ok { + p.mu.Lock() + if p.err != nil { + // parent has already been canceled + child.cancel(false, p.err) + } else { + if p.children == nil { + p.children = make(map[canceler]bool) + } + p.children[child] = true + } + p.mu.Unlock() + } else { + go func() { + select { + case <-parent.Done(): + child.cancel(false, parent.Err()) + case <-child.Done(): + } + }() + } +} + +// parentCancelCtx follows a chain of parent references until it finds a +// *cancelCtx. This function understands how each of the concrete types in this +// package represents its parent. +func parentCancelCtx(parent Context) (*cancelCtx, bool) { + for { + switch c := parent.(type) { + case *cancelCtx: + return c, true + case *timerCtx: + return c.cancelCtx, true + case *valueCtx: + parent = c.Context + default: + return nil, false + } + } +} + +// removeChild removes a context from its parent. +func removeChild(parent Context, child canceler) { + p, ok := parentCancelCtx(parent) + if !ok { + return + } + p.mu.Lock() + if p.children != nil { + delete(p.children, child) + } + p.mu.Unlock() +} + +// A canceler is a context type that can be canceled directly. The +// implementations are *cancelCtx and *timerCtx. +type canceler interface { + cancel(removeFromParent bool, err error) + Done() <-chan struct{} +} + +// A cancelCtx can be canceled. When canceled, it also cancels any children +// that implement canceler. +type cancelCtx struct { + Context + + done chan struct{} // closed by the first cancel call. 
+ + mu sync.Mutex + children map[canceler]bool // set to nil by the first cancel call + err error // set to non-nil by the first cancel call +} + +func (c *cancelCtx) Done() <-chan struct{} { + return c.done +} + +func (c *cancelCtx) Err() error { + c.mu.Lock() + defer c.mu.Unlock() + return c.err +} + +func (c *cancelCtx) String() string { + return fmt.Sprintf("%v.WithCancel", c.Context) +} + +// cancel closes c.done, cancels each of c's children, and, if +// removeFromParent is true, removes c from its parent's children. +func (c *cancelCtx) cancel(removeFromParent bool, err error) { + if err == nil { + panic("context: internal error: missing cancel error") + } + c.mu.Lock() + if c.err != nil { + c.mu.Unlock() + return // already canceled + } + c.err = err + close(c.done) + for child := range c.children { + // NOTE: acquiring the child's lock while holding parent's lock. + child.cancel(false, err) + } + c.children = nil + c.mu.Unlock() + + if removeFromParent { + removeChild(c.Context, c) + } +} + +// WithDeadline returns a copy of the parent context with the deadline adjusted +// to be no later than d. If the parent's deadline is already earlier than d, +// WithDeadline(parent, d) is semantically equivalent to parent. The returned +// context's Done channel is closed when the deadline expires, when the returned +// cancel function is called, or when the parent context's Done channel is +// closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. +func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { + if cur, ok := parent.Deadline(); ok && cur.Before(deadline) { + // The current deadline is already sooner than the new one. + return WithCancel(parent) + } + c := &timerCtx{ + cancelCtx: newCancelCtx(parent), + deadline: deadline, + } + propagateCancel(parent, c) + d := deadline.Sub(time.Now()) + if d <= 0 { + c.cancel(true, DeadlineExceeded) // deadline has already passed + return c, func() { c.cancel(true, Canceled) } + } + c.mu.Lock() + defer c.mu.Unlock() + if c.err == nil { + c.timer = time.AfterFunc(d, func() { + c.cancel(true, DeadlineExceeded) + }) + } + return c, func() { c.cancel(true, Canceled) } +} + +// A timerCtx carries a timer and a deadline. It embeds a cancelCtx to +// implement Done and Err. It implements cancel by stopping its timer then +// delegating to cancelCtx.cancel. +type timerCtx struct { + *cancelCtx + timer *time.Timer // Under cancelCtx.mu. + + deadline time.Time +} + +func (c *timerCtx) Deadline() (deadline time.Time, ok bool) { + return c.deadline, true +} + +func (c *timerCtx) String() string { + return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now())) +} + +func (c *timerCtx) cancel(removeFromParent bool, err error) { + c.cancelCtx.cancel(false, err) + if removeFromParent { + // Remove this timerCtx from its parent cancelCtx's children. + removeChild(c.cancelCtx.Context, c) + } + c.mu.Lock() + if c.timer != nil { + c.timer.Stop() + c.timer = nil + } + c.mu.Unlock() +} + +// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). 
+// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete: +// +// func slowOperationWithTimeout(ctx context.Context) (Result, error) { +// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) +// defer cancel() // releases resources if slowOperation completes before timeout elapses +// return slowOperation(ctx) +// } +func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { + return WithDeadline(parent, time.Now().Add(timeout)) +} + +// WithValue returns a copy of parent in which the value associated with key is +// val. +// +// Use context Values only for request-scoped data that transits processes and +// APIs, not for passing optional parameters to functions. +func WithValue(parent Context, key interface{}, val interface{}) Context { + return &valueCtx{parent, key, val} +} + +// A valueCtx carries a key-value pair. It implements Value for that key and +// delegates all other calls to the embedded Context. +type valueCtx struct { + Context + key, val interface{} +} + +func (c *valueCtx) String() string { + return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val) +} + +func (c *valueCtx) Value(key interface{}) interface{} { + if c.key == key { + return c.val + } + return c.Context.Value(key) +} diff --git a/vendor/golang.org/x/net/context/pre_go19.go b/vendor/golang.org/x/net/context/pre_go19.go new file mode 100644 index 00000000..b105f80b --- /dev/null +++ b/vendor/golang.org/x/net/context/pre_go19.go @@ -0,0 +1,109 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.9 + +package context + +import "time" + +// A Context carries a deadline, a cancelation signal, and other values across +// API boundaries. +// +// Context's methods may be called by multiple goroutines simultaneously. +type Context interface { + // Deadline returns the time when work done on behalf of this context + // should be canceled. Deadline returns ok==false when no deadline is + // set. Successive calls to Deadline return the same results. + Deadline() (deadline time.Time, ok bool) + + // Done returns a channel that's closed when work done on behalf of this + // context should be canceled. Done may return nil if this context can + // never be canceled. Successive calls to Done return the same value. + // + // WithCancel arranges for Done to be closed when cancel is called; + // WithDeadline arranges for Done to be closed when the deadline + // expires; WithTimeout arranges for Done to be closed when the timeout + // elapses. + // + // Done is provided for use in select statements: + // + // // Stream generates values with DoSomething and sends them to out + // // until DoSomething returns an error or ctx.Done is closed. + // func Stream(ctx context.Context, out chan<- Value) error { + // for { + // v, err := DoSomething(ctx) + // if err != nil { + // return err + // } + // select { + // case <-ctx.Done(): + // return ctx.Err() + // case out <- v: + // } + // } + // } + // + // See http://blog.golang.org/pipelines for more examples of how to use + // a Done channel for cancelation. + Done() <-chan struct{} + + // Err returns a non-nil error value after Done is closed. Err returns + // Canceled if the context was canceled or DeadlineExceeded if the + // context's deadline passed. No other values for Err are defined. 
+ // After Done is closed, successive calls to Err return the same value. + Err() error + + // Value returns the value associated with this context for key, or nil + // if no value is associated with key. Successive calls to Value with + // the same key returns the same result. + // + // Use context values only for request-scoped data that transits + // processes and API boundaries, not for passing optional parameters to + // functions. + // + // A key identifies a specific value in a Context. Functions that wish + // to store values in Context typically allocate a key in a global + // variable then use that key as the argument to context.WithValue and + // Context.Value. A key can be any type that supports equality; + // packages should define keys as an unexported type to avoid + // collisions. + // + // Packages that define a Context key should provide type-safe accessors + // for the values stores using that key: + // + // // Package user defines a User type that's stored in Contexts. + // package user + // + // import "golang.org/x/net/context" + // + // // User is the type of value stored in the Contexts. + // type User struct {...} + // + // // key is an unexported type for keys defined in this package. + // // This prevents collisions with keys defined in other packages. + // type key int + // + // // userKey is the key for user.User values in Contexts. It is + // // unexported; clients use user.NewContext and user.FromContext + // // instead of using this key directly. + // var userKey key = 0 + // + // // NewContext returns a new Context that carries value u. + // func NewContext(ctx context.Context, u *User) context.Context { + // return context.WithValue(ctx, userKey, u) + // } + // + // // FromContext returns the User value stored in ctx, if any. + // func FromContext(ctx context.Context) (*User, bool) { + // u, ok := ctx.Value(userKey).(*User) + // return u, ok + // } + Value(key interface{}) interface{} +} + +// A CancelFunc tells an operation to abandon its work. +// A CancelFunc does not wait for the work to stop. +// After the first call, subsequent calls to a CancelFunc do nothing. +type CancelFunc func() diff --git a/vendor/gopkg.in/mgo.v2/LICENSE b/vendor/gopkg.in/mgo.v2/LICENSE new file mode 100644 index 00000000..770c7672 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/LICENSE @@ -0,0 +1,25 @@ +mgo - MongoDB driver for Go + +Copyright (c) 2010-2013 - Gustavo Niemeyer + +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/gopkg.in/mgo.v2/auth.go b/vendor/gopkg.in/mgo.v2/auth.go new file mode 100644 index 00000000..dc26e52f --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/auth.go @@ -0,0 +1,467 @@ +// mgo - MongoDB driver for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package mgo + +import ( + "crypto/md5" + "crypto/sha1" + "encoding/hex" + "errors" + "fmt" + "sync" + + "gopkg.in/mgo.v2/bson" + "gopkg.in/mgo.v2/internal/scram" +) + +type authCmd struct { + Authenticate int + + Nonce string + User string + Key string +} + +type startSaslCmd struct { + StartSASL int `bson:"startSasl"` +} + +type authResult struct { + ErrMsg string + Ok bool +} + +type getNonceCmd struct { + GetNonce int +} + +type getNonceResult struct { + Nonce string + Err string "$err" + Code int +} + +type logoutCmd struct { + Logout int +} + +type saslCmd struct { + Start int `bson:"saslStart,omitempty"` + Continue int `bson:"saslContinue,omitempty"` + ConversationId int `bson:"conversationId,omitempty"` + Mechanism string `bson:"mechanism,omitempty"` + Payload []byte +} + +type saslResult struct { + Ok bool `bson:"ok"` + NotOk bool `bson:"code"` // Server <= 2.3.2 returns ok=1 & code>0 on errors (WTF?) 
+ Done bool + + ConversationId int `bson:"conversationId"` + Payload []byte + ErrMsg string +} + +type saslStepper interface { + Step(serverData []byte) (clientData []byte, done bool, err error) + Close() +} + +func (socket *mongoSocket) getNonce() (nonce string, err error) { + socket.Lock() + for socket.cachedNonce == "" && socket.dead == nil { + debugf("Socket %p to %s: waiting for nonce", socket, socket.addr) + socket.gotNonce.Wait() + } + if socket.cachedNonce == "mongos" { + socket.Unlock() + return "", errors.New("Can't authenticate with mongos; see http://j.mp/mongos-auth") + } + debugf("Socket %p to %s: got nonce", socket, socket.addr) + nonce, err = socket.cachedNonce, socket.dead + socket.cachedNonce = "" + socket.Unlock() + if err != nil { + nonce = "" + } + return +} + +func (socket *mongoSocket) resetNonce() { + debugf("Socket %p to %s: requesting a new nonce", socket, socket.addr) + op := &queryOp{} + op.query = &getNonceCmd{GetNonce: 1} + op.collection = "admin.$cmd" + op.limit = -1 + op.replyFunc = func(err error, reply *replyOp, docNum int, docData []byte) { + if err != nil { + socket.kill(errors.New("getNonce: "+err.Error()), true) + return + } + result := &getNonceResult{} + err = bson.Unmarshal(docData, &result) + if err != nil { + socket.kill(errors.New("Failed to unmarshal nonce: "+err.Error()), true) + return + } + debugf("Socket %p to %s: nonce unmarshalled: %#v", socket, socket.addr, result) + if result.Code == 13390 { + // mongos doesn't yet support auth (see http://j.mp/mongos-auth) + result.Nonce = "mongos" + } else if result.Nonce == "" { + var msg string + if result.Err != "" { + msg = fmt.Sprintf("Got an empty nonce: %s (%d)", result.Err, result.Code) + } else { + msg = "Got an empty nonce" + } + socket.kill(errors.New(msg), true) + return + } + socket.Lock() + if socket.cachedNonce != "" { + socket.Unlock() + panic("resetNonce: nonce already cached") + } + socket.cachedNonce = result.Nonce + socket.gotNonce.Signal() + socket.Unlock() + } + err := socket.Query(op) + if err != nil { + socket.kill(errors.New("resetNonce: "+err.Error()), true) + } +} + +func (socket *mongoSocket) Login(cred Credential) error { + socket.Lock() + if cred.Mechanism == "" && socket.serverInfo.MaxWireVersion >= 3 { + cred.Mechanism = "SCRAM-SHA-1" + } + for _, sockCred := range socket.creds { + if sockCred == cred { + debugf("Socket %p to %s: login: db=%q user=%q (already logged in)", socket, socket.addr, cred.Source, cred.Username) + socket.Unlock() + return nil + } + } + if socket.dropLogout(cred) { + debugf("Socket %p to %s: login: db=%q user=%q (cached)", socket, socket.addr, cred.Source, cred.Username) + socket.creds = append(socket.creds, cred) + socket.Unlock() + return nil + } + socket.Unlock() + + debugf("Socket %p to %s: login: db=%q user=%q", socket, socket.addr, cred.Source, cred.Username) + + var err error + switch cred.Mechanism { + case "", "MONGODB-CR", "MONGO-CR": // Name changed to MONGODB-CR in SERVER-8501. + err = socket.loginClassic(cred) + case "PLAIN": + err = socket.loginPlain(cred) + case "MONGODB-X509": + err = socket.loginX509(cred) + default: + // Try SASL for everything else, if it is available. 
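+		// (for example GSSAPI/Kerberos, assuming the driver was built with
+		// its external SASL support)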
+ err = socket.loginSASL(cred) + } + + if err != nil { + debugf("Socket %p to %s: login error: %s", socket, socket.addr, err) + } else { + debugf("Socket %p to %s: login successful", socket, socket.addr) + } + return err +} + +func (socket *mongoSocket) loginClassic(cred Credential) error { + // Note that this only works properly because this function is + // synchronous, which means the nonce won't get reset while we're + // using it and any other login requests will block waiting for a + // new nonce provided in the defer call below. + nonce, err := socket.getNonce() + if err != nil { + return err + } + defer socket.resetNonce() + + psum := md5.New() + psum.Write([]byte(cred.Username + ":mongo:" + cred.Password)) + + ksum := md5.New() + ksum.Write([]byte(nonce + cred.Username)) + ksum.Write([]byte(hex.EncodeToString(psum.Sum(nil)))) + + key := hex.EncodeToString(ksum.Sum(nil)) + + cmd := authCmd{Authenticate: 1, User: cred.Username, Nonce: nonce, Key: key} + res := authResult{} + return socket.loginRun(cred.Source, &cmd, &res, func() error { + if !res.Ok { + return errors.New(res.ErrMsg) + } + socket.Lock() + socket.dropAuth(cred.Source) + socket.creds = append(socket.creds, cred) + socket.Unlock() + return nil + }) +} + +type authX509Cmd struct { + Authenticate int + User string + Mechanism string +} + +func (socket *mongoSocket) loginX509(cred Credential) error { + cmd := authX509Cmd{Authenticate: 1, User: cred.Username, Mechanism: "MONGODB-X509"} + res := authResult{} + return socket.loginRun(cred.Source, &cmd, &res, func() error { + if !res.Ok { + return errors.New(res.ErrMsg) + } + socket.Lock() + socket.dropAuth(cred.Source) + socket.creds = append(socket.creds, cred) + socket.Unlock() + return nil + }) +} + +func (socket *mongoSocket) loginPlain(cred Credential) error { + cmd := saslCmd{Start: 1, Mechanism: "PLAIN", Payload: []byte("\x00" + cred.Username + "\x00" + cred.Password)} + res := authResult{} + return socket.loginRun(cred.Source, &cmd, &res, func() error { + if !res.Ok { + return errors.New(res.ErrMsg) + } + socket.Lock() + socket.dropAuth(cred.Source) + socket.creds = append(socket.creds, cred) + socket.Unlock() + return nil + }) +} + +func (socket *mongoSocket) loginSASL(cred Credential) error { + var sasl saslStepper + var err error + if cred.Mechanism == "SCRAM-SHA-1" { + // SCRAM is handled without external libraries. + sasl = saslNewScram(cred) + } else if len(cred.ServiceHost) > 0 { + sasl, err = saslNew(cred, cred.ServiceHost) + } else { + sasl, err = saslNew(cred, socket.Server().Addr) + } + if err != nil { + return err + } + defer sasl.Close() + + // The goal of this logic is to carry a locked socket until the + // local SASL step confirms the auth is valid; the socket needs to be + // locked so that concurrent action doesn't leave the socket in an + // auth state that doesn't reflect the operations that took place. + // As a simple case, imagine inverting login=>logout to logout=>login. + // + // The logic below works because the lock func isn't called concurrently. 
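+	// Concretely, the lock is dropped via lock(false) before each network
+	// round-trip and re-acquired via lock(true) inside the loginRun reply
+	// callback, so the socket is never held locked while waiting on the server.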
+ locked := false + lock := func(b bool) { + if locked != b { + locked = b + if b { + socket.Lock() + } else { + socket.Unlock() + } + } + } + + lock(true) + defer lock(false) + + start := 1 + cmd := saslCmd{} + res := saslResult{} + for { + payload, done, err := sasl.Step(res.Payload) + if err != nil { + return err + } + if done && res.Done { + socket.dropAuth(cred.Source) + socket.creds = append(socket.creds, cred) + break + } + lock(false) + + cmd = saslCmd{ + Start: start, + Continue: 1 - start, + ConversationId: res.ConversationId, + Mechanism: cred.Mechanism, + Payload: payload, + } + start = 0 + err = socket.loginRun(cred.Source, &cmd, &res, func() error { + // See the comment on lock for why this is necessary. + lock(true) + if !res.Ok || res.NotOk { + return fmt.Errorf("server returned error on SASL authentication step: %s", res.ErrMsg) + } + return nil + }) + if err != nil { + return err + } + if done && res.Done { + socket.dropAuth(cred.Source) + socket.creds = append(socket.creds, cred) + break + } + } + + return nil +} + +func saslNewScram(cred Credential) *saslScram { + credsum := md5.New() + credsum.Write([]byte(cred.Username + ":mongo:" + cred.Password)) + client := scram.NewClient(sha1.New, cred.Username, hex.EncodeToString(credsum.Sum(nil))) + return &saslScram{cred: cred, client: client} +} + +type saslScram struct { + cred Credential + client *scram.Client +} + +func (s *saslScram) Close() {} + +func (s *saslScram) Step(serverData []byte) (clientData []byte, done bool, err error) { + more := s.client.Step(serverData) + return s.client.Out(), !more, s.client.Err() +} + +func (socket *mongoSocket) loginRun(db string, query, result interface{}, f func() error) error { + var mutex sync.Mutex + var replyErr error + mutex.Lock() + + op := queryOp{} + op.query = query + op.collection = db + ".$cmd" + op.limit = -1 + op.replyFunc = func(err error, reply *replyOp, docNum int, docData []byte) { + defer mutex.Unlock() + + if err != nil { + replyErr = err + return + } + + err = bson.Unmarshal(docData, result) + if err != nil { + replyErr = err + } else { + // Must handle this within the read loop for the socket, so + // that concurrent login requests are properly ordered. + replyErr = f() + } + } + + err := socket.Query(&op) + if err != nil { + return err + } + mutex.Lock() // Wait. + return replyErr +} + +func (socket *mongoSocket) Logout(db string) { + socket.Lock() + cred, found := socket.dropAuth(db) + if found { + debugf("Socket %p to %s: logout: db=%q (flagged)", socket, socket.addr, db) + socket.logout = append(socket.logout, cred) + } + socket.Unlock() +} + +func (socket *mongoSocket) LogoutAll() { + socket.Lock() + if l := len(socket.creds); l > 0 { + debugf("Socket %p to %s: logout all (flagged %d)", socket, socket.addr, l) + socket.logout = append(socket.logout, socket.creds...) 
+ socket.creds = socket.creds[0:0] + } + socket.Unlock() +} + +func (socket *mongoSocket) flushLogout() (ops []interface{}) { + socket.Lock() + if l := len(socket.logout); l > 0 { + debugf("Socket %p to %s: logout all (flushing %d)", socket, socket.addr, l) + for i := 0; i != l; i++ { + op := queryOp{} + op.query = &logoutCmd{1} + op.collection = socket.logout[i].Source + ".$cmd" + op.limit = -1 + ops = append(ops, &op) + } + socket.logout = socket.logout[0:0] + } + socket.Unlock() + return +} + +func (socket *mongoSocket) dropAuth(db string) (cred Credential, found bool) { + for i, sockCred := range socket.creds { + if sockCred.Source == db { + copy(socket.creds[i:], socket.creds[i+1:]) + socket.creds = socket.creds[:len(socket.creds)-1] + return sockCred, true + } + } + return cred, false +} + +func (socket *mongoSocket) dropLogout(cred Credential) (found bool) { + for i, sockCred := range socket.logout { + if sockCred == cred { + copy(socket.logout[i:], socket.logout[i+1:]) + socket.logout = socket.logout[:len(socket.logout)-1] + return true + } + } + return false +} diff --git a/vendor/gopkg.in/mgo.v2/bson/LICENSE b/vendor/gopkg.in/mgo.v2/bson/LICENSE new file mode 100644 index 00000000..89032601 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bson/LICENSE @@ -0,0 +1,25 @@ +BSON library for Go + +Copyright (c) 2010-2012 - Gustavo Niemeyer + +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/gopkg.in/mgo.v2/bson/bson.go b/vendor/gopkg.in/mgo.v2/bson/bson.go new file mode 100644 index 00000000..7fb7f8ca --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bson/bson.go @@ -0,0 +1,738 @@ +// BSON library for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. 
+// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// Package bson is an implementation of the BSON specification for Go: +// +// http://bsonspec.org +// +// It was created as part of the mgo MongoDB driver for Go, but is standalone +// and may be used on its own without the driver. +package bson + +import ( + "bytes" + "crypto/md5" + "crypto/rand" + "encoding/binary" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "io" + "os" + "reflect" + "runtime" + "strings" + "sync" + "sync/atomic" + "time" +) + +// -------------------------------------------------------------------------- +// The public API. + +// A value implementing the bson.Getter interface will have its GetBSON +// method called when the given value has to be marshalled, and the result +// of this method will be marshaled in place of the actual object. +// +// If GetBSON returns return a non-nil error, the marshalling procedure +// will stop and error out with the provided value. +type Getter interface { + GetBSON() (interface{}, error) +} + +// A value implementing the bson.Setter interface will receive the BSON +// value via the SetBSON method during unmarshaling, and the object +// itself will not be changed as usual. +// +// If setting the value works, the method should return nil or alternatively +// bson.SetZero to set the respective field to its zero value (nil for +// pointer types). If SetBSON returns a value of type bson.TypeError, the +// BSON value will be omitted from a map or slice being decoded and the +// unmarshalling will continue. If it returns any other non-nil error, the +// unmarshalling procedure will stop and error out with the provided value. +// +// This interface is generally useful in pointer receivers, since the method +// will want to change the receiver. A type field that implements the Setter +// interface doesn't have to be a pointer, though. +// +// Unlike the usual behavior, unmarshalling onto a value that implements a +// Setter interface will NOT reset the value to its zero state. This allows +// the value to decide by itself how to be unmarshalled. +// +// For example: +// +// type MyString string +// +// func (s *MyString) SetBSON(raw bson.Raw) error { +// return raw.Unmarshal(s) +// } +// +type Setter interface { + SetBSON(raw Raw) error +} + +// SetZero may be returned from a SetBSON method to have the value set to +// its respective zero value. When used in pointer values, this will set the +// field to nil rather than to the pre-allocated value. +var SetZero = errors.New("set to zero") + +// M is a convenient alias for a map[string]interface{} map, useful for +// dealing with BSON in a native way. 
For instance: +// +// bson.M{"a": 1, "b": true} +// +// There's no special handling for this type in addition to what's done anyway +// for an equivalent map type. Elements in the map will be dumped in an +// undefined ordered. See also the bson.D type for an ordered alternative. +type M map[string]interface{} + +// D represents a BSON document containing ordered elements. For example: +// +// bson.D{{"a", 1}, {"b", true}} +// +// In some situations, such as when creating indexes for MongoDB, the order in +// which the elements are defined is important. If the order is not important, +// using a map is generally more comfortable. See bson.M and bson.RawD. +type D []DocElem + +// DocElem is an element of the bson.D document representation. +type DocElem struct { + Name string + Value interface{} +} + +// Map returns a map out of the ordered element name/value pairs in d. +func (d D) Map() (m M) { + m = make(M, len(d)) + for _, item := range d { + m[item.Name] = item.Value + } + return m +} + +// The Raw type represents raw unprocessed BSON documents and elements. +// Kind is the kind of element as defined per the BSON specification, and +// Data is the raw unprocessed data for the respective element. +// Using this type it is possible to unmarshal or marshal values partially. +// +// Relevant documentation: +// +// http://bsonspec.org/#/specification +// +type Raw struct { + Kind byte + Data []byte +} + +// RawD represents a BSON document containing raw unprocessed elements. +// This low-level representation may be useful when lazily processing +// documents of uncertain content, or when manipulating the raw content +// documents in general. +type RawD []RawDocElem + +// See the RawD type. +type RawDocElem struct { + Name string + Value Raw +} + +// ObjectId is a unique ID identifying a BSON value. It must be exactly 12 bytes +// long. MongoDB objects by default have such a property set in their "_id" +// property. +// +// http://www.mongodb.org/display/DOCS/Object+IDs +type ObjectId string + +// ObjectIdHex returns an ObjectId from the provided hex representation. +// Calling this function with an invalid hex representation will +// cause a runtime panic. See the IsObjectIdHex function. +func ObjectIdHex(s string) ObjectId { + d, err := hex.DecodeString(s) + if err != nil || len(d) != 12 { + panic(fmt.Sprintf("invalid input to ObjectIdHex: %q", s)) + } + return ObjectId(d) +} + +// IsObjectIdHex returns whether s is a valid hex representation of +// an ObjectId. See the ObjectIdHex function. +func IsObjectIdHex(s string) bool { + if len(s) != 24 { + return false + } + _, err := hex.DecodeString(s) + return err == nil +} + +// objectIdCounter is atomically incremented when generating a new ObjectId +// using NewObjectId() function. It's used as a counter part of an id. +var objectIdCounter uint32 = readRandomUint32() + +// readRandomUint32 returns a random objectIdCounter. +func readRandomUint32() uint32 { + var b [4]byte + _, err := io.ReadFull(rand.Reader, b[:]) + if err != nil { + panic(fmt.Errorf("cannot read random object id: %v", err)) + } + return uint32((uint32(b[0]) << 0) | (uint32(b[1]) << 8) | (uint32(b[2]) << 16) | (uint32(b[3]) << 24)) +} + +// machineId stores machine id generated once and used in subsequent calls +// to NewObjectId function. +var machineId = readMachineId() +var processId = os.Getpid() + +// readMachineId generates and returns a machine id. +// If this function fails to get the hostname it will cause a runtime error. 
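+// (More precisely, a hostname failure falls back to random bytes, and it only
+// panics if that random read fails as well.)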
+func readMachineId() []byte { + var sum [3]byte + id := sum[:] + hostname, err1 := os.Hostname() + if err1 != nil { + _, err2 := io.ReadFull(rand.Reader, id) + if err2 != nil { + panic(fmt.Errorf("cannot get hostname: %v; %v", err1, err2)) + } + return id + } + hw := md5.New() + hw.Write([]byte(hostname)) + copy(id, hw.Sum(nil)) + return id +} + +// NewObjectId returns a new unique ObjectId. +func NewObjectId() ObjectId { + var b [12]byte + // Timestamp, 4 bytes, big endian + binary.BigEndian.PutUint32(b[:], uint32(time.Now().Unix())) + // Machine, first 3 bytes of md5(hostname) + b[4] = machineId[0] + b[5] = machineId[1] + b[6] = machineId[2] + // Pid, 2 bytes, specs don't specify endianness, but we use big endian. + b[7] = byte(processId >> 8) + b[8] = byte(processId) + // Increment, 3 bytes, big endian + i := atomic.AddUint32(&objectIdCounter, 1) + b[9] = byte(i >> 16) + b[10] = byte(i >> 8) + b[11] = byte(i) + return ObjectId(b[:]) +} + +// NewObjectIdWithTime returns a dummy ObjectId with the timestamp part filled +// with the provided number of seconds from epoch UTC, and all other parts +// filled with zeroes. It's not safe to insert a document with an id generated +// by this method, it is useful only for queries to find documents with ids +// generated before or after the specified timestamp. +func NewObjectIdWithTime(t time.Time) ObjectId { + var b [12]byte + binary.BigEndian.PutUint32(b[:4], uint32(t.Unix())) + return ObjectId(string(b[:])) +} + +// String returns a hex string representation of the id. +// Example: ObjectIdHex("4d88e15b60f486e428412dc9"). +func (id ObjectId) String() string { + return fmt.Sprintf(`ObjectIdHex("%x")`, string(id)) +} + +// Hex returns a hex representation of the ObjectId. +func (id ObjectId) Hex() string { + return hex.EncodeToString([]byte(id)) +} + +// MarshalJSON turns a bson.ObjectId into a json.Marshaller. +func (id ObjectId) MarshalJSON() ([]byte, error) { + return []byte(fmt.Sprintf(`"%x"`, string(id))), nil +} + +var nullBytes = []byte("null") + +// UnmarshalJSON turns *bson.ObjectId into a json.Unmarshaller. +func (id *ObjectId) UnmarshalJSON(data []byte) error { + if len(data) > 0 && (data[0] == '{' || data[0] == 'O') { + var v struct { + Id json.RawMessage `json:"$oid"` + Func struct { + Id json.RawMessage + } `json:"$oidFunc"` + } + err := jdec(data, &v) + if err == nil { + if len(v.Id) > 0 { + data = []byte(v.Id) + } else { + data = []byte(v.Func.Id) + } + } + } + if len(data) == 2 && data[0] == '"' && data[1] == '"' || bytes.Equal(data, nullBytes) { + *id = "" + return nil + } + if len(data) != 26 || data[0] != '"' || data[25] != '"' { + return errors.New(fmt.Sprintf("invalid ObjectId in JSON: %s", string(data))) + } + var buf [12]byte + _, err := hex.Decode(buf[:], data[1:25]) + if err != nil { + return errors.New(fmt.Sprintf("invalid ObjectId in JSON: %s (%s)", string(data), err)) + } + *id = ObjectId(string(buf[:])) + return nil +} + +// MarshalText turns bson.ObjectId into an encoding.TextMarshaler. +func (id ObjectId) MarshalText() ([]byte, error) { + return []byte(fmt.Sprintf("%x", string(id))), nil +} + +// UnmarshalText turns *bson.ObjectId into an encoding.TextUnmarshaler. 
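+// It expects a 24-character hex string such as "4d88e15b60f486e428412dc9";
+// empty input (or a single space) simply leaves the id empty.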
+func (id *ObjectId) UnmarshalText(data []byte) error { + if len(data) == 1 && data[0] == ' ' || len(data) == 0 { + *id = "" + return nil + } + if len(data) != 24 { + return fmt.Errorf("invalid ObjectId: %s", data) + } + var buf [12]byte + _, err := hex.Decode(buf[:], data[:]) + if err != nil { + return fmt.Errorf("invalid ObjectId: %s (%s)", data, err) + } + *id = ObjectId(string(buf[:])) + return nil +} + +// Valid returns true if id is valid. A valid id must contain exactly 12 bytes. +func (id ObjectId) Valid() bool { + return len(id) == 12 +} + +// byteSlice returns byte slice of id from start to end. +// Calling this function with an invalid id will cause a runtime panic. +func (id ObjectId) byteSlice(start, end int) []byte { + if len(id) != 12 { + panic(fmt.Sprintf("invalid ObjectId: %q", string(id))) + } + return []byte(string(id)[start:end]) +} + +// Time returns the timestamp part of the id. +// It's a runtime error to call this method with an invalid id. +func (id ObjectId) Time() time.Time { + // First 4 bytes of ObjectId is 32-bit big-endian seconds from epoch. + secs := int64(binary.BigEndian.Uint32(id.byteSlice(0, 4))) + return time.Unix(secs, 0) +} + +// Machine returns the 3-byte machine id part of the id. +// It's a runtime error to call this method with an invalid id. +func (id ObjectId) Machine() []byte { + return id.byteSlice(4, 7) +} + +// Pid returns the process id part of the id. +// It's a runtime error to call this method with an invalid id. +func (id ObjectId) Pid() uint16 { + return binary.BigEndian.Uint16(id.byteSlice(7, 9)) +} + +// Counter returns the incrementing value part of the id. +// It's a runtime error to call this method with an invalid id. +func (id ObjectId) Counter() int32 { + b := id.byteSlice(9, 12) + // Counter is stored as big-endian 3-byte value + return int32(uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2])) +} + +// The Symbol type is similar to a string and is used in languages with a +// distinct symbol type. +type Symbol string + +// Now returns the current time with millisecond precision. MongoDB stores +// timestamps with the same precision, so a Time returned from this method +// will not change after a roundtrip to the database. That's the only reason +// why this function exists. Using the time.Now function also works fine +// otherwise. +func Now() time.Time { + return time.Unix(0, time.Now().UnixNano()/1e6*1e6) +} + +// MongoTimestamp is a special internal type used by MongoDB that for some +// strange reason has its own datatype defined in BSON. +type MongoTimestamp int64 + +type orderKey int64 + +// MaxKey is a special value that compares higher than all other possible BSON +// values in a MongoDB database. +var MaxKey = orderKey(1<<63 - 1) + +// MinKey is a special value that compares lower than all other possible BSON +// values in a MongoDB database. +var MinKey = orderKey(-1 << 63) + +type undefined struct{} + +// Undefined represents the undefined BSON value. +var Undefined undefined + +// Binary is a representation for non-standard binary values. Any kind should +// work, but the following are known as of this writing: +// +// 0x00 - Generic. This is decoded as []byte(data), not Binary{0x00, data}. +// 0x01 - Function (!?) +// 0x02 - Obsolete generic. +// 0x03 - UUID +// 0x05 - MD5 +// 0x80 - User defined. +// +type Binary struct { + Kind byte + Data []byte +} + +// RegEx represents a regular expression. 
The Options field may contain +// individual characters defining the way in which the pattern should be +// applied, and must be sorted. Valid options as of this writing are 'i' for +// case insensitive matching, 'm' for multi-line matching, 'x' for verbose +// mode, 'l' to make \w, \W, and similar be locale-dependent, 's' for dot-all +// mode (a '.' matches everything), and 'u' to make \w, \W, and similar match +// unicode. The value of the Options parameter is not verified before being +// marshaled into the BSON format. +type RegEx struct { + Pattern string + Options string +} + +// JavaScript is a type that holds JavaScript code. If Scope is non-nil, it +// will be marshaled as a mapping from identifiers to values that may be +// used when evaluating the provided Code. +type JavaScript struct { + Code string + Scope interface{} +} + +// DBPointer refers to a document id in a namespace. +// +// This type is deprecated in the BSON specification and should not be used +// except for backwards compatibility with ancient applications. +type DBPointer struct { + Namespace string + Id ObjectId +} + +const initialBufferSize = 64 + +func handleErr(err *error) { + if r := recover(); r != nil { + if _, ok := r.(runtime.Error); ok { + panic(r) + } else if _, ok := r.(externalPanic); ok { + panic(r) + } else if s, ok := r.(string); ok { + *err = errors.New(s) + } else if e, ok := r.(error); ok { + *err = e + } else { + panic(r) + } + } +} + +// Marshal serializes the in value, which may be a map or a struct value. +// In the case of struct values, only exported fields will be serialized, +// and the order of serialized fields will match that of the struct itself. +// The lowercased field name is used as the key for each exported field, +// but this behavior may be changed using the respective field tag. +// The tag may also contain flags to tweak the marshalling behavior for +// the field. The tag formats accepted are: +// +// "[][,[,]]" +// +// `(...) bson:"[][,[,]]" (...)` +// +// The following flags are currently supported: +// +// omitempty Only include the field if it's not set to the zero +// value for the type or to empty slices or maps. +// +// minsize Marshal an int64 value as an int32, if that's feasible +// while preserving the numeric value. +// +// inline Inline the field, which must be a struct or a map, +// causing all of its fields or keys to be processed as if +// they were part of the outer struct. For maps, keys must +// not conflict with the bson keys of other struct fields. +// +// Some examples: +// +// type T struct { +// A bool +// B int "myb" +// C string "myc,omitempty" +// D string `bson:",omitempty" json:"jsonkey"` +// E int64 ",minsize" +// F int64 "myf,omitempty,minsize" +// } +// +func Marshal(in interface{}) (out []byte, err error) { + defer handleErr(&err) + e := &encoder{make([]byte, 0, initialBufferSize)} + e.addDoc(reflect.ValueOf(in)) + return e.out, nil +} + +// Unmarshal deserializes data from in into the out value. The out value +// must be a map, a pointer to a struct, or a pointer to a bson.D value. +// In the case of struct values, only exported fields will be deserialized. +// The lowercased field name is used as the key for each exported field, +// but this behavior may be changed using the respective field tag. +// The tag may also contain flags to tweak the marshalling behavior for +// the field. The tag formats accepted are: +// +// "[][,[,]]" +// +// `(...) 
bson:"[][,[,]]" (...)` +// +// The following flags are currently supported during unmarshal (see the +// Marshal method for other flags): +// +// inline Inline the field, which must be a struct or a map. +// Inlined structs are handled as if its fields were part +// of the outer struct. An inlined map causes keys that do +// not match any other struct field to be inserted in the +// map rather than being discarded as usual. +// +// The target field or element types of out may not necessarily match +// the BSON values of the provided data. The following conversions are +// made automatically: +// +// - Numeric types are converted if at least the integer part of the +// value would be preserved correctly +// - Bools are converted to numeric types as 1 or 0 +// - Numeric types are converted to bools as true if not 0 or false otherwise +// - Binary and string BSON data is converted to a string, array or byte slice +// +// If the value would not fit the type and cannot be converted, it's +// silently skipped. +// +// Pointer values are initialized when necessary. +func Unmarshal(in []byte, out interface{}) (err error) { + if raw, ok := out.(*Raw); ok { + raw.Kind = 3 + raw.Data = in + return nil + } + defer handleErr(&err) + v := reflect.ValueOf(out) + switch v.Kind() { + case reflect.Ptr: + fallthrough + case reflect.Map: + d := newDecoder(in) + d.readDocTo(v) + case reflect.Struct: + return errors.New("Unmarshal can't deal with struct values. Use a pointer.") + default: + return errors.New("Unmarshal needs a map or a pointer to a struct.") + } + return nil +} + +// Unmarshal deserializes raw into the out value. If the out value type +// is not compatible with raw, a *bson.TypeError is returned. +// +// See the Unmarshal function documentation for more details on the +// unmarshalling process. +func (raw Raw) Unmarshal(out interface{}) (err error) { + defer handleErr(&err) + v := reflect.ValueOf(out) + switch v.Kind() { + case reflect.Ptr: + v = v.Elem() + fallthrough + case reflect.Map: + d := newDecoder(raw.Data) + good := d.readElemTo(v, raw.Kind) + if !good { + return &TypeError{v.Type(), raw.Kind} + } + case reflect.Struct: + return errors.New("Raw Unmarshal can't deal with struct values. 
Use a pointer.") + default: + return errors.New("Raw Unmarshal needs a map or a valid pointer.") + } + return nil +} + +type TypeError struct { + Type reflect.Type + Kind byte +} + +func (e *TypeError) Error() string { + return fmt.Sprintf("BSON kind 0x%02x isn't compatible with type %s", e.Kind, e.Type.String()) +} + +// -------------------------------------------------------------------------- +// Maintain a mapping of keys to structure field indexes + +type structInfo struct { + FieldsMap map[string]fieldInfo + FieldsList []fieldInfo + InlineMap int + Zero reflect.Value +} + +type fieldInfo struct { + Key string + Num int + OmitEmpty bool + MinSize bool + Inline []int +} + +var structMap = make(map[reflect.Type]*structInfo) +var structMapMutex sync.RWMutex + +type externalPanic string + +func (e externalPanic) String() string { + return string(e) +} + +func getStructInfo(st reflect.Type) (*structInfo, error) { + structMapMutex.RLock() + sinfo, found := structMap[st] + structMapMutex.RUnlock() + if found { + return sinfo, nil + } + n := st.NumField() + fieldsMap := make(map[string]fieldInfo) + fieldsList := make([]fieldInfo, 0, n) + inlineMap := -1 + for i := 0; i != n; i++ { + field := st.Field(i) + if field.PkgPath != "" && !field.Anonymous { + continue // Private field + } + + info := fieldInfo{Num: i} + + tag := field.Tag.Get("bson") + if tag == "" && strings.Index(string(field.Tag), ":") < 0 { + tag = string(field.Tag) + } + if tag == "-" { + continue + } + + inline := false + fields := strings.Split(tag, ",") + if len(fields) > 1 { + for _, flag := range fields[1:] { + switch flag { + case "omitempty": + info.OmitEmpty = true + case "minsize": + info.MinSize = true + case "inline": + inline = true + default: + msg := fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st) + panic(externalPanic(msg)) + } + } + tag = fields[0] + } + + if inline { + switch field.Type.Kind() { + case reflect.Map: + if inlineMap >= 0 { + return nil, errors.New("Multiple ,inline maps in struct " + st.String()) + } + if field.Type.Key() != reflect.TypeOf("") { + return nil, errors.New("Option ,inline needs a map with string keys in struct " + st.String()) + } + inlineMap = info.Num + case reflect.Struct: + sinfo, err := getStructInfo(field.Type) + if err != nil { + return nil, err + } + for _, finfo := range sinfo.FieldsList { + if _, found := fieldsMap[finfo.Key]; found { + msg := "Duplicated key '" + finfo.Key + "' in struct " + st.String() + return nil, errors.New(msg) + } + if finfo.Inline == nil { + finfo.Inline = []int{i, finfo.Num} + } else { + finfo.Inline = append([]int{i}, finfo.Inline...) 
+ } + fieldsMap[finfo.Key] = finfo + fieldsList = append(fieldsList, finfo) + } + default: + panic("Option ,inline needs a struct value or map field") + } + continue + } + + if tag != "" { + info.Key = tag + } else { + info.Key = strings.ToLower(field.Name) + } + + if _, found = fieldsMap[info.Key]; found { + msg := "Duplicated key '" + info.Key + "' in struct " + st.String() + return nil, errors.New(msg) + } + + fieldsList = append(fieldsList, info) + fieldsMap[info.Key] = info + } + sinfo = &structInfo{ + fieldsMap, + fieldsList, + inlineMap, + reflect.New(st).Elem(), + } + structMapMutex.Lock() + structMap[st] = sinfo + structMapMutex.Unlock() + return sinfo, nil +} diff --git a/vendor/gopkg.in/mgo.v2/bson/decimal.go b/vendor/gopkg.in/mgo.v2/bson/decimal.go new file mode 100644 index 00000000..3d2f7002 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bson/decimal.go @@ -0,0 +1,310 @@ +// BSON library for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package bson + +import ( + "fmt" + "strconv" + "strings" +) + +// Decimal128 holds decimal128 BSON values. +type Decimal128 struct { + h, l uint64 +} + +func (d Decimal128) String() string { + var pos int // positive sign + var e int // exponent + var h, l uint64 // significand high/low + + if d.h>>63&1 == 0 { + pos = 1 + } + + switch d.h >> 58 & (1<<5 - 1) { + case 0x1F: + return "NaN" + case 0x1E: + return "-Inf"[pos:] + } + + l = d.l + if d.h>>61&3 == 3 { + // Bits: 1*sign 2*ignored 14*exponent 111*significand. + // Implicit 0b100 prefix in significand. + e = int(d.h>>47&(1<<14-1)) - 6176 + //h = 4<<47 | d.h&(1<<47-1) + // Spec says all of these values are out of range. + h, l = 0, 0 + } else { + // Bits: 1*sign 14*exponent 113*significand + e = int(d.h>>49&(1<<14-1)) - 6176 + h = d.h & (1<<49 - 1) + } + + // Would be handled by the logic below, but that's trivial and common. + if h == 0 && l == 0 && e == 0 { + return "-0"[pos:] + } + + var repr [48]byte // Loop 5 times over 9 digits plus dot, negative sign, and leading zero. 
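+	// Digits are produced right to left: divmod repeatedly splits the 128-bit
+	// significand by 1e9 and each remainder is written into repr from the end
+	// of the buffer towards the front, with the dot inserted where needed.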
+ var last = len(repr) + var i = len(repr) + var dot = len(repr) + e + var rem uint32 +Loop: + for d9 := 0; d9 < 5; d9++ { + h, l, rem = divmod(h, l, 1e9) + for d1 := 0; d1 < 9; d1++ { + // Handle "-0.0", "0.00123400", "-1.00E-6", "1.050E+3", etc. + if i < len(repr) && (dot == i || l == 0 && h == 0 && rem > 0 && rem < 10 && (dot < i-6 || e > 0)) { + e += len(repr) - i + i-- + repr[i] = '.' + last = i - 1 + dot = len(repr) // Unmark. + } + c := '0' + byte(rem%10) + rem /= 10 + i-- + repr[i] = c + // Handle "0E+3", "1E+3", etc. + if l == 0 && h == 0 && rem == 0 && i == len(repr)-1 && (dot < i-5 || e > 0) { + last = i + break Loop + } + if c != '0' { + last = i + } + // Break early. Works without it, but why. + if dot > i && l == 0 && h == 0 && rem == 0 { + break Loop + } + } + } + repr[last-1] = '-' + last-- + + if e > 0 { + return string(repr[last+pos:]) + "E+" + strconv.Itoa(e) + } + if e < 0 { + return string(repr[last+pos:]) + "E" + strconv.Itoa(e) + } + return string(repr[last+pos:]) +} + +func divmod(h, l uint64, div uint32) (qh, ql uint64, rem uint32) { + div64 := uint64(div) + a := h >> 32 + aq := a / div64 + ar := a % div64 + b := ar<<32 + h&(1<<32-1) + bq := b / div64 + br := b % div64 + c := br<<32 + l>>32 + cq := c / div64 + cr := c % div64 + d := cr<<32 + l&(1<<32-1) + dq := d / div64 + dr := d % div64 + return (aq<<32 | bq), (cq<<32 | dq), uint32(dr) +} + +var dNaN = Decimal128{0x1F << 58, 0} +var dPosInf = Decimal128{0x1E << 58, 0} +var dNegInf = Decimal128{0x3E << 58, 0} + +func dErr(s string) (Decimal128, error) { + return dNaN, fmt.Errorf("cannot parse %q as a decimal128", s) +} + +func ParseDecimal128(s string) (Decimal128, error) { + orig := s + if s == "" { + return dErr(orig) + } + neg := s[0] == '-' + if neg || s[0] == '+' { + s = s[1:] + } + + if (len(s) == 3 || len(s) == 8) && (s[0] == 'N' || s[0] == 'n' || s[0] == 'I' || s[0] == 'i') { + if s == "NaN" || s == "nan" || strings.EqualFold(s, "nan") { + return dNaN, nil + } + if s == "Inf" || s == "inf" || strings.EqualFold(s, "inf") || strings.EqualFold(s, "infinity") { + if neg { + return dNegInf, nil + } + return dPosInf, nil + } + return dErr(orig) + } + + var h, l uint64 + var e int + + var add, ovr uint32 + var mul uint32 = 1 + var dot = -1 + var digits = 0 + var i = 0 + for i < len(s) { + c := s[i] + if mul == 1e9 { + h, l, ovr = muladd(h, l, mul, add) + mul, add = 1, 0 + if ovr > 0 || h&((1<<15-1)<<49) > 0 { + return dErr(orig) + } + } + if c >= '0' && c <= '9' { + i++ + if c > '0' || digits > 0 { + digits++ + } + if digits > 34 { + if c == '0' { + // Exact rounding. + e++ + continue + } + return dErr(orig) + } + mul *= 10 + add *= 10 + add += uint32(c - '0') + continue + } + if c == '.' { + i++ + if dot >= 0 || i == 1 && len(s) == 1 { + return dErr(orig) + } + if i == len(s) { + break + } + if s[i] < '0' || s[i] > '9' || e > 0 { + return dErr(orig) + } + dot = i + continue + } + break + } + if i == 0 { + return dErr(orig) + } + if mul > 1 { + h, l, ovr = muladd(h, l, mul, add) + if ovr > 0 || h&((1<<15-1)<<49) > 0 { + return dErr(orig) + } + } + if dot >= 0 { + e += dot - i + } + if i+1 < len(s) && (s[i] == 'E' || s[i] == 'e') { + i++ + eneg := s[i] == '-' + if eneg || s[i] == '+' { + i++ + if i == len(s) { + return dErr(orig) + } + } + n := 0 + for i < len(s) && n < 1e4 { + c := s[i] + i++ + if c < '0' || c > '9' { + return dErr(orig) + } + n *= 10 + n += int(c - '0') + } + if eneg { + n = -n + } + e += n + for e < -6176 { + // Subnormal. 
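+			// The exponent is below the representable minimum of -6176, so the
+			// significand is scaled down by powers of ten; any non-zero
+			// remainder would lose precision and the string is rejected.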
+ var div uint32 = 1 + for div < 1e9 && e < -6176 { + div *= 10 + e++ + } + var rem uint32 + h, l, rem = divmod(h, l, div) + if rem > 0 { + return dErr(orig) + } + } + for e > 6111 { + // Clamped. + var mul uint32 = 1 + for mul < 1e9 && e > 6111 { + mul *= 10 + e-- + } + h, l, ovr = muladd(h, l, mul, 0) + if ovr > 0 || h&((1<<15-1)<<49) > 0 { + return dErr(orig) + } + } + if e < -6176 || e > 6111 { + return dErr(orig) + } + } + + if i < len(s) { + return dErr(orig) + } + + h |= uint64(e+6176) & uint64(1<<14-1) << 49 + if neg { + h |= 1 << 63 + } + return Decimal128{h, l}, nil +} + +func muladd(h, l uint64, mul uint32, add uint32) (resh, resl uint64, overflow uint32) { + mul64 := uint64(mul) + a := mul64 * (l & (1<<32 - 1)) + b := a>>32 + mul64*(l>>32) + c := b>>32 + mul64*(h&(1<<32-1)) + d := c>>32 + mul64*(h>>32) + + a = a&(1<<32-1) + uint64(add) + b = b&(1<<32-1) + a>>32 + c = c&(1<<32-1) + b>>32 + d = d&(1<<32-1) + c>>32 + + return (d<<32 | c&(1<<32-1)), (b<<32 | a&(1<<32-1)), uint32(d >> 32) +} diff --git a/vendor/gopkg.in/mgo.v2/bson/decode.go b/vendor/gopkg.in/mgo.v2/bson/decode.go new file mode 100644 index 00000000..7c2d8416 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bson/decode.go @@ -0,0 +1,849 @@ +// BSON library for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// gobson - BSON library for Go. + +package bson + +import ( + "fmt" + "math" + "net/url" + "reflect" + "strconv" + "sync" + "time" +) + +type decoder struct { + in []byte + i int + docType reflect.Type +} + +var typeM = reflect.TypeOf(M{}) + +func newDecoder(in []byte) *decoder { + return &decoder{in, 0, typeM} +} + +// -------------------------------------------------------------------------- +// Some helper functions. + +func corrupted() { + panic("Document is corrupted") +} + +func settableValueOf(i interface{}) reflect.Value { + v := reflect.ValueOf(i) + sv := reflect.New(v.Type()).Elem() + sv.Set(v) + return sv +} + +// -------------------------------------------------------------------------- +// Unmarshaling of documents. 
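+
+// The machinery below dispatches to user-defined SetBSON methods. As an
+// illustrative sketch (not part of the upstream sources), a type opts into
+// this path by satisfying the Setter interface, e.g.:
+//
+//	type lowercased string
+//
+//	func (l *lowercased) SetBSON(raw Raw) error {
+//		var s string
+//		if err := raw.Unmarshal(&s); err != nil {
+//			return err
+//		}
+//		*l = lowercased(strings.ToLower(s))
+//		return nil
+//	}
+//
+// getSetter below recognizes such types (on the value or on its address) and
+// hands them the raw element instead of decoding it directly.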
+ +const ( + setterUnknown = iota + setterNone + setterType + setterAddr +) + +var setterStyles map[reflect.Type]int +var setterIface reflect.Type +var setterMutex sync.RWMutex + +func init() { + var iface Setter + setterIface = reflect.TypeOf(&iface).Elem() + setterStyles = make(map[reflect.Type]int) +} + +func setterStyle(outt reflect.Type) int { + setterMutex.RLock() + style := setterStyles[outt] + setterMutex.RUnlock() + if style == setterUnknown { + setterMutex.Lock() + defer setterMutex.Unlock() + if outt.Implements(setterIface) { + setterStyles[outt] = setterType + } else if reflect.PtrTo(outt).Implements(setterIface) { + setterStyles[outt] = setterAddr + } else { + setterStyles[outt] = setterNone + } + style = setterStyles[outt] + } + return style +} + +func getSetter(outt reflect.Type, out reflect.Value) Setter { + style := setterStyle(outt) + if style == setterNone { + return nil + } + if style == setterAddr { + if !out.CanAddr() { + return nil + } + out = out.Addr() + } else if outt.Kind() == reflect.Ptr && out.IsNil() { + out.Set(reflect.New(outt.Elem())) + } + return out.Interface().(Setter) +} + +func clearMap(m reflect.Value) { + var none reflect.Value + for _, k := range m.MapKeys() { + m.SetMapIndex(k, none) + } +} + +func (d *decoder) readDocTo(out reflect.Value) { + var elemType reflect.Type + outt := out.Type() + outk := outt.Kind() + + for { + if outk == reflect.Ptr && out.IsNil() { + out.Set(reflect.New(outt.Elem())) + } + if setter := getSetter(outt, out); setter != nil { + var raw Raw + d.readDocTo(reflect.ValueOf(&raw)) + err := setter.SetBSON(raw) + if _, ok := err.(*TypeError); err != nil && !ok { + panic(err) + } + return + } + if outk == reflect.Ptr { + out = out.Elem() + outt = out.Type() + outk = out.Kind() + continue + } + break + } + + var fieldsMap map[string]fieldInfo + var inlineMap reflect.Value + start := d.i + + origout := out + if outk == reflect.Interface { + if d.docType.Kind() == reflect.Map { + mv := reflect.MakeMap(d.docType) + out.Set(mv) + out = mv + } else { + dv := reflect.New(d.docType).Elem() + out.Set(dv) + out = dv + } + outt = out.Type() + outk = outt.Kind() + } + + docType := d.docType + keyType := typeString + convertKey := false + switch outk { + case reflect.Map: + keyType = outt.Key() + if keyType.Kind() != reflect.String { + panic("BSON map must have string keys. 
Got: " + outt.String()) + } + if keyType != typeString { + convertKey = true + } + elemType = outt.Elem() + if elemType == typeIface { + d.docType = outt + } + if out.IsNil() { + out.Set(reflect.MakeMap(out.Type())) + } else if out.Len() > 0 { + clearMap(out) + } + case reflect.Struct: + if outt != typeRaw { + sinfo, err := getStructInfo(out.Type()) + if err != nil { + panic(err) + } + fieldsMap = sinfo.FieldsMap + out.Set(sinfo.Zero) + if sinfo.InlineMap != -1 { + inlineMap = out.Field(sinfo.InlineMap) + if !inlineMap.IsNil() && inlineMap.Len() > 0 { + clearMap(inlineMap) + } + elemType = inlineMap.Type().Elem() + if elemType == typeIface { + d.docType = inlineMap.Type() + } + } + } + case reflect.Slice: + switch outt.Elem() { + case typeDocElem: + origout.Set(d.readDocElems(outt)) + return + case typeRawDocElem: + origout.Set(d.readRawDocElems(outt)) + return + } + fallthrough + default: + panic("Unsupported document type for unmarshalling: " + out.Type().String()) + } + + end := int(d.readInt32()) + end += d.i - 4 + if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' { + corrupted() + } + for d.in[d.i] != '\x00' { + kind := d.readByte() + name := d.readCStr() + if d.i >= end { + corrupted() + } + + switch outk { + case reflect.Map: + e := reflect.New(elemType).Elem() + if d.readElemTo(e, kind) { + k := reflect.ValueOf(name) + if convertKey { + k = k.Convert(keyType) + } + out.SetMapIndex(k, e) + } + case reflect.Struct: + if outt == typeRaw { + d.dropElem(kind) + } else { + if info, ok := fieldsMap[name]; ok { + if info.Inline == nil { + d.readElemTo(out.Field(info.Num), kind) + } else { + d.readElemTo(out.FieldByIndex(info.Inline), kind) + } + } else if inlineMap.IsValid() { + if inlineMap.IsNil() { + inlineMap.Set(reflect.MakeMap(inlineMap.Type())) + } + e := reflect.New(elemType).Elem() + if d.readElemTo(e, kind) { + inlineMap.SetMapIndex(reflect.ValueOf(name), e) + } + } else { + d.dropElem(kind) + } + } + case reflect.Slice: + } + + if d.i >= end { + corrupted() + } + } + d.i++ // '\x00' + if d.i != end { + corrupted() + } + d.docType = docType + + if outt == typeRaw { + out.Set(reflect.ValueOf(Raw{0x03, d.in[start:d.i]})) + } +} + +func (d *decoder) readArrayDocTo(out reflect.Value) { + end := int(d.readInt32()) + end += d.i - 4 + if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' { + corrupted() + } + i := 0 + l := out.Len() + for d.in[d.i] != '\x00' { + if i >= l { + panic("Length mismatch on array field") + } + kind := d.readByte() + for d.i < end && d.in[d.i] != '\x00' { + d.i++ + } + if d.i >= end { + corrupted() + } + d.i++ + d.readElemTo(out.Index(i), kind) + if d.i >= end { + corrupted() + } + i++ + } + if i != l { + panic("Length mismatch on array field") + } + d.i++ // '\x00' + if d.i != end { + corrupted() + } +} + +func (d *decoder) readSliceDoc(t reflect.Type) interface{} { + tmp := make([]reflect.Value, 0, 8) + elemType := t.Elem() + if elemType == typeRawDocElem { + d.dropElem(0x04) + return reflect.Zero(t).Interface() + } + + end := int(d.readInt32()) + end += d.i - 4 + if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' { + corrupted() + } + for d.in[d.i] != '\x00' { + kind := d.readByte() + for d.i < end && d.in[d.i] != '\x00' { + d.i++ + } + if d.i >= end { + corrupted() + } + d.i++ + e := reflect.New(elemType).Elem() + if d.readElemTo(e, kind) { + tmp = append(tmp, e) + } + if d.i >= end { + corrupted() + } + } + d.i++ // '\x00' + if d.i != end { + corrupted() + } + + n := len(tmp) + slice := reflect.MakeSlice(t, n, n) + for i := 0; i != n; 
i++ { + slice.Index(i).Set(tmp[i]) + } + return slice.Interface() +} + +var typeSlice = reflect.TypeOf([]interface{}{}) +var typeIface = typeSlice.Elem() + +func (d *decoder) readDocElems(typ reflect.Type) reflect.Value { + docType := d.docType + d.docType = typ + slice := make([]DocElem, 0, 8) + d.readDocWith(func(kind byte, name string) { + e := DocElem{Name: name} + v := reflect.ValueOf(&e.Value) + if d.readElemTo(v.Elem(), kind) { + slice = append(slice, e) + } + }) + slicev := reflect.New(typ).Elem() + slicev.Set(reflect.ValueOf(slice)) + d.docType = docType + return slicev +} + +func (d *decoder) readRawDocElems(typ reflect.Type) reflect.Value { + docType := d.docType + d.docType = typ + slice := make([]RawDocElem, 0, 8) + d.readDocWith(func(kind byte, name string) { + e := RawDocElem{Name: name} + v := reflect.ValueOf(&e.Value) + if d.readElemTo(v.Elem(), kind) { + slice = append(slice, e) + } + }) + slicev := reflect.New(typ).Elem() + slicev.Set(reflect.ValueOf(slice)) + d.docType = docType + return slicev +} + +func (d *decoder) readDocWith(f func(kind byte, name string)) { + end := int(d.readInt32()) + end += d.i - 4 + if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' { + corrupted() + } + for d.in[d.i] != '\x00' { + kind := d.readByte() + name := d.readCStr() + if d.i >= end { + corrupted() + } + f(kind, name) + if d.i >= end { + corrupted() + } + } + d.i++ // '\x00' + if d.i != end { + corrupted() + } +} + +// -------------------------------------------------------------------------- +// Unmarshaling of individual elements within a document. + +var blackHole = settableValueOf(struct{}{}) + +func (d *decoder) dropElem(kind byte) { + d.readElemTo(blackHole, kind) +} + +// Attempt to decode an element from the document and put it into out. +// If the types are not compatible, the returned ok value will be +// false and out will be unchanged. +func (d *decoder) readElemTo(out reflect.Value, kind byte) (good bool) { + + start := d.i + + if kind == 0x03 { + // Delegate unmarshaling of documents. + outt := out.Type() + outk := out.Kind() + switch outk { + case reflect.Interface, reflect.Ptr, reflect.Struct, reflect.Map: + d.readDocTo(out) + return true + } + if setterStyle(outt) != setterNone { + d.readDocTo(out) + return true + } + if outk == reflect.Slice { + switch outt.Elem() { + case typeDocElem: + out.Set(d.readDocElems(outt)) + case typeRawDocElem: + out.Set(d.readRawDocElems(outt)) + default: + d.readDocTo(blackHole) + } + return true + } + d.readDocTo(blackHole) + return true + } + + var in interface{} + + switch kind { + case 0x01: // Float64 + in = d.readFloat64() + case 0x02: // UTF-8 string + in = d.readStr() + case 0x03: // Document + panic("Can't happen. Handled above.") + case 0x04: // Array + outt := out.Type() + if setterStyle(outt) != setterNone { + // Skip the value so its data is handed to the setter below. + d.dropElem(kind) + break + } + for outt.Kind() == reflect.Ptr { + outt = outt.Elem() + } + switch outt.Kind() { + case reflect.Array: + d.readArrayDocTo(out) + return true + case reflect.Slice: + in = d.readSliceDoc(outt) + default: + in = d.readSliceDoc(typeSlice) + } + case 0x05: // Binary + b := d.readBinary() + if b.Kind == 0x00 || b.Kind == 0x02 { + in = b.Data + } else { + in = b + } + case 0x06: // Undefined (obsolete, but still seen in the wild) + in = Undefined + case 0x07: // ObjectId + in = ObjectId(d.readBytes(12)) + case 0x08: // Bool + in = d.readBool() + case 0x09: // Timestamp + // MongoDB handles timestamps as milliseconds. 
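+		// (Element 0x09 is the BSON UTC datetime: an int64 count of
+		// milliseconds since the Unix epoch. The sentinel below is the
+		// zero time.Time expressed in those units, so such values
+		// round-trip back to time.Time{}.)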
+ i := d.readInt64() + if i == -62135596800000 { + in = time.Time{} // In UTC for convenience. + } else { + in = time.Unix(i/1e3, i%1e3*1e6) + } + case 0x0A: // Nil + in = nil + case 0x0B: // RegEx + in = d.readRegEx() + case 0x0C: + in = DBPointer{Namespace: d.readStr(), Id: ObjectId(d.readBytes(12))} + case 0x0D: // JavaScript without scope + in = JavaScript{Code: d.readStr()} + case 0x0E: // Symbol + in = Symbol(d.readStr()) + case 0x0F: // JavaScript with scope + d.i += 4 // Skip length + js := JavaScript{d.readStr(), make(M)} + d.readDocTo(reflect.ValueOf(js.Scope)) + in = js + case 0x10: // Int32 + in = int(d.readInt32()) + case 0x11: // Mongo-specific timestamp + in = MongoTimestamp(d.readInt64()) + case 0x12: // Int64 + in = d.readInt64() + case 0x13: // Decimal128 + in = Decimal128{ + l: uint64(d.readInt64()), + h: uint64(d.readInt64()), + } + case 0x7F: // Max key + in = MaxKey + case 0xFF: // Min key + in = MinKey + default: + panic(fmt.Sprintf("Unknown element kind (0x%02X)", kind)) + } + + outt := out.Type() + + if outt == typeRaw { + out.Set(reflect.ValueOf(Raw{kind, d.in[start:d.i]})) + return true + } + + if setter := getSetter(outt, out); setter != nil { + err := setter.SetBSON(Raw{kind, d.in[start:d.i]}) + if err == SetZero { + out.Set(reflect.Zero(outt)) + return true + } + if err == nil { + return true + } + if _, ok := err.(*TypeError); !ok { + panic(err) + } + return false + } + + if in == nil { + out.Set(reflect.Zero(outt)) + return true + } + + outk := outt.Kind() + + // Dereference and initialize pointer if necessary. + first := true + for outk == reflect.Ptr { + if !out.IsNil() { + out = out.Elem() + } else { + elem := reflect.New(outt.Elem()) + if first { + // Only set if value is compatible. + first = false + defer func(out, elem reflect.Value) { + if good { + out.Set(elem) + } + }(out, elem) + } else { + out.Set(elem) + } + out = elem + } + outt = out.Type() + outk = outt.Kind() + } + + inv := reflect.ValueOf(in) + if outt == inv.Type() { + out.Set(inv) + return true + } + + switch outk { + case reflect.Interface: + out.Set(inv) + return true + case reflect.String: + switch inv.Kind() { + case reflect.String: + out.SetString(inv.String()) + return true + case reflect.Slice: + if b, ok := in.([]byte); ok { + out.SetString(string(b)) + return true + } + case reflect.Int, reflect.Int64: + if outt == typeJSONNumber { + out.SetString(strconv.FormatInt(inv.Int(), 10)) + return true + } + case reflect.Float64: + if outt == typeJSONNumber { + out.SetString(strconv.FormatFloat(inv.Float(), 'f', -1, 64)) + return true + } + } + case reflect.Slice, reflect.Array: + // Remember, array (0x04) slices are built with the correct + // element type. If we are here, must be a cross BSON kind + // conversion (e.g. 0x05 unmarshalling on string). 
+ if outt.Elem().Kind() != reflect.Uint8 { + break + } + switch inv.Kind() { + case reflect.String: + slice := []byte(inv.String()) + out.Set(reflect.ValueOf(slice)) + return true + case reflect.Slice: + switch outt.Kind() { + case reflect.Array: + reflect.Copy(out, inv) + case reflect.Slice: + out.SetBytes(inv.Bytes()) + } + return true + } + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + switch inv.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + out.SetInt(inv.Int()) + return true + case reflect.Float32, reflect.Float64: + out.SetInt(int64(inv.Float())) + return true + case reflect.Bool: + if inv.Bool() { + out.SetInt(1) + } else { + out.SetInt(0) + } + return true + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + panic("can't happen: no uint types in BSON (!?)") + } + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + switch inv.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + out.SetUint(uint64(inv.Int())) + return true + case reflect.Float32, reflect.Float64: + out.SetUint(uint64(inv.Float())) + return true + case reflect.Bool: + if inv.Bool() { + out.SetUint(1) + } else { + out.SetUint(0) + } + return true + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + panic("Can't happen. No uint types in BSON.") + } + case reflect.Float32, reflect.Float64: + switch inv.Kind() { + case reflect.Float32, reflect.Float64: + out.SetFloat(inv.Float()) + return true + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + out.SetFloat(float64(inv.Int())) + return true + case reflect.Bool: + if inv.Bool() { + out.SetFloat(1) + } else { + out.SetFloat(0) + } + return true + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + panic("Can't happen. No uint types in BSON?") + } + case reflect.Bool: + switch inv.Kind() { + case reflect.Bool: + out.SetBool(inv.Bool()) + return true + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + out.SetBool(inv.Int() != 0) + return true + case reflect.Float32, reflect.Float64: + out.SetBool(inv.Float() != 0) + return true + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + panic("Can't happen. No uint types in BSON?") + } + case reflect.Struct: + if outt == typeURL && inv.Kind() == reflect.String { + u, err := url.Parse(inv.String()) + if err != nil { + panic(err) + } + out.Set(reflect.ValueOf(u).Elem()) + return true + } + if outt == typeBinary { + if b, ok := in.([]byte); ok { + out.Set(reflect.ValueOf(Binary{Data: b})) + return true + } + } + } + + return false +} + +// -------------------------------------------------------------------------- +// Parsers of basic types. + +func (d *decoder) readRegEx() RegEx { + re := RegEx{} + re.Pattern = d.readCStr() + re.Options = d.readCStr() + return re +} + +func (d *decoder) readBinary() Binary { + l := d.readInt32() + b := Binary{} + b.Kind = d.readByte() + b.Data = d.readBytes(l) + if b.Kind == 0x02 && len(b.Data) >= 4 { + // Weird obsolete format with redundant length. 
+ b.Data = b.Data[4:] + } + return b +} + +func (d *decoder) readStr() string { + l := d.readInt32() + b := d.readBytes(l - 1) + if d.readByte() != '\x00' { + corrupted() + } + return string(b) +} + +func (d *decoder) readCStr() string { + start := d.i + end := start + l := len(d.in) + for ; end != l; end++ { + if d.in[end] == '\x00' { + break + } + } + d.i = end + 1 + if d.i > l { + corrupted() + } + return string(d.in[start:end]) +} + +func (d *decoder) readBool() bool { + b := d.readByte() + if b == 0 { + return false + } + if b == 1 { + return true + } + panic(fmt.Sprintf("encoded boolean must be 1 or 0, found %d", b)) +} + +func (d *decoder) readFloat64() float64 { + return math.Float64frombits(uint64(d.readInt64())) +} + +func (d *decoder) readInt32() int32 { + b := d.readBytes(4) + return int32((uint32(b[0]) << 0) | + (uint32(b[1]) << 8) | + (uint32(b[2]) << 16) | + (uint32(b[3]) << 24)) +} + +func (d *decoder) readInt64() int64 { + b := d.readBytes(8) + return int64((uint64(b[0]) << 0) | + (uint64(b[1]) << 8) | + (uint64(b[2]) << 16) | + (uint64(b[3]) << 24) | + (uint64(b[4]) << 32) | + (uint64(b[5]) << 40) | + (uint64(b[6]) << 48) | + (uint64(b[7]) << 56)) +} + +func (d *decoder) readByte() byte { + i := d.i + d.i++ + if d.i > len(d.in) { + corrupted() + } + return d.in[i] +} + +func (d *decoder) readBytes(length int32) []byte { + if length < 0 { + corrupted() + } + start := d.i + d.i += int(length) + if d.i < start || d.i > len(d.in) { + corrupted() + } + return d.in[start : start+int(length)] +} diff --git a/vendor/gopkg.in/mgo.v2/bson/encode.go b/vendor/gopkg.in/mgo.v2/bson/encode.go new file mode 100644 index 00000000..add39e86 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bson/encode.go @@ -0,0 +1,514 @@ +// BSON library for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// gobson - BSON library for Go. + +package bson + +import ( + "encoding/json" + "fmt" + "math" + "net/url" + "reflect" + "strconv" + "time" +) + +// -------------------------------------------------------------------------- +// Some internal infrastructure. 
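+
+// The encoder below consults user-defined GetBSON methods. As an illustrative
+// sketch (not part of the upstream sources), a type can substitute its own
+// wire representation by satisfying the Getter interface:
+//
+//	type yesNo bool
+//
+//	func (b yesNo) GetBSON() (interface{}, error) {
+//		if b {
+//			return "yes", nil
+//		}
+//		return "no", nil
+//	}
+//
+// addDoc and addElem call GetBSON before deciding how to encode such values.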
+ +var ( + typeBinary = reflect.TypeOf(Binary{}) + typeObjectId = reflect.TypeOf(ObjectId("")) + typeDBPointer = reflect.TypeOf(DBPointer{"", ObjectId("")}) + typeSymbol = reflect.TypeOf(Symbol("")) + typeMongoTimestamp = reflect.TypeOf(MongoTimestamp(0)) + typeOrderKey = reflect.TypeOf(MinKey) + typeDocElem = reflect.TypeOf(DocElem{}) + typeRawDocElem = reflect.TypeOf(RawDocElem{}) + typeRaw = reflect.TypeOf(Raw{}) + typeURL = reflect.TypeOf(url.URL{}) + typeTime = reflect.TypeOf(time.Time{}) + typeString = reflect.TypeOf("") + typeJSONNumber = reflect.TypeOf(json.Number("")) +) + +const itoaCacheSize = 32 + +var itoaCache []string + +func init() { + itoaCache = make([]string, itoaCacheSize) + for i := 0; i != itoaCacheSize; i++ { + itoaCache[i] = strconv.Itoa(i) + } +} + +func itoa(i int) string { + if i < itoaCacheSize { + return itoaCache[i] + } + return strconv.Itoa(i) +} + +// -------------------------------------------------------------------------- +// Marshaling of the document value itself. + +type encoder struct { + out []byte +} + +func (e *encoder) addDoc(v reflect.Value) { + for { + if vi, ok := v.Interface().(Getter); ok { + getv, err := vi.GetBSON() + if err != nil { + panic(err) + } + v = reflect.ValueOf(getv) + continue + } + if v.Kind() == reflect.Ptr { + v = v.Elem() + continue + } + break + } + + if v.Type() == typeRaw { + raw := v.Interface().(Raw) + if raw.Kind != 0x03 && raw.Kind != 0x00 { + panic("Attempted to marshal Raw kind " + strconv.Itoa(int(raw.Kind)) + " as a document") + } + if len(raw.Data) == 0 { + panic("Attempted to marshal empty Raw document") + } + e.addBytes(raw.Data...) + return + } + + start := e.reserveInt32() + + switch v.Kind() { + case reflect.Map: + e.addMap(v) + case reflect.Struct: + e.addStruct(v) + case reflect.Array, reflect.Slice: + e.addSlice(v) + default: + panic("Can't marshal " + v.Type().String() + " as a BSON document") + } + + e.addBytes(0) + e.setInt32(start, int32(len(e.out)-start)) +} + +func (e *encoder) addMap(v reflect.Value) { + for _, k := range v.MapKeys() { + e.addElem(k.String(), v.MapIndex(k), false) + } +} + +func (e *encoder) addStruct(v reflect.Value) { + sinfo, err := getStructInfo(v.Type()) + if err != nil { + panic(err) + } + var value reflect.Value + if sinfo.InlineMap >= 0 { + m := v.Field(sinfo.InlineMap) + if m.Len() > 0 { + for _, k := range m.MapKeys() { + ks := k.String() + if _, found := sinfo.FieldsMap[ks]; found { + panic(fmt.Sprintf("Can't have key %q in inlined map; conflicts with struct field", ks)) + } + e.addElem(ks, m.MapIndex(k), false) + } + } + } + for _, info := range sinfo.FieldsList { + if info.Inline == nil { + value = v.Field(info.Num) + } else { + value = v.FieldByIndex(info.Inline) + } + if info.OmitEmpty && isZero(value) { + continue + } + e.addElem(info.Key, value, info.MinSize) + } +} + +func isZero(v reflect.Value) bool { + switch v.Kind() { + case reflect.String: + return len(v.String()) == 0 + case reflect.Ptr, reflect.Interface: + return v.IsNil() + case reflect.Slice: + return v.Len() == 0 + case reflect.Map: + return v.Len() == 0 + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Bool: + return !v.Bool() + case reflect.Struct: + vt := v.Type() + if vt == typeTime { + return v.Interface().(time.Time).IsZero() + } + for i := 0; i 
< v.NumField(); i++ { + if vt.Field(i).PkgPath != "" && !vt.Field(i).Anonymous { + continue // Private field + } + if !isZero(v.Field(i)) { + return false + } + } + return true + } + return false +} + +func (e *encoder) addSlice(v reflect.Value) { + vi := v.Interface() + if d, ok := vi.(D); ok { + for _, elem := range d { + e.addElem(elem.Name, reflect.ValueOf(elem.Value), false) + } + return + } + if d, ok := vi.(RawD); ok { + for _, elem := range d { + e.addElem(elem.Name, reflect.ValueOf(elem.Value), false) + } + return + } + l := v.Len() + et := v.Type().Elem() + if et == typeDocElem { + for i := 0; i < l; i++ { + elem := v.Index(i).Interface().(DocElem) + e.addElem(elem.Name, reflect.ValueOf(elem.Value), false) + } + return + } + if et == typeRawDocElem { + for i := 0; i < l; i++ { + elem := v.Index(i).Interface().(RawDocElem) + e.addElem(elem.Name, reflect.ValueOf(elem.Value), false) + } + return + } + for i := 0; i < l; i++ { + e.addElem(itoa(i), v.Index(i), false) + } +} + +// -------------------------------------------------------------------------- +// Marshaling of elements in a document. + +func (e *encoder) addElemName(kind byte, name string) { + e.addBytes(kind) + e.addBytes([]byte(name)...) + e.addBytes(0) +} + +func (e *encoder) addElem(name string, v reflect.Value, minSize bool) { + + if !v.IsValid() { + e.addElemName(0x0A, name) + return + } + + if getter, ok := v.Interface().(Getter); ok { + getv, err := getter.GetBSON() + if err != nil { + panic(err) + } + e.addElem(name, reflect.ValueOf(getv), minSize) + return + } + + switch v.Kind() { + + case reflect.Interface: + e.addElem(name, v.Elem(), minSize) + + case reflect.Ptr: + e.addElem(name, v.Elem(), minSize) + + case reflect.String: + s := v.String() + switch v.Type() { + case typeObjectId: + if len(s) != 12 { + panic("ObjectIDs must be exactly 12 bytes long (got " + + strconv.Itoa(len(s)) + ")") + } + e.addElemName(0x07, name) + e.addBytes([]byte(s)...) + case typeSymbol: + e.addElemName(0x0E, name) + e.addStr(s) + case typeJSONNumber: + n := v.Interface().(json.Number) + if i, err := n.Int64(); err == nil { + e.addElemName(0x12, name) + e.addInt64(i) + } else if f, err := n.Float64(); err == nil { + e.addElemName(0x01, name) + e.addFloat64(f) + } else { + panic("failed to convert json.Number to a number: " + s) + } + default: + e.addElemName(0x02, name) + e.addStr(s) + } + + case reflect.Float32, reflect.Float64: + e.addElemName(0x01, name) + e.addFloat64(v.Float()) + + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + u := v.Uint() + if int64(u) < 0 { + panic("BSON has no uint64 type, and value is too large to fit correctly in an int64") + } else if u <= math.MaxInt32 && (minSize || v.Kind() <= reflect.Uint32) { + e.addElemName(0x10, name) + e.addInt32(int32(u)) + } else { + e.addElemName(0x12, name) + e.addInt64(int64(u)) + } + + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + switch v.Type() { + case typeMongoTimestamp: + e.addElemName(0x11, name) + e.addInt64(v.Int()) + + case typeOrderKey: + if v.Int() == int64(MaxKey) { + e.addElemName(0x7F, name) + } else { + e.addElemName(0xFF, name) + } + + default: + i := v.Int() + if (minSize || v.Type().Kind() != reflect.Int64) && i >= math.MinInt32 && i <= math.MaxInt32 { + // It fits into an int32, encode as such. 
+ e.addElemName(0x10, name) + e.addInt32(int32(i)) + } else { + e.addElemName(0x12, name) + e.addInt64(i) + } + } + + case reflect.Bool: + e.addElemName(0x08, name) + if v.Bool() { + e.addBytes(1) + } else { + e.addBytes(0) + } + + case reflect.Map: + e.addElemName(0x03, name) + e.addDoc(v) + + case reflect.Slice: + vt := v.Type() + et := vt.Elem() + if et.Kind() == reflect.Uint8 { + e.addElemName(0x05, name) + e.addBinary(0x00, v.Bytes()) + } else if et == typeDocElem || et == typeRawDocElem { + e.addElemName(0x03, name) + e.addDoc(v) + } else { + e.addElemName(0x04, name) + e.addDoc(v) + } + + case reflect.Array: + et := v.Type().Elem() + if et.Kind() == reflect.Uint8 { + e.addElemName(0x05, name) + if v.CanAddr() { + e.addBinary(0x00, v.Slice(0, v.Len()).Interface().([]byte)) + } else { + n := v.Len() + e.addInt32(int32(n)) + e.addBytes(0x00) + for i := 0; i < n; i++ { + el := v.Index(i) + e.addBytes(byte(el.Uint())) + } + } + } else { + e.addElemName(0x04, name) + e.addDoc(v) + } + + case reflect.Struct: + switch s := v.Interface().(type) { + + case Raw: + kind := s.Kind + if kind == 0x00 { + kind = 0x03 + } + if len(s.Data) == 0 && kind != 0x06 && kind != 0x0A && kind != 0xFF && kind != 0x7F { + panic("Attempted to marshal empty Raw document") + } + e.addElemName(kind, name) + e.addBytes(s.Data...) + + case Binary: + e.addElemName(0x05, name) + e.addBinary(s.Kind, s.Data) + + case Decimal128: + e.addElemName(0x13, name) + e.addInt64(int64(s.l)) + e.addInt64(int64(s.h)) + + case DBPointer: + e.addElemName(0x0C, name) + e.addStr(s.Namespace) + if len(s.Id) != 12 { + panic("ObjectIDs must be exactly 12 bytes long (got " + + strconv.Itoa(len(s.Id)) + ")") + } + e.addBytes([]byte(s.Id)...) + + case RegEx: + e.addElemName(0x0B, name) + e.addCStr(s.Pattern) + e.addCStr(s.Options) + + case JavaScript: + if s.Scope == nil { + e.addElemName(0x0D, name) + e.addStr(s.Code) + } else { + e.addElemName(0x0F, name) + start := e.reserveInt32() + e.addStr(s.Code) + e.addDoc(reflect.ValueOf(s.Scope)) + e.setInt32(start, int32(len(e.out)-start)) + } + + case time.Time: + // MongoDB handles timestamps as milliseconds. + e.addElemName(0x09, name) + e.addInt64(s.Unix()*1000 + int64(s.Nanosecond()/1e6)) + + case url.URL: + e.addElemName(0x02, name) + e.addStr(s.String()) + + case undefined: + e.addElemName(0x06, name) + + default: + e.addElemName(0x03, name) + e.addDoc(v) + } + + default: + panic("Can't marshal " + v.Type().String() + " in a BSON document") + } +} + +// -------------------------------------------------------------------------- +// Marshaling of base types. + +func (e *encoder) addBinary(subtype byte, v []byte) { + if subtype == 0x02 { + // Wonder how that brilliant idea came to life. Obsolete, luckily. + e.addInt32(int32(len(v) + 4)) + e.addBytes(subtype) + e.addInt32(int32(len(v))) + } else { + e.addInt32(int32(len(v))) + e.addBytes(subtype) + } + e.addBytes(v...) +} + +func (e *encoder) addStr(v string) { + e.addInt32(int32(len(v) + 1)) + e.addCStr(v) +} + +func (e *encoder) addCStr(v string) { + e.addBytes([]byte(v)...) 
+ e.addBytes(0) +} + +func (e *encoder) reserveInt32() (pos int) { + pos = len(e.out) + e.addBytes(0, 0, 0, 0) + return pos +} + +func (e *encoder) setInt32(pos int, v int32) { + e.out[pos+0] = byte(v) + e.out[pos+1] = byte(v >> 8) + e.out[pos+2] = byte(v >> 16) + e.out[pos+3] = byte(v >> 24) +} + +func (e *encoder) addInt32(v int32) { + u := uint32(v) + e.addBytes(byte(u), byte(u>>8), byte(u>>16), byte(u>>24)) +} + +func (e *encoder) addInt64(v int64) { + u := uint64(v) + e.addBytes(byte(u), byte(u>>8), byte(u>>16), byte(u>>24), + byte(u>>32), byte(u>>40), byte(u>>48), byte(u>>56)) +} + +func (e *encoder) addFloat64(v float64) { + e.addInt64(int64(math.Float64bits(v))) +} + +func (e *encoder) addBytes(v ...byte) { + e.out = append(e.out, v...) +} diff --git a/vendor/gopkg.in/mgo.v2/bson/json.go b/vendor/gopkg.in/mgo.v2/bson/json.go new file mode 100644 index 00000000..09df8260 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bson/json.go @@ -0,0 +1,380 @@ +package bson + +import ( + "bytes" + "encoding/base64" + "fmt" + "gopkg.in/mgo.v2/internal/json" + "strconv" + "time" +) + +// UnmarshalJSON unmarshals a JSON value that may hold non-standard +// syntax as defined in BSON's extended JSON specification. +func UnmarshalJSON(data []byte, value interface{}) error { + d := json.NewDecoder(bytes.NewBuffer(data)) + d.Extend(&jsonExt) + return d.Decode(value) +} + +// MarshalJSON marshals a JSON value that may hold non-standard +// syntax as defined in BSON's extended JSON specification. +func MarshalJSON(value interface{}) ([]byte, error) { + var buf bytes.Buffer + e := json.NewEncoder(&buf) + e.Extend(&jsonExt) + err := e.Encode(value) + if err != nil { + return nil, err + } + return buf.Bytes(), nil +} + +// jdec is used internally by the JSON decoding functions +// so they may unmarshal functions without getting into endless +// recursion due to keyed objects. 
+func jdec(data []byte, value interface{}) error { + d := json.NewDecoder(bytes.NewBuffer(data)) + d.Extend(&funcExt) + return d.Decode(value) +} + +var jsonExt json.Extension +var funcExt json.Extension + +// TODO +// - Shell regular expressions ("/regexp/opts") + +func init() { + jsonExt.DecodeUnquotedKeys(true) + jsonExt.DecodeTrailingCommas(true) + + funcExt.DecodeFunc("BinData", "$binaryFunc", "$type", "$binary") + jsonExt.DecodeKeyed("$binary", jdecBinary) + jsonExt.DecodeKeyed("$binaryFunc", jdecBinary) + jsonExt.EncodeType([]byte(nil), jencBinarySlice) + jsonExt.EncodeType(Binary{}, jencBinaryType) + + funcExt.DecodeFunc("ISODate", "$dateFunc", "S") + funcExt.DecodeFunc("new Date", "$dateFunc", "S") + jsonExt.DecodeKeyed("$date", jdecDate) + jsonExt.DecodeKeyed("$dateFunc", jdecDate) + jsonExt.EncodeType(time.Time{}, jencDate) + + funcExt.DecodeFunc("Timestamp", "$timestamp", "t", "i") + jsonExt.DecodeKeyed("$timestamp", jdecTimestamp) + jsonExt.EncodeType(MongoTimestamp(0), jencTimestamp) + + funcExt.DecodeConst("undefined", Undefined) + + jsonExt.DecodeKeyed("$regex", jdecRegEx) + jsonExt.EncodeType(RegEx{}, jencRegEx) + + funcExt.DecodeFunc("ObjectId", "$oidFunc", "Id") + jsonExt.DecodeKeyed("$oid", jdecObjectId) + jsonExt.DecodeKeyed("$oidFunc", jdecObjectId) + jsonExt.EncodeType(ObjectId(""), jencObjectId) + + funcExt.DecodeFunc("DBRef", "$dbrefFunc", "$ref", "$id") + jsonExt.DecodeKeyed("$dbrefFunc", jdecDBRef) + + funcExt.DecodeFunc("NumberLong", "$numberLongFunc", "N") + jsonExt.DecodeKeyed("$numberLong", jdecNumberLong) + jsonExt.DecodeKeyed("$numberLongFunc", jdecNumberLong) + jsonExt.EncodeType(int64(0), jencNumberLong) + jsonExt.EncodeType(int(0), jencInt) + + funcExt.DecodeConst("MinKey", MinKey) + funcExt.DecodeConst("MaxKey", MaxKey) + jsonExt.DecodeKeyed("$minKey", jdecMinKey) + jsonExt.DecodeKeyed("$maxKey", jdecMaxKey) + jsonExt.EncodeType(orderKey(0), jencMinMaxKey) + + jsonExt.DecodeKeyed("$undefined", jdecUndefined) + jsonExt.EncodeType(Undefined, jencUndefined) + + jsonExt.Extend(&funcExt) +} + +func fbytes(format string, args ...interface{}) []byte { + var buf bytes.Buffer + fmt.Fprintf(&buf, format, args...) 
+ return buf.Bytes() +} + +func jdecBinary(data []byte) (interface{}, error) { + var v struct { + Binary []byte `json:"$binary"` + Type string `json:"$type"` + Func struct { + Binary []byte `json:"$binary"` + Type int64 `json:"$type"` + } `json:"$binaryFunc"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + + var binData []byte + var binKind int64 + if v.Type == "" && v.Binary == nil { + binData = v.Func.Binary + binKind = v.Func.Type + } else if v.Type == "" { + return v.Binary, nil + } else { + binData = v.Binary + binKind, err = strconv.ParseInt(v.Type, 0, 64) + if err != nil { + binKind = -1 + } + } + + if binKind == 0 { + return binData, nil + } + if binKind < 0 || binKind > 255 { + return nil, fmt.Errorf("invalid type in binary object: %s", data) + } + + return Binary{Kind: byte(binKind), Data: binData}, nil +} + +func jencBinarySlice(v interface{}) ([]byte, error) { + in := v.([]byte) + out := make([]byte, base64.StdEncoding.EncodedLen(len(in))) + base64.StdEncoding.Encode(out, in) + return fbytes(`{"$binary":"%s","$type":"0x0"}`, out), nil +} + +func jencBinaryType(v interface{}) ([]byte, error) { + in := v.(Binary) + out := make([]byte, base64.StdEncoding.EncodedLen(len(in.Data))) + base64.StdEncoding.Encode(out, in.Data) + return fbytes(`{"$binary":"%s","$type":"0x%x"}`, out, in.Kind), nil +} + +const jdateFormat = "2006-01-02T15:04:05.999Z" + +func jdecDate(data []byte) (interface{}, error) { + var v struct { + S string `json:"$date"` + Func struct { + S string + } `json:"$dateFunc"` + } + _ = jdec(data, &v) + if v.S == "" { + v.S = v.Func.S + } + if v.S != "" { + for _, format := range []string{jdateFormat, "2006-01-02"} { + t, err := time.Parse(format, v.S) + if err == nil { + return t, nil + } + } + return nil, fmt.Errorf("cannot parse date: %q", v.S) + } + + var vn struct { + Date struct { + N int64 `json:"$numberLong,string"` + } `json:"$date"` + Func struct { + S int64 + } `json:"$dateFunc"` + } + err := jdec(data, &vn) + if err != nil { + return nil, fmt.Errorf("cannot parse date: %q", data) + } + n := vn.Date.N + if n == 0 { + n = vn.Func.S + } + return time.Unix(n/1000, n%1000*1e6).UTC(), nil +} + +func jencDate(v interface{}) ([]byte, error) { + t := v.(time.Time) + return fbytes(`{"$date":%q}`, t.Format(jdateFormat)), nil +} + +func jdecTimestamp(data []byte) (interface{}, error) { + var v struct { + Func struct { + T int32 `json:"t"` + I int32 `json:"i"` + } `json:"$timestamp"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + return MongoTimestamp(uint64(v.Func.T)<<32 | uint64(uint32(v.Func.I))), nil +} + +func jencTimestamp(v interface{}) ([]byte, error) { + ts := uint64(v.(MongoTimestamp)) + return fbytes(`{"$timestamp":{"t":%d,"i":%d}}`, ts>>32, uint32(ts)), nil +} + +func jdecRegEx(data []byte) (interface{}, error) { + var v struct { + Regex string `json:"$regex"` + Options string `json:"$options"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + return RegEx{v.Regex, v.Options}, nil +} + +func jencRegEx(v interface{}) ([]byte, error) { + re := v.(RegEx) + type regex struct { + Regex string `json:"$regex"` + Options string `json:"$options"` + } + return json.Marshal(regex{re.Pattern, re.Options}) +} + +func jdecObjectId(data []byte) (interface{}, error) { + var v struct { + Id string `json:"$oid"` + Func struct { + Id string + } `json:"$oidFunc"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + if v.Id == "" { + v.Id = v.Func.Id + } + return ObjectIdHex(v.Id), nil +} + +func 
jencObjectId(v interface{}) ([]byte, error) { + return fbytes(`{"$oid":"%s"}`, v.(ObjectId).Hex()), nil +} + +func jdecDBRef(data []byte) (interface{}, error) { + // TODO Support unmarshaling $ref and $id into the input value. + var v struct { + Obj map[string]interface{} `json:"$dbrefFunc"` + } + // TODO Fix this. Must not be required. + v.Obj = make(map[string]interface{}) + err := jdec(data, &v) + if err != nil { + return nil, err + } + return v.Obj, nil +} + +func jdecNumberLong(data []byte) (interface{}, error) { + var v struct { + N int64 `json:"$numberLong,string"` + Func struct { + N int64 `json:",string"` + } `json:"$numberLongFunc"` + } + var vn struct { + N int64 `json:"$numberLong"` + Func struct { + N int64 + } `json:"$numberLongFunc"` + } + err := jdec(data, &v) + if err != nil { + err = jdec(data, &vn) + v.N = vn.N + v.Func.N = vn.Func.N + } + if err != nil { + return nil, err + } + if v.N != 0 { + return v.N, nil + } + return v.Func.N, nil +} + +func jencNumberLong(v interface{}) ([]byte, error) { + n := v.(int64) + f := `{"$numberLong":"%d"}` + if n <= 1<<53 { + f = `{"$numberLong":%d}` + } + return fbytes(f, n), nil +} + +func jencInt(v interface{}) ([]byte, error) { + n := v.(int) + f := `{"$numberLong":"%d"}` + if int64(n) <= 1<<53 { + f = `%d` + } + return fbytes(f, n), nil +} + +func jdecMinKey(data []byte) (interface{}, error) { + var v struct { + N int64 `json:"$minKey"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + if v.N != 1 { + return nil, fmt.Errorf("invalid $minKey object: %s", data) + } + return MinKey, nil +} + +func jdecMaxKey(data []byte) (interface{}, error) { + var v struct { + N int64 `json:"$maxKey"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + if v.N != 1 { + return nil, fmt.Errorf("invalid $maxKey object: %s", data) + } + return MaxKey, nil +} + +func jencMinMaxKey(v interface{}) ([]byte, error) { + switch v.(orderKey) { + case MinKey: + return []byte(`{"$minKey":1}`), nil + case MaxKey: + return []byte(`{"$maxKey":1}`), nil + } + panic(fmt.Sprintf("invalid $minKey/$maxKey value: %d", v)) +} + +func jdecUndefined(data []byte) (interface{}, error) { + var v struct { + B bool `json:"$undefined"` + } + err := jdec(data, &v) + if err != nil { + return nil, err + } + if !v.B { + return nil, fmt.Errorf("invalid $undefined object: %s", data) + } + return Undefined, nil +} + +func jencUndefined(v interface{}) ([]byte, error) { + return []byte(`{"$undefined":true}`), nil +} diff --git a/vendor/gopkg.in/mgo.v2/bulk.go b/vendor/gopkg.in/mgo.v2/bulk.go new file mode 100644 index 00000000..072a5206 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/bulk.go @@ -0,0 +1,351 @@ +package mgo + +import ( + "bytes" + "sort" + + "gopkg.in/mgo.v2/bson" +) + +// Bulk represents an operation that can be prepared with several +// orthogonal changes before being delivered to the server. +// +// MongoDB servers older than version 2.6 do not have proper support for bulk +// operations, so the driver attempts to map its API as much as possible into +// the functionality that works. In particular, in those releases updates and +// removals are sent individually, and inserts are sent in bulk but have +// suboptimal error reporting compared to more recent versions of the server. +// See the documentation of BulkErrorCase for details on that. 
+// +// Relevant documentation: +// +// http://blog.mongodb.org/post/84922794768/mongodbs-new-bulk-api +// +type Bulk struct { + c *Collection + opcount int + actions []bulkAction + ordered bool +} + +type bulkOp int + +const ( + bulkInsert bulkOp = iota + 1 + bulkUpdate + bulkUpdateAll + bulkRemove +) + +type bulkAction struct { + op bulkOp + docs []interface{} + idxs []int +} + +type bulkUpdateOp []interface{} +type bulkDeleteOp []interface{} + +// BulkResult holds the results for a bulk operation. +type BulkResult struct { + Matched int + Modified int // Available only for MongoDB 2.6+ + + // Be conservative while we understand exactly how to report these + // results in a useful and convenient way, and also how to emulate + // them with prior servers. + private bool +} + +// BulkError holds an error returned from running a Bulk operation. +// Individual errors may be obtained and inspected via the Cases method. +type BulkError struct { + ecases []BulkErrorCase +} + +func (e *BulkError) Error() string { + if len(e.ecases) == 0 { + return "invalid BulkError instance: no errors" + } + if len(e.ecases) == 1 { + return e.ecases[0].Err.Error() + } + msgs := make([]string, 0, len(e.ecases)) + seen := make(map[string]bool) + for _, ecase := range e.ecases { + msg := ecase.Err.Error() + if !seen[msg] { + seen[msg] = true + msgs = append(msgs, msg) + } + } + if len(msgs) == 1 { + return msgs[0] + } + var buf bytes.Buffer + buf.WriteString("multiple errors in bulk operation:\n") + for _, msg := range msgs { + buf.WriteString(" - ") + buf.WriteString(msg) + buf.WriteByte('\n') + } + return buf.String() +} + +type bulkErrorCases []BulkErrorCase + +func (slice bulkErrorCases) Len() int { return len(slice) } +func (slice bulkErrorCases) Less(i, j int) bool { return slice[i].Index < slice[j].Index } +func (slice bulkErrorCases) Swap(i, j int) { slice[i], slice[j] = slice[j], slice[i] } + +// BulkErrorCase holds an individual error found while attempting a single change +// within a bulk operation, and the position in which it was enqueued. +// +// MongoDB servers older than version 2.6 do not have proper support for bulk +// operations, so the driver attempts to map its API as much as possible into +// the functionality that works. In particular, only the last error is reported +// for bulk inserts and without any positional information, so the Index +// field is set to -1 in these cases. +type BulkErrorCase struct { + Index int // Position of operation that failed, or -1 if unknown. + Err error +} + +// Cases returns all individual errors found while attempting the requested changes. +// +// See the documentation of BulkErrorCase for limitations in older MongoDB releases. +func (e *BulkError) Cases() []BulkErrorCase { + return e.ecases +} + +// Bulk returns a value to prepare the execution of a bulk operation. +func (c *Collection) Bulk() *Bulk { + return &Bulk{c: c, ordered: true} +} + +// Unordered puts the bulk operation in unordered mode. +// +// In unordered mode the indvidual operations may be sent +// out of order, which means latter operations may proceed +// even if prior ones have failed. 
+func (b *Bulk) Unordered() { + b.ordered = false +} + +func (b *Bulk) action(op bulkOp, opcount int) *bulkAction { + var action *bulkAction + if len(b.actions) > 0 && b.actions[len(b.actions)-1].op == op { + action = &b.actions[len(b.actions)-1] + } else if !b.ordered { + for i := range b.actions { + if b.actions[i].op == op { + action = &b.actions[i] + break + } + } + } + if action == nil { + b.actions = append(b.actions, bulkAction{op: op}) + action = &b.actions[len(b.actions)-1] + } + for i := 0; i < opcount; i++ { + action.idxs = append(action.idxs, b.opcount) + b.opcount++ + } + return action +} + +// Insert queues up the provided documents for insertion. +func (b *Bulk) Insert(docs ...interface{}) { + action := b.action(bulkInsert, len(docs)) + action.docs = append(action.docs, docs...) +} + +// Remove queues up the provided selectors for removing matching documents. +// Each selector will remove only a single matching document. +func (b *Bulk) Remove(selectors ...interface{}) { + action := b.action(bulkRemove, len(selectors)) + for _, selector := range selectors { + if selector == nil { + selector = bson.D{} + } + action.docs = append(action.docs, &deleteOp{ + Collection: b.c.FullName, + Selector: selector, + Flags: 1, + Limit: 1, + }) + } +} + +// RemoveAll queues up the provided selectors for removing all matching documents. +// Each selector will remove all matching documents. +func (b *Bulk) RemoveAll(selectors ...interface{}) { + action := b.action(bulkRemove, len(selectors)) + for _, selector := range selectors { + if selector == nil { + selector = bson.D{} + } + action.docs = append(action.docs, &deleteOp{ + Collection: b.c.FullName, + Selector: selector, + Flags: 0, + Limit: 0, + }) + } +} + +// Update queues up the provided pairs of updating instructions. +// The first element of each pair selects which documents must be +// updated, and the second element defines how to update it. +// Each pair matches exactly one document for updating at most. +func (b *Bulk) Update(pairs ...interface{}) { + if len(pairs)%2 != 0 { + panic("Bulk.Update requires an even number of parameters") + } + action := b.action(bulkUpdate, len(pairs)/2) + for i := 0; i < len(pairs); i += 2 { + selector := pairs[i] + if selector == nil { + selector = bson.D{} + } + action.docs = append(action.docs, &updateOp{ + Collection: b.c.FullName, + Selector: selector, + Update: pairs[i+1], + }) + } +} + +// UpdateAll queues up the provided pairs of updating instructions. +// The first element of each pair selects which documents must be +// updated, and the second element defines how to update it. +// Each pair updates all documents matching the selector. +func (b *Bulk) UpdateAll(pairs ...interface{}) { + if len(pairs)%2 != 0 { + panic("Bulk.UpdateAll requires an even number of parameters") + } + action := b.action(bulkUpdate, len(pairs)/2) + for i := 0; i < len(pairs); i += 2 { + selector := pairs[i] + if selector == nil { + selector = bson.D{} + } + action.docs = append(action.docs, &updateOp{ + Collection: b.c.FullName, + Selector: selector, + Update: pairs[i+1], + Flags: 2, + Multi: true, + }) + } +} + +// Upsert queues up the provided pairs of upserting instructions. +// The first element of each pair selects which documents must be +// updated, and the second element defines how to update it. +// Each pair matches exactly one document for updating at most. 
+func (b *Bulk) Upsert(pairs ...interface{}) { + if len(pairs)%2 != 0 { + panic("Bulk.Update requires an even number of parameters") + } + action := b.action(bulkUpdate, len(pairs)/2) + for i := 0; i < len(pairs); i += 2 { + selector := pairs[i] + if selector == nil { + selector = bson.D{} + } + action.docs = append(action.docs, &updateOp{ + Collection: b.c.FullName, + Selector: selector, + Update: pairs[i+1], + Flags: 1, + Upsert: true, + }) + } +} + +// Run runs all the operations queued up. +// +// If an error is reported on an unordered bulk operation, the error value may +// be an aggregation of all issues observed. As an exception to that, Insert +// operations running on MongoDB versions prior to 2.6 will report the last +// error only due to a limitation in the wire protocol. +func (b *Bulk) Run() (*BulkResult, error) { + var result BulkResult + var berr BulkError + var failed bool + for i := range b.actions { + action := &b.actions[i] + var ok bool + switch action.op { + case bulkInsert: + ok = b.runInsert(action, &result, &berr) + case bulkUpdate: + ok = b.runUpdate(action, &result, &berr) + case bulkRemove: + ok = b.runRemove(action, &result, &berr) + default: + panic("unknown bulk operation") + } + if !ok { + failed = true + if b.ordered { + break + } + } + } + if failed { + sort.Sort(bulkErrorCases(berr.ecases)) + return nil, &berr + } + return &result, nil +} + +func (b *Bulk) runInsert(action *bulkAction, result *BulkResult, berr *BulkError) bool { + op := &insertOp{b.c.FullName, action.docs, 0} + if !b.ordered { + op.flags = 1 // ContinueOnError + } + lerr, err := b.c.writeOp(op, b.ordered) + return b.checkSuccess(action, berr, lerr, err) +} + +func (b *Bulk) runUpdate(action *bulkAction, result *BulkResult, berr *BulkError) bool { + lerr, err := b.c.writeOp(bulkUpdateOp(action.docs), b.ordered) + if lerr != nil { + result.Matched += lerr.N + result.Modified += lerr.modified + } + return b.checkSuccess(action, berr, lerr, err) +} + +func (b *Bulk) runRemove(action *bulkAction, result *BulkResult, berr *BulkError) bool { + lerr, err := b.c.writeOp(bulkDeleteOp(action.docs), b.ordered) + if lerr != nil { + result.Matched += lerr.N + result.Modified += lerr.modified + } + return b.checkSuccess(action, berr, lerr, err) +} + +func (b *Bulk) checkSuccess(action *bulkAction, berr *BulkError, lerr *LastError, err error) bool { + if lerr != nil && len(lerr.ecases) > 0 { + for i := 0; i < len(lerr.ecases); i++ { + // Map back from the local error index into the visible one. + ecase := lerr.ecases[i] + idx := ecase.Index + if idx >= 0 { + idx = action.idxs[idx] + } + berr.ecases = append(berr.ecases, BulkErrorCase{idx, ecase.Err}) + } + return false + } else if err != nil { + for i := 0; i < len(action.idxs); i++ { + berr.ecases = append(berr.ecases, BulkErrorCase{action.idxs[i], err}) + } + return false + } + return true +} diff --git a/vendor/gopkg.in/mgo.v2/cluster.go b/vendor/gopkg.in/mgo.v2/cluster.go new file mode 100644 index 00000000..c3bf8b01 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/cluster.go @@ -0,0 +1,682 @@ +// mgo - MongoDB driver for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. 
Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +package mgo + +import ( + "errors" + "fmt" + "net" + "strconv" + "strings" + "sync" + "time" + + "gopkg.in/mgo.v2/bson" +) + +// --------------------------------------------------------------------------- +// Mongo cluster encapsulation. +// +// A cluster enables the communication with one or more servers participating +// in a mongo cluster. This works with individual servers, a replica set, +// a replica pair, one or multiple mongos routers, etc. + +type mongoCluster struct { + sync.RWMutex + serverSynced sync.Cond + userSeeds []string + dynaSeeds []string + servers mongoServers + masters mongoServers + references int + syncing bool + direct bool + failFast bool + syncCount uint + setName string + cachedIndex map[string]bool + sync chan bool + dial dialer +} + +func newCluster(userSeeds []string, direct, failFast bool, dial dialer, setName string) *mongoCluster { + cluster := &mongoCluster{ + userSeeds: userSeeds, + references: 1, + direct: direct, + failFast: failFast, + dial: dial, + setName: setName, + } + cluster.serverSynced.L = cluster.RWMutex.RLocker() + cluster.sync = make(chan bool, 1) + stats.cluster(+1) + go cluster.syncServersLoop() + return cluster +} + +// Acquire increases the reference count for the cluster. +func (cluster *mongoCluster) Acquire() { + cluster.Lock() + cluster.references++ + debugf("Cluster %p acquired (refs=%d)", cluster, cluster.references) + cluster.Unlock() +} + +// Release decreases the reference count for the cluster. Once +// it reaches zero, all servers will be closed. +func (cluster *mongoCluster) Release() { + cluster.Lock() + if cluster.references == 0 { + panic("cluster.Release() with references == 0") + } + cluster.references-- + debugf("Cluster %p released (refs=%d)", cluster, cluster.references) + if cluster.references == 0 { + for _, server := range cluster.servers.Slice() { + server.Close() + } + // Wake up the sync loop so it can die. 
+ cluster.syncServers() + stats.cluster(-1) + } + cluster.Unlock() +} + +func (cluster *mongoCluster) LiveServers() (servers []string) { + cluster.RLock() + for _, serv := range cluster.servers.Slice() { + servers = append(servers, serv.Addr) + } + cluster.RUnlock() + return servers +} + +func (cluster *mongoCluster) removeServer(server *mongoServer) { + cluster.Lock() + cluster.masters.Remove(server) + other := cluster.servers.Remove(server) + cluster.Unlock() + if other != nil { + other.Close() + log("Removed server ", server.Addr, " from cluster.") + } + server.Close() +} + +type isMasterResult struct { + IsMaster bool + Secondary bool + Primary string + Hosts []string + Passives []string + Tags bson.D + Msg string + SetName string `bson:"setName"` + MaxWireVersion int `bson:"maxWireVersion"` +} + +func (cluster *mongoCluster) isMaster(socket *mongoSocket, result *isMasterResult) error { + // Monotonic let's it talk to a slave and still hold the socket. + session := newSession(Monotonic, cluster, 10*time.Second) + session.setSocket(socket) + err := session.Run("ismaster", result) + session.Close() + return err +} + +type possibleTimeout interface { + Timeout() bool +} + +var syncSocketTimeout = 5 * time.Second + +func (cluster *mongoCluster) syncServer(server *mongoServer) (info *mongoServerInfo, hosts []string, err error) { + var syncTimeout time.Duration + if raceDetector { + // This variable is only ever touched by tests. + globalMutex.Lock() + syncTimeout = syncSocketTimeout + globalMutex.Unlock() + } else { + syncTimeout = syncSocketTimeout + } + + addr := server.Addr + log("SYNC Processing ", addr, "...") + + // Retry a few times to avoid knocking a server down for a hiccup. + var result isMasterResult + var tryerr error + for retry := 0; ; retry++ { + if retry == 3 || retry == 1 && cluster.failFast { + return nil, nil, tryerr + } + if retry > 0 { + // Don't abuse the server needlessly if there's something actually wrong. + if err, ok := tryerr.(possibleTimeout); ok && err.Timeout() { + // Give a chance for waiters to timeout as well. + cluster.serverSynced.Broadcast() + } + time.Sleep(syncShortDelay) + } + + // It's not clear what would be a good timeout here. Is it + // better to wait longer or to retry? + socket, _, err := server.AcquireSocket(0, syncTimeout) + if err != nil { + tryerr = err + logf("SYNC Failed to get socket to %s: %v", addr, err) + continue + } + err = cluster.isMaster(socket, &result) + socket.Release() + if err != nil { + tryerr = err + logf("SYNC Command 'ismaster' to %s failed: %v", addr, err) + continue + } + debugf("SYNC Result of 'ismaster' from %s: %#v", addr, result) + break + } + + if cluster.setName != "" && result.SetName != cluster.setName { + logf("SYNC Server %s is not a member of replica set %q", addr, cluster.setName) + return nil, nil, fmt.Errorf("server %s is not a member of replica set %q", addr, cluster.setName) + } + + if result.IsMaster { + debugf("SYNC %s is a master.", addr) + if !server.info.Master { + // Made an incorrect assumption above, so fix stats. + stats.conn(-1, false) + stats.conn(+1, true) + } + } else if result.Secondary { + debugf("SYNC %s is a slave.", addr) + } else if cluster.direct { + logf("SYNC %s in unknown state. Pretending it's a slave due to direct connection.", addr) + } else { + logf("SYNC %s is neither a master nor a slave.", addr) + // Let stats track it as whatever was known before. 
+ return nil, nil, errors.New(addr + " is not a master nor slave") + } + + info = &mongoServerInfo{ + Master: result.IsMaster, + Mongos: result.Msg == "isdbgrid", + Tags: result.Tags, + SetName: result.SetName, + MaxWireVersion: result.MaxWireVersion, + } + + hosts = make([]string, 0, 1+len(result.Hosts)+len(result.Passives)) + if result.Primary != "" { + // First in the list to speed up master discovery. + hosts = append(hosts, result.Primary) + } + hosts = append(hosts, result.Hosts...) + hosts = append(hosts, result.Passives...) + + debugf("SYNC %s knows about the following peers: %#v", addr, hosts) + return info, hosts, nil +} + +type syncKind bool + +const ( + completeSync syncKind = true + partialSync syncKind = false +) + +func (cluster *mongoCluster) addServer(server *mongoServer, info *mongoServerInfo, syncKind syncKind) { + cluster.Lock() + current := cluster.servers.Search(server.ResolvedAddr) + if current == nil { + if syncKind == partialSync { + cluster.Unlock() + server.Close() + log("SYNC Discarding unknown server ", server.Addr, " due to partial sync.") + return + } + cluster.servers.Add(server) + if info.Master { + cluster.masters.Add(server) + log("SYNC Adding ", server.Addr, " to cluster as a master.") + } else { + log("SYNC Adding ", server.Addr, " to cluster as a slave.") + } + } else { + if server != current { + panic("addServer attempting to add duplicated server") + } + if server.Info().Master != info.Master { + if info.Master { + log("SYNC Server ", server.Addr, " is now a master.") + cluster.masters.Add(server) + } else { + log("SYNC Server ", server.Addr, " is now a slave.") + cluster.masters.Remove(server) + } + } + } + server.SetInfo(info) + debugf("SYNC Broadcasting availability of server %s", server.Addr) + cluster.serverSynced.Broadcast() + cluster.Unlock() +} + +func (cluster *mongoCluster) getKnownAddrs() []string { + cluster.RLock() + max := len(cluster.userSeeds) + len(cluster.dynaSeeds) + cluster.servers.Len() + seen := make(map[string]bool, max) + known := make([]string, 0, max) + + add := func(addr string) { + if _, found := seen[addr]; !found { + seen[addr] = true + known = append(known, addr) + } + } + + for _, addr := range cluster.userSeeds { + add(addr) + } + for _, addr := range cluster.dynaSeeds { + add(addr) + } + for _, serv := range cluster.servers.Slice() { + add(serv.Addr) + } + cluster.RUnlock() + + return known +} + +// syncServers injects a value into the cluster.sync channel to force +// an iteration of the syncServersLoop function. +func (cluster *mongoCluster) syncServers() { + select { + case cluster.sync <- true: + default: + } +} + +// How long to wait for a checkup of the cluster topology if nothing +// else kicks a synchronization before that. +const syncServersDelay = 30 * time.Second +const syncShortDelay = 500 * time.Millisecond + +// syncServersLoop loops while the cluster is alive to keep its idea of +// the server topology up-to-date. It must be called just once from +// newCluster. The loop iterates once syncServersDelay has passed, or +// if somebody injects a value into the cluster.sync channel to force a +// synchronization. A loop iteration will contact all servers in +// parallel, ask them about known peers and their own role within the +// cluster, and then attempt to do the same with all the peers +// retrieved. 
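+// As an aside on the wake-up mechanism described above: the sync channel is
+// buffered with a single slot and written to with a non-blocking send, so any
+// number of concurrent resync requests collapse into at most one pending
+// token. A minimal sketch of the same idiom in isolation, assuming a doResync
+// helper and the time package (the names here are placeholders, not part of
+// the driver):
+//
+//     wake := make(chan bool, 1)
+//
+//     requestSync := func() {
+//         select {
+//         case wake <- true: // queue a wake-up if none is pending
+//         default: // a wake-up is already pending; drop this request
+//         }
+//     }
+//
+//     go func() {
+//         for {
+//             doResync()
+//             select {
+//             case <-wake: // explicit request
+//             case <-time.After(30 * time.Second): // periodic checkup
+//             }
+//         }
+//     }()
+//
+//     requestSync()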
+func (cluster *mongoCluster) syncServersLoop() { + for { + debugf("SYNC Cluster %p is starting a sync loop iteration.", cluster) + + cluster.Lock() + if cluster.references == 0 { + cluster.Unlock() + break + } + cluster.references++ // Keep alive while syncing. + direct := cluster.direct + cluster.Unlock() + + cluster.syncServersIteration(direct) + + // We just synchronized, so consume any outstanding requests. + select { + case <-cluster.sync: + default: + } + + cluster.Release() + + // Hold off before allowing another sync. No point in + // burning CPU looking for down servers. + if !cluster.failFast { + time.Sleep(syncShortDelay) + } + + cluster.Lock() + if cluster.references == 0 { + cluster.Unlock() + break + } + cluster.syncCount++ + // Poke all waiters so they have a chance to timeout or + // restart syncing if they wish to. + cluster.serverSynced.Broadcast() + // Check if we have to restart immediately either way. + restart := !direct && cluster.masters.Empty() || cluster.servers.Empty() + cluster.Unlock() + + if restart { + log("SYNC No masters found. Will synchronize again.") + time.Sleep(syncShortDelay) + continue + } + + debugf("SYNC Cluster %p waiting for next requested or scheduled sync.", cluster) + + // Hold off until somebody explicitly requests a synchronization + // or it's time to check for a cluster topology change again. + select { + case <-cluster.sync: + case <-time.After(syncServersDelay): + } + } + debugf("SYNC Cluster %p is stopping its sync loop.", cluster) +} + +func (cluster *mongoCluster) server(addr string, tcpaddr *net.TCPAddr) *mongoServer { + cluster.RLock() + server := cluster.servers.Search(tcpaddr.String()) + cluster.RUnlock() + if server != nil { + return server + } + return newServer(addr, tcpaddr, cluster.sync, cluster.dial) +} + +func resolveAddr(addr string) (*net.TCPAddr, error) { + // Simple cases that do not need actual resolution. Works with IPv4 and v6. + if host, port, err := net.SplitHostPort(addr); err == nil { + if port, _ := strconv.Atoi(port); port > 0 { + zone := "" + if i := strings.LastIndex(host, "%"); i >= 0 { + zone = host[i+1:] + host = host[:i] + } + ip := net.ParseIP(host) + if ip != nil { + return &net.TCPAddr{IP: ip, Port: port, Zone: zone}, nil + } + } + } + + // Attempt to resolve IPv4 and v6 concurrently. + addrChan := make(chan *net.TCPAddr, 2) + for _, network := range []string{"udp4", "udp6"} { + network := network + go func() { + // The unfortunate UDP dialing hack allows having a timeout on address resolution. + conn, err := net.DialTimeout(network, addr, 10*time.Second) + if err != nil { + addrChan <- nil + } else { + addrChan <- (*net.TCPAddr)(conn.RemoteAddr().(*net.UDPAddr)) + conn.Close() + } + }() + } + + // Wait for the result of IPv4 and v6 resolution. Use IPv4 if available. + tcpaddr := <-addrChan + if tcpaddr == nil || len(tcpaddr.IP) != 4 { + var timeout <-chan time.Time + if tcpaddr != nil { + // Don't wait too long if an IPv6 address is known. + timeout = time.After(50 * time.Millisecond) + } + select { + case <-timeout: + case tcpaddr2 := <-addrChan: + if tcpaddr == nil || tcpaddr2 != nil { + // It's an IPv4 address or the only known address. Use it. 
+ tcpaddr = tcpaddr2 + } + } + } + + if tcpaddr == nil { + log("SYNC Failed to resolve server address: ", addr) + return nil, errors.New("failed to resolve server address: " + addr) + } + if tcpaddr.String() != addr { + debug("SYNC Address ", addr, " resolved as ", tcpaddr.String()) + } + return tcpaddr, nil +} + +type pendingAdd struct { + server *mongoServer + info *mongoServerInfo +} + +func (cluster *mongoCluster) syncServersIteration(direct bool) { + log("SYNC Starting full topology synchronization...") + + var wg sync.WaitGroup + var m sync.Mutex + notYetAdded := make(map[string]pendingAdd) + addIfFound := make(map[string]bool) + seen := make(map[string]bool) + syncKind := partialSync + + var spawnSync func(addr string, byMaster bool) + spawnSync = func(addr string, byMaster bool) { + wg.Add(1) + go func() { + defer wg.Done() + + tcpaddr, err := resolveAddr(addr) + if err != nil { + log("SYNC Failed to start sync of ", addr, ": ", err.Error()) + return + } + resolvedAddr := tcpaddr.String() + + m.Lock() + if byMaster { + if pending, ok := notYetAdded[resolvedAddr]; ok { + delete(notYetAdded, resolvedAddr) + m.Unlock() + cluster.addServer(pending.server, pending.info, completeSync) + return + } + addIfFound[resolvedAddr] = true + } + if seen[resolvedAddr] { + m.Unlock() + return + } + seen[resolvedAddr] = true + m.Unlock() + + server := cluster.server(addr, tcpaddr) + info, hosts, err := cluster.syncServer(server) + if err != nil { + cluster.removeServer(server) + return + } + + m.Lock() + add := direct || info.Master || addIfFound[resolvedAddr] + if add { + syncKind = completeSync + } else { + notYetAdded[resolvedAddr] = pendingAdd{server, info} + } + m.Unlock() + if add { + cluster.addServer(server, info, completeSync) + } + if !direct { + for _, addr := range hosts { + spawnSync(addr, info.Master) + } + } + }() + } + + knownAddrs := cluster.getKnownAddrs() + for _, addr := range knownAddrs { + spawnSync(addr, false) + } + wg.Wait() + + if syncKind == completeSync { + logf("SYNC Synchronization was complete (got data from primary).") + for _, pending := range notYetAdded { + cluster.removeServer(pending.server) + } + } else { + logf("SYNC Synchronization was partial (cannot talk to primary).") + for _, pending := range notYetAdded { + cluster.addServer(pending.server, pending.info, partialSync) + } + } + + cluster.Lock() + mastersLen := cluster.masters.Len() + logf("SYNC Synchronization completed: %d master(s) and %d slave(s) alive.", mastersLen, cluster.servers.Len()-mastersLen) + + // Update dynamic seeds, but only if we have any good servers. Otherwise, + // leave them alone for better chances of a successful sync in the future. + if syncKind == completeSync { + dynaSeeds := make([]string, cluster.servers.Len()) + for i, server := range cluster.servers.Slice() { + dynaSeeds[i] = server.Addr + } + cluster.dynaSeeds = dynaSeeds + debugf("SYNC New dynamic seeds: %#v\n", dynaSeeds) + } + cluster.Unlock() +} + +// AcquireSocket returns a socket to a server in the cluster. If slaveOk is +// true, it will attempt to return a socket to a slave server. If it is +// false, the socket will necessarily be to a master server. 
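+// A note on the shape of syncServersIteration above: it fans out one
+// goroutine per candidate address, deduplicates with a mutex-guarded set,
+// and joins the whole recursive fan-out with a single WaitGroup. The same
+// skeleton in isolation, assuming visit returns the peers learned from one
+// address (seed and visit are placeholders):
+//
+//     var (
+//         wg   sync.WaitGroup
+//         mu   sync.Mutex
+//         seen = make(map[string]bool)
+//     )
+//     var spawn func(addr string)
+//     spawn = func(addr string) {
+//         wg.Add(1) // registered before the goroutine starts, so Wait cannot miss it
+//         go func() {
+//             defer wg.Done()
+//             mu.Lock()
+//             if seen[addr] {
+//                 mu.Unlock()
+//                 return
+//             }
+//             seen[addr] = true
+//             mu.Unlock()
+//             for _, peer := range visit(addr) {
+//                 spawn(peer) // children register before this goroutine is done
+//             }
+//         }()
+//     }
+//     spawn(seed)
+//     wg.Wait()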
+func (cluster *mongoCluster) AcquireSocket(mode Mode, slaveOk bool, syncTimeout time.Duration, socketTimeout time.Duration, serverTags []bson.D, poolLimit int) (s *mongoSocket, err error) { + var started time.Time + var syncCount uint + warnedLimit := false + for { + cluster.RLock() + for { + mastersLen := cluster.masters.Len() + slavesLen := cluster.servers.Len() - mastersLen + debugf("Cluster has %d known masters and %d known slaves.", mastersLen, slavesLen) + if mastersLen > 0 && !(slaveOk && mode == Secondary) || slavesLen > 0 && slaveOk { + break + } + if mastersLen > 0 && mode == Secondary && cluster.masters.HasMongos() { + break + } + if started.IsZero() { + // Initialize after fast path above. + started = time.Now() + syncCount = cluster.syncCount + } else if syncTimeout != 0 && started.Before(time.Now().Add(-syncTimeout)) || cluster.failFast && cluster.syncCount != syncCount { + cluster.RUnlock() + return nil, errors.New("no reachable servers") + } + log("Waiting for servers to synchronize...") + cluster.syncServers() + + // Remember: this will release and reacquire the lock. + cluster.serverSynced.Wait() + } + + var server *mongoServer + if slaveOk { + server = cluster.servers.BestFit(mode, serverTags) + } else { + server = cluster.masters.BestFit(mode, nil) + } + cluster.RUnlock() + + if server == nil { + // Must have failed the requested tags. Sleep to avoid spinning. + time.Sleep(1e8) + continue + } + + s, abended, err := server.AcquireSocket(poolLimit, socketTimeout) + if err == errPoolLimit { + if !warnedLimit { + warnedLimit = true + log("WARNING: Per-server connection limit reached.") + } + time.Sleep(100 * time.Millisecond) + continue + } + if err != nil { + cluster.removeServer(server) + cluster.syncServers() + continue + } + if abended && !slaveOk { + var result isMasterResult + err := cluster.isMaster(s, &result) + if err != nil || !result.IsMaster { + logf("Cannot confirm server %s as master (%v)", server.Addr, err) + s.Release() + cluster.syncServers() + time.Sleep(100 * time.Millisecond) + continue + } + } + return s, nil + } + panic("unreached") +} + +func (cluster *mongoCluster) CacheIndex(cacheKey string, exists bool) { + cluster.Lock() + if cluster.cachedIndex == nil { + cluster.cachedIndex = make(map[string]bool) + } + if exists { + cluster.cachedIndex[cacheKey] = true + } else { + delete(cluster.cachedIndex, cacheKey) + } + cluster.Unlock() +} + +func (cluster *mongoCluster) HasCachedIndex(cacheKey string) (result bool) { + cluster.RLock() + if cluster.cachedIndex != nil { + result = cluster.cachedIndex[cacheKey] + } + cluster.RUnlock() + return +} + +func (cluster *mongoCluster) ResetIndexCache() { + cluster.Lock() + cluster.cachedIndex = make(map[string]bool) + cluster.Unlock() +} diff --git a/vendor/gopkg.in/mgo.v2/doc.go b/vendor/gopkg.in/mgo.v2/doc.go new file mode 100644 index 00000000..859fd9b8 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/doc.go @@ -0,0 +1,31 @@ +// Package mgo offers a rich MongoDB driver for Go. +// +// Details about the mgo project (pronounced as "mango") are found +// in its web page: +// +// http://labix.org/mgo +// +// Usage of the driver revolves around the concept of sessions. To +// get started, obtain a session using the Dial function: +// +// session, err := mgo.Dial(url) +// +// This will establish one or more connections with the cluster of +// servers defined by the url parameter. 
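+//
+// For instance, against a single local server (the address is only an
+// example, and error handling is up to the caller):
+//
+//     session, err := mgo.Dial("localhost:27017")
+//     if err != nil {
+//         panic(err)
+//     }
+//     defer session.Close()
+//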
From then on, the cluster +// may be queried with multiple consistency rules (see SetMode) and +// documents retrieved with statements such as: +// +// c := session.DB(database).C(collection) +// err := c.Find(query).One(&result) +// +// New sessions are typically created by calling session.Copy on the +// initial session obtained at dial time. These new sessions will share +// the same cluster information and connection pool, and may be easily +// handed into other methods and functions for organizing logic. +// Every session created must have its Close method called at the end +// of its life time, so its resources may be put back in the pool or +// collected, depending on the case. +// +// For more details, see the documentation for the types and methods. +// +package mgo diff --git a/vendor/gopkg.in/mgo.v2/gridfs.go b/vendor/gopkg.in/mgo.v2/gridfs.go new file mode 100644 index 00000000..42147209 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/gridfs.go @@ -0,0 +1,761 @@ +// mgo - MongoDB driver for Go +// +// Copyright (c) 2010-2012 - Gustavo Niemeyer +// +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +package mgo + +import ( + "crypto/md5" + "encoding/hex" + "errors" + "hash" + "io" + "os" + "sync" + "time" + + "gopkg.in/mgo.v2/bson" +) + +type GridFS struct { + Files *Collection + Chunks *Collection +} + +type gfsFileMode int + +const ( + gfsClosed gfsFileMode = 0 + gfsReading gfsFileMode = 1 + gfsWriting gfsFileMode = 2 +) + +type GridFile struct { + m sync.Mutex + c sync.Cond + gfs *GridFS + mode gfsFileMode + err error + + chunk int + offset int64 + + wpending int + wbuf []byte + wsum hash.Hash + + rbuf []byte + rcache *gfsCachedChunk + + doc gfsFile +} + +type gfsFile struct { + Id interface{} "_id" + ChunkSize int "chunkSize" + UploadDate time.Time "uploadDate" + Length int64 ",minsize" + MD5 string + Filename string ",omitempty" + ContentType string "contentType,omitempty" + Metadata *bson.Raw ",omitempty" +} + +type gfsChunk struct { + Id interface{} "_id" + FilesId interface{} "files_id" + N int + Data []byte +} + +type gfsCachedChunk struct { + wait sync.Mutex + n int + data []byte + err error +} + +func newGridFS(db *Database, prefix string) *GridFS { + return &GridFS{db.C(prefix + ".files"), db.C(prefix + ".chunks")} +} + +func (gfs *GridFS) newFile() *GridFile { + file := &GridFile{gfs: gfs} + file.c.L = &file.m + //runtime.SetFinalizer(file, finalizeFile) + return file +} + +func finalizeFile(file *GridFile) { + file.Close() +} + +// Create creates a new file with the provided name in the GridFS. If the file +// name already exists, a new version will be inserted with an up-to-date +// uploadDate that will cause it to be atomically visible to the Open and +// OpenId methods. If the file name is not important, an empty name may be +// provided and the file Id used instead. +// +// It's important to Close files whether they are being written to +// or read from, and to check the err result to ensure the operation +// completed successfully. +// +// A simple example inserting a new file: +// +// func check(err error) { +// if err != nil { +// panic(err.String()) +// } +// } +// file, err := db.GridFS("fs").Create("myfile.txt") +// check(err) +// n, err := file.Write([]byte("Hello world!")) +// check(err) +// err = file.Close() +// check(err) +// fmt.Printf("%d bytes written\n", n) +// +// The io.Writer interface is implemented by *GridFile and may be used to +// help on the file creation. For example: +// +// file, err := db.GridFS("fs").Create("myfile.txt") +// check(err) +// messages, err := os.Open("/var/log/messages") +// check(err) +// defer messages.Close() +// err = io.Copy(file, messages) +// check(err) +// err = file.Close() +// check(err) +// +func (gfs *GridFS) Create(name string) (file *GridFile, err error) { + file = gfs.newFile() + file.mode = gfsWriting + file.wsum = md5.New() + file.doc = gfsFile{Id: bson.NewObjectId(), ChunkSize: 255 * 1024, Filename: name} + return +} + +// OpenId returns the file with the provided id, for reading. +// If the file isn't found, err will be set to mgo.ErrNotFound. +// +// It's important to Close files whether they are being written to +// or read from, and to check the err result to ensure the operation +// completed successfully. 
+// +// The following example will print the first 8192 bytes from the file: +// +// func check(err error) { +// if err != nil { +// panic(err.String()) +// } +// } +// file, err := db.GridFS("fs").OpenId(objid) +// check(err) +// b := make([]byte, 8192) +// n, err := file.Read(b) +// check(err) +// fmt.Println(string(b)) +// check(err) +// err = file.Close() +// check(err) +// fmt.Printf("%d bytes read\n", n) +// +// The io.Reader interface is implemented by *GridFile and may be used to +// deal with it. As an example, the following snippet will dump the whole +// file into the standard output: +// +// file, err := db.GridFS("fs").OpenId(objid) +// check(err) +// err = io.Copy(os.Stdout, file) +// check(err) +// err = file.Close() +// check(err) +// +func (gfs *GridFS) OpenId(id interface{}) (file *GridFile, err error) { + var doc gfsFile + err = gfs.Files.Find(bson.M{"_id": id}).One(&doc) + if err != nil { + return + } + file = gfs.newFile() + file.mode = gfsReading + file.doc = doc + return +} + +// Open returns the most recently uploaded file with the provided +// name, for reading. If the file isn't found, err will be set +// to mgo.ErrNotFound. +// +// It's important to Close files whether they are being written to +// or read from, and to check the err result to ensure the operation +// completed successfully. +// +// The following example will print the first 8192 bytes from the file: +// +// file, err := db.GridFS("fs").Open("myfile.txt") +// check(err) +// b := make([]byte, 8192) +// n, err := file.Read(b) +// check(err) +// fmt.Println(string(b)) +// check(err) +// err = file.Close() +// check(err) +// fmt.Printf("%d bytes read\n", n) +// +// The io.Reader interface is implemented by *GridFile and may be used to +// deal with it. As an example, the following snippet will dump the whole +// file into the standard output: +// +// file, err := db.GridFS("fs").Open("myfile.txt") +// check(err) +// err = io.Copy(os.Stdout, file) +// check(err) +// err = file.Close() +// check(err) +// +func (gfs *GridFS) Open(name string) (file *GridFile, err error) { + var doc gfsFile + err = gfs.Files.Find(bson.M{"filename": name}).Sort("-uploadDate").One(&doc) + if err != nil { + return + } + file = gfs.newFile() + file.mode = gfsReading + file.doc = doc + return +} + +// OpenNext opens the next file from iter for reading, sets *file to it, +// and returns true on the success case. If no more documents are available +// on iter or an error occurred, *file is set to nil and the result is false. +// Errors will be available via iter.Err(). +// +// The iter parameter must be an iterator on the GridFS files collection. +// Using the GridFS.Find method is an easy way to obtain such an iterator, +// but any iterator on the collection will work. +// +// If the provided *file is non-nil, OpenNext will close it before attempting +// to iterate to the next element. This means that in a loop one only +// has to worry about closing files when breaking out of the loop early +// (break, return, or panic). +// +// For example: +// +// gfs := db.GridFS("fs") +// query := gfs.Find(nil).Sort("filename") +// iter := query.Iter() +// var f *mgo.GridFile +// for gfs.OpenNext(iter, &f) { +// fmt.Printf("Filename: %s\n", f.Name()) +// } +// if iter.Close() != nil { +// panic(iter.Close()) +// } +// +func (gfs *GridFS) OpenNext(iter *Iter, file **GridFile) bool { + if *file != nil { + // Ignoring the error here shouldn't be a big deal + // as we're reading the file and the loop iteration + // for this file is finished. 
+ _ = (*file).Close() + } + var doc gfsFile + if !iter.Next(&doc) { + *file = nil + return false + } + f := gfs.newFile() + f.mode = gfsReading + f.doc = doc + *file = f + return true +} + +// Find runs query on GridFS's files collection and returns +// the resulting Query. +// +// This logic: +// +// gfs := db.GridFS("fs") +// iter := gfs.Find(nil).Iter() +// +// Is equivalent to: +// +// files := db.C("fs" + ".files") +// iter := files.Find(nil).Iter() +// +func (gfs *GridFS) Find(query interface{}) *Query { + return gfs.Files.Find(query) +} + +// RemoveId deletes the file with the provided id from the GridFS. +func (gfs *GridFS) RemoveId(id interface{}) error { + err := gfs.Files.Remove(bson.M{"_id": id}) + if err != nil { + return err + } + _, err = gfs.Chunks.RemoveAll(bson.D{{"files_id", id}}) + return err +} + +type gfsDocId struct { + Id interface{} "_id" +} + +// Remove deletes all files with the provided name from the GridFS. +func (gfs *GridFS) Remove(name string) (err error) { + iter := gfs.Files.Find(bson.M{"filename": name}).Select(bson.M{"_id": 1}).Iter() + var doc gfsDocId + for iter.Next(&doc) { + if e := gfs.RemoveId(doc.Id); e != nil { + err = e + } + } + if err == nil { + err = iter.Close() + } + return err +} + +func (file *GridFile) assertMode(mode gfsFileMode) { + switch file.mode { + case mode: + return + case gfsWriting: + panic("GridFile is open for writing") + case gfsReading: + panic("GridFile is open for reading") + case gfsClosed: + panic("GridFile is closed") + default: + panic("internal error: missing GridFile mode") + } +} + +// SetChunkSize sets size of saved chunks. Once the file is written to, it +// will be split in blocks of that size and each block saved into an +// independent chunk document. The default chunk size is 255kb. +// +// It is a runtime error to call this function once the file has started +// being written to. +func (file *GridFile) SetChunkSize(bytes int) { + file.assertMode(gfsWriting) + debugf("GridFile %p: setting chunk size to %d", file, bytes) + file.m.Lock() + file.doc.ChunkSize = bytes + file.m.Unlock() +} + +// Id returns the current file Id. +func (file *GridFile) Id() interface{} { + return file.doc.Id +} + +// SetId changes the current file Id. +// +// It is a runtime error to call this function once the file has started +// being written to, or when the file is not open for writing. +func (file *GridFile) SetId(id interface{}) { + file.assertMode(gfsWriting) + file.m.Lock() + file.doc.Id = id + file.m.Unlock() +} + +// Name returns the optional file name. An empty string will be returned +// in case it is unset. +func (file *GridFile) Name() string { + return file.doc.Filename +} + +// SetName changes the optional file name. An empty string may be used to +// unset it. +// +// It is a runtime error to call this function when the file is not open +// for writing. +func (file *GridFile) SetName(name string) { + file.assertMode(gfsWriting) + file.m.Lock() + file.doc.Filename = name + file.m.Unlock() +} + +// ContentType returns the optional file content type. An empty string will be +// returned in case it is unset. +func (file *GridFile) ContentType() string { + return file.doc.ContentType +} + +// ContentType changes the optional file content type. An empty string may be +// used to unset it. +// +// It is a runtime error to call this function when the file is not open +// for writing. 
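+//
+// As a sketch of how these setters combine with the Create/Write flow shown
+// earlier (the file name, chunk size, metadata, and the data slice are all
+// illustrative; check is the small error helper from the Create example):
+//
+//     file, err := db.GridFS("fs").Create("report.pdf")
+//     check(err)
+//     file.SetChunkSize(1024 * 1024) // 1 MB chunks instead of the 255 kB default
+//     file.SetContentType("application/pdf")
+//     file.SetMeta(bson.M{"owner": "alice"})
+//     _, err = file.Write(data)
+//     check(err)
+//     check(file.Close())
+//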
+func (file *GridFile) SetContentType(ctype string) { + file.assertMode(gfsWriting) + file.m.Lock() + file.doc.ContentType = ctype + file.m.Unlock() +} + +// GetMeta unmarshals the optional "metadata" field associated with the +// file into the result parameter. The meaning of keys under that field +// is user-defined. For example: +// +// result := struct{ INode int }{} +// err = file.GetMeta(&result) +// if err != nil { +// panic(err.String()) +// } +// fmt.Printf("inode: %d\n", result.INode) +// +func (file *GridFile) GetMeta(result interface{}) (err error) { + file.m.Lock() + if file.doc.Metadata != nil { + err = bson.Unmarshal(file.doc.Metadata.Data, result) + } + file.m.Unlock() + return +} + +// SetMeta changes the optional "metadata" field associated with the +// file. The meaning of keys under that field is user-defined. +// For example: +// +// file.SetMeta(bson.M{"inode": inode}) +// +// It is a runtime error to call this function when the file is not open +// for writing. +func (file *GridFile) SetMeta(metadata interface{}) { + file.assertMode(gfsWriting) + data, err := bson.Marshal(metadata) + file.m.Lock() + if err != nil && file.err == nil { + file.err = err + } else { + file.doc.Metadata = &bson.Raw{Data: data} + } + file.m.Unlock() +} + +// Size returns the file size in bytes. +func (file *GridFile) Size() (bytes int64) { + file.m.Lock() + bytes = file.doc.Length + file.m.Unlock() + return +} + +// MD5 returns the file MD5 as a hex-encoded string. +func (file *GridFile) MD5() (md5 string) { + return file.doc.MD5 +} + +// UploadDate returns the file upload time. +func (file *GridFile) UploadDate() time.Time { + return file.doc.UploadDate +} + +// SetUploadDate changes the file upload time. +// +// It is a runtime error to call this function when the file is not open +// for writing. +func (file *GridFile) SetUploadDate(t time.Time) { + file.assertMode(gfsWriting) + file.m.Lock() + file.doc.UploadDate = t + file.m.Unlock() +} + +// Close flushes any pending changes in case the file is being written +// to, waits for any background operations to finish, and closes the file. +// +// It's important to Close files whether they are being written to +// or read from, and to check the err result to ensure the operation +// completed successfully. 
+func (file *GridFile) Close() (err error) { + file.m.Lock() + defer file.m.Unlock() + if file.mode == gfsWriting { + if len(file.wbuf) > 0 && file.err == nil { + file.insertChunk(file.wbuf) + file.wbuf = file.wbuf[0:0] + } + file.completeWrite() + } else if file.mode == gfsReading && file.rcache != nil { + file.rcache.wait.Lock() + file.rcache = nil + } + file.mode = gfsClosed + debugf("GridFile %p: closed", file) + return file.err +} + +func (file *GridFile) completeWrite() { + for file.wpending > 0 { + debugf("GridFile %p: waiting for %d pending chunks to complete file write", file, file.wpending) + file.c.Wait() + } + if file.err == nil { + hexsum := hex.EncodeToString(file.wsum.Sum(nil)) + if file.doc.UploadDate.IsZero() { + file.doc.UploadDate = bson.Now() + } + file.doc.MD5 = hexsum + file.err = file.gfs.Files.Insert(file.doc) + } + if file.err != nil { + file.gfs.Chunks.RemoveAll(bson.D{{"files_id", file.doc.Id}}) + } + if file.err == nil { + index := Index{ + Key: []string{"files_id", "n"}, + Unique: true, + } + file.err = file.gfs.Chunks.EnsureIndex(index) + } +} + +// Abort cancels an in-progress write, preventing the file from being +// automically created and ensuring previously written chunks are +// removed when the file is closed. +// +// It is a runtime error to call Abort when the file was not opened +// for writing. +func (file *GridFile) Abort() { + if file.mode != gfsWriting { + panic("file.Abort must be called on file opened for writing") + } + file.err = errors.New("write aborted") +} + +// Write writes the provided data to the file and returns the +// number of bytes written and an error in case something +// wrong happened. +// +// The file will internally cache the data so that all but the last +// chunk sent to the database have the size defined by SetChunkSize. +// This also means that errors may be deferred until a future call +// to Write or Close. +// +// The parameters and behavior of this function turn the file +// into an io.Writer. +func (file *GridFile) Write(data []byte) (n int, err error) { + file.assertMode(gfsWriting) + file.m.Lock() + debugf("GridFile %p: writing %d bytes", file, len(data)) + defer file.m.Unlock() + + if file.err != nil { + return 0, file.err + } + + n = len(data) + file.doc.Length += int64(n) + chunkSize := file.doc.ChunkSize + + if len(file.wbuf)+len(data) < chunkSize { + file.wbuf = append(file.wbuf, data...) + return + } + + // First, flush file.wbuf complementing with data. + if len(file.wbuf) > 0 { + missing := chunkSize - len(file.wbuf) + if missing > len(data) { + missing = len(data) + } + file.wbuf = append(file.wbuf, data[:missing]...) + data = data[missing:] + file.insertChunk(file.wbuf) + file.wbuf = file.wbuf[0:0] + } + + // Then, flush all chunks from data without copying. + for len(data) > chunkSize { + size := chunkSize + if size > len(data) { + size = len(data) + } + file.insertChunk(data[:size]) + data = data[size:] + } + + // And append the rest for a future call. + file.wbuf = append(file.wbuf, data...) + + return n, file.err +} + +func (file *GridFile) insertChunk(data []byte) { + n := file.chunk + file.chunk++ + debugf("GridFile %p: adding to checksum: %q", file, string(data)) + file.wsum.Write(data) + + for file.doc.ChunkSize*file.wpending >= 1024*1024 { + // Hold on.. we got a MB pending. 
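+// This is the writer-side backpressure: while roughly a megabyte of chunk
+// inserts is still in flight, block on the condition variable until the
+// background goroutines below decrement wpending and Broadcast.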
+ file.c.Wait() + if file.err != nil { + return + } + } + + file.wpending++ + + debugf("GridFile %p: inserting chunk %d with %d bytes", file, n, len(data)) + + // We may not own the memory of data, so rather than + // simply copying it, we'll marshal the document ahead of time. + data, err := bson.Marshal(gfsChunk{bson.NewObjectId(), file.doc.Id, n, data}) + if err != nil { + file.err = err + return + } + + go func() { + err := file.gfs.Chunks.Insert(bson.Raw{Data: data}) + file.m.Lock() + file.wpending-- + if err != nil && file.err == nil { + file.err = err + } + file.c.Broadcast() + file.m.Unlock() + }() +} + +// Seek sets the offset for the next Read or Write on file to +// offset, interpreted according to whence: 0 means relative to +// the origin of the file, 1 means relative to the current offset, +// and 2 means relative to the end. It returns the new offset and +// an error, if any. +func (file *GridFile) Seek(offset int64, whence int) (pos int64, err error) { + file.m.Lock() + debugf("GridFile %p: seeking for %s (whence=%d)", file, offset, whence) + defer file.m.Unlock() + switch whence { + case os.SEEK_SET: + case os.SEEK_CUR: + offset += file.offset + case os.SEEK_END: + offset += file.doc.Length + default: + panic("unsupported whence value") + } + if offset > file.doc.Length { + return file.offset, errors.New("seek past end of file") + } + if offset == file.doc.Length { + // If we're seeking to the end of the file, + // no need to read anything. This enables + // a client to find the size of the file using only the + // io.ReadSeeker interface with low overhead. + file.offset = offset + return file.offset, nil + } + chunk := int(offset / int64(file.doc.ChunkSize)) + if chunk+1 == file.chunk && offset >= file.offset { + file.rbuf = file.rbuf[int(offset-file.offset):] + file.offset = offset + return file.offset, nil + } + file.offset = offset + file.chunk = chunk + file.rbuf = nil + file.rbuf, err = file.getChunk() + if err == nil { + file.rbuf = file.rbuf[int(file.offset-int64(chunk)*int64(file.doc.ChunkSize)):] + } + return file.offset, err +} + +// Read reads into b the next available data from the file and +// returns the number of bytes written and an error in case +// something wrong happened. At the end of the file, n will +// be zero and err will be set to io.EOF. +// +// The parameters and behavior of this function turn the file +// into an io.Reader. +func (file *GridFile) Read(b []byte) (n int, err error) { + file.assertMode(gfsReading) + file.m.Lock() + debugf("GridFile %p: reading at offset %d into buffer of length %d", file, file.offset, len(b)) + defer file.m.Unlock() + if file.offset == file.doc.Length { + return 0, io.EOF + } + for err == nil { + i := copy(b, file.rbuf) + n += i + file.offset += int64(i) + file.rbuf = file.rbuf[i:] + if i == len(b) || file.offset == file.doc.Length { + break + } + b = b[i:] + file.rbuf, err = file.getChunk() + } + return n, err +} + +func (file *GridFile) getChunk() (data []byte, err error) { + cache := file.rcache + file.rcache = nil + if cache != nil && cache.n == file.chunk { + debugf("GridFile %p: Getting chunk %d from cache", file, file.chunk) + cache.wait.Lock() + data, err = cache.data, cache.err + } else { + debugf("GridFile %p: Fetching chunk %d", file, file.chunk) + var doc gfsChunk + err = file.gfs.Chunks.Find(bson.D{{"files_id", file.doc.Id}, {"n", file.chunk}}).One(&doc) + data = doc.Data + } + file.chunk++ + if int64(file.chunk)*int64(file.doc.ChunkSize) < file.doc.Length { + // Read the next one in background. 
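+// The handshake here uses the chunk's wait mutex as a one-shot "ready" latch:
+// it is created locked, the background goroutine unlocks it once data and err
+// are filled in, and the next Read (or Close) blocks on Lock until then. The
+// same idiom in isolation, with fetch standing in for the actual query:
+//
+//     type pending struct {
+//         ready sync.Mutex
+//         data  []byte
+//         err   error
+//     }
+//
+//     p := &pending{}
+//     p.ready.Lock() // stays locked until the worker finishes
+//     go func() {
+//         p.data, p.err = fetch()
+//         p.ready.Unlock()
+//     }()
+//     // ...later, wherever the result is consumed:
+//     p.ready.Lock() // blocks until the worker has unlocked it
+//     use(p.data, p.err)
+//
+// A one-element result channel would express the same handshake; the mutex
+// form is simply what this code uses.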
+ cache = &gfsCachedChunk{n: file.chunk} + cache.wait.Lock() + debugf("GridFile %p: Scheduling chunk %d for background caching", file, file.chunk) + // Clone the session to avoid having it closed in between. + chunks := file.gfs.Chunks + session := chunks.Database.Session.Clone() + go func(id interface{}, n int) { + defer session.Close() + chunks = chunks.With(session) + var doc gfsChunk + cache.err = chunks.Find(bson.D{{"files_id", id}, {"n", n}}).One(&doc) + cache.data = doc.Data + cache.wait.Unlock() + }(file.doc.Id, file.chunk) + file.rcache = cache + } + debugf("Returning err: %#v", err) + return +} diff --git a/vendor/gopkg.in/mgo.v2/internal/json/LICENSE b/vendor/gopkg.in/mgo.v2/internal/json/LICENSE new file mode 100644 index 00000000..74487567 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/internal/json/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2012 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/gopkg.in/mgo.v2/internal/json/decode.go b/vendor/gopkg.in/mgo.v2/internal/json/decode.go new file mode 100644 index 00000000..ce7c7d24 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/internal/json/decode.go @@ -0,0 +1,1685 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Represents JSON data structure using native Go types: booleans, floats, +// strings, arrays, and maps. + +package json + +import ( + "bytes" + "encoding" + "encoding/base64" + "errors" + "fmt" + "reflect" + "runtime" + "strconv" + "unicode" + "unicode/utf16" + "unicode/utf8" +) + +// Unmarshal parses the JSON-encoded data and stores the result +// in the value pointed to by v. +// +// Unmarshal uses the inverse of the encodings that +// Marshal uses, allocating maps, slices, and pointers as necessary, +// with the following additional rules: +// +// To unmarshal JSON into a pointer, Unmarshal first handles the case of +// the JSON being the JSON literal null. In that case, Unmarshal sets +// the pointer to nil. 
Otherwise, Unmarshal unmarshals the JSON into +// the value pointed at by the pointer. If the pointer is nil, Unmarshal +// allocates a new value for it to point to. +// +// To unmarshal JSON into a struct, Unmarshal matches incoming object +// keys to the keys used by Marshal (either the struct field name or its tag), +// preferring an exact match but also accepting a case-insensitive match. +// Unmarshal will only set exported fields of the struct. +// +// To unmarshal JSON into an interface value, +// Unmarshal stores one of these in the interface value: +// +// bool, for JSON booleans +// float64, for JSON numbers +// string, for JSON strings +// []interface{}, for JSON arrays +// map[string]interface{}, for JSON objects +// nil for JSON null +// +// To unmarshal a JSON array into a slice, Unmarshal resets the slice length +// to zero and then appends each element to the slice. +// As a special case, to unmarshal an empty JSON array into a slice, +// Unmarshal replaces the slice with a new empty slice. +// +// To unmarshal a JSON array into a Go array, Unmarshal decodes +// JSON array elements into corresponding Go array elements. +// If the Go array is smaller than the JSON array, +// the additional JSON array elements are discarded. +// If the JSON array is smaller than the Go array, +// the additional Go array elements are set to zero values. +// +// To unmarshal a JSON object into a map, Unmarshal first establishes a map to +// use, If the map is nil, Unmarshal allocates a new map. Otherwise Unmarshal +// reuses the existing map, keeping existing entries. Unmarshal then stores key- +// value pairs from the JSON object into the map. The map's key type must +// either be a string or implement encoding.TextUnmarshaler. +// +// If a JSON value is not appropriate for a given target type, +// or if a JSON number overflows the target type, Unmarshal +// skips that field and completes the unmarshaling as best it can. +// If no more serious errors are encountered, Unmarshal returns +// an UnmarshalTypeError describing the earliest such error. +// +// The JSON null value unmarshals into an interface, map, pointer, or slice +// by setting that Go value to nil. Because null is often used in JSON to mean +// ``not present,'' unmarshaling a JSON null into any other Go type has no effect +// on the value and produces no error. +// +// When unmarshaling quoted strings, invalid UTF-8 or +// invalid UTF-16 surrogate pairs are not treated as an error. +// Instead, they are replaced by the Unicode replacement +// character U+FFFD. +// +func Unmarshal(data []byte, v interface{}) error { + // Check for well-formedness. + // Avoids filling out half a data structure + // before discovering a JSON syntax error. + var d decodeState + err := checkValid(data, &d.scan) + if err != nil { + return err + } + + d.init(data) + return d.unmarshal(v) +} + +// Unmarshaler is the interface implemented by types +// that can unmarshal a JSON description of themselves. +// The input can be assumed to be a valid encoding of +// a JSON value. UnmarshalJSON must copy the JSON data +// if it wishes to retain the data after returning. +type Unmarshaler interface { + UnmarshalJSON([]byte) error +} + +// An UnmarshalTypeError describes a JSON value that was +// not appropriate for a value of a specific Go type. 
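+// For reference, a short sketch of the behaviour described above as seen
+// from an importing package (the struct and the inputs are illustrative):
+//
+//     type Point struct {
+//         X int `json:"x"`
+//         Y int `json:"y"`
+//     }
+//
+//     var p Point
+//     if err := json.Unmarshal([]byte(`{"x": 1, "y": 2}`), &p); err != nil {
+//         // handle the error
+//     }
+//     // p == Point{X: 1, Y: 2}
+//
+//     // A mismatched value does not abort decoding: the remaining fields are
+//     // still filled in, and the first such problem comes back as an
+//     // *UnmarshalTypeError once decoding finishes.
+//     err := json.Unmarshal([]byte(`{"x": "oops", "y": 2}`), &p)
+//     _ = err.(*UnmarshalTypeError) // Value describes the offending JSON ("string")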
+type UnmarshalTypeError struct { + Value string // description of JSON value - "bool", "array", "number -5" + Type reflect.Type // type of Go value it could not be assigned to + Offset int64 // error occurred after reading Offset bytes +} + +func (e *UnmarshalTypeError) Error() string { + return "json: cannot unmarshal " + e.Value + " into Go value of type " + e.Type.String() +} + +// An UnmarshalFieldError describes a JSON object key that +// led to an unexported (and therefore unwritable) struct field. +// (No longer used; kept for compatibility.) +type UnmarshalFieldError struct { + Key string + Type reflect.Type + Field reflect.StructField +} + +func (e *UnmarshalFieldError) Error() string { + return "json: cannot unmarshal object key " + strconv.Quote(e.Key) + " into unexported field " + e.Field.Name + " of type " + e.Type.String() +} + +// An InvalidUnmarshalError describes an invalid argument passed to Unmarshal. +// (The argument to Unmarshal must be a non-nil pointer.) +type InvalidUnmarshalError struct { + Type reflect.Type +} + +func (e *InvalidUnmarshalError) Error() string { + if e.Type == nil { + return "json: Unmarshal(nil)" + } + + if e.Type.Kind() != reflect.Ptr { + return "json: Unmarshal(non-pointer " + e.Type.String() + ")" + } + return "json: Unmarshal(nil " + e.Type.String() + ")" +} + +func (d *decodeState) unmarshal(v interface{}) (err error) { + defer func() { + if r := recover(); r != nil { + if _, ok := r.(runtime.Error); ok { + panic(r) + } + err = r.(error) + } + }() + + rv := reflect.ValueOf(v) + if rv.Kind() != reflect.Ptr || rv.IsNil() { + return &InvalidUnmarshalError{reflect.TypeOf(v)} + } + + d.scan.reset() + // We decode rv not rv.Elem because the Unmarshaler interface + // test must be applied at the top level of the value. + d.value(rv) + return d.savedError +} + +// A Number represents a JSON number literal. +type Number string + +// String returns the literal text of the number. +func (n Number) String() string { return string(n) } + +// Float64 returns the number as a float64. +func (n Number) Float64() (float64, error) { + return strconv.ParseFloat(string(n), 64) +} + +// Int64 returns the number as an int64. +func (n Number) Int64() (int64, error) { + return strconv.ParseInt(string(n), 10, 64) +} + +// isValidNumber reports whether s is a valid JSON number literal. +func isValidNumber(s string) bool { + // This function implements the JSON numbers grammar. + // See https://tools.ietf.org/html/rfc7159#section-6 + // and http://json.org/number.gif + + if s == "" { + return false + } + + // Optional - + if s[0] == '-' { + s = s[1:] + if s == "" { + return false + } + } + + // Digits + switch { + default: + return false + + case s[0] == '0': + s = s[1:] + + case '1' <= s[0] && s[0] <= '9': + s = s[1:] + for len(s) > 0 && '0' <= s[0] && s[0] <= '9' { + s = s[1:] + } + } + + // . followed by 1 or more digits. + if len(s) >= 2 && s[0] == '.' && '0' <= s[1] && s[1] <= '9' { + s = s[2:] + for len(s) > 0 && '0' <= s[0] && s[0] <= '9' { + s = s[1:] + } + } + + // e or E followed by an optional - or + and + // 1 or more digits. + if len(s) >= 2 && (s[0] == 'e' || s[0] == 'E') { + s = s[1:] + if s[0] == '+' || s[0] == '-' { + s = s[1:] + if s == "" { + return false + } + } + for len(s) > 0 && '0' <= s[0] && s[0] <= '9' { + s = s[1:] + } + } + + // Make sure we are at the end. + return s == "" +} + +// decodeState represents the state while decoding a JSON value. 
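+// A quick illustration of Number, which keeps the literal text and converts
+// only on demand (the values are arbitrary; in the encoding/json API this
+// package is derived from, numbers arrive as Number when the decoder's
+// UseNumber option is enabled):
+//
+//     n := Number("42")
+//     i, _ := n.Int64()   // 42
+//     f, _ := n.Float64() // 42.0
+//
+//     // Fractional literals convert to float64 but not to int64:
+//     _, err := Number("42.5").Int64() // err != nil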
+type decodeState struct { + data []byte + off int // read offset in data + scan scanner + nextscan scanner // for calls to nextValue + savedError error + useNumber bool + ext Extension +} + +// errPhase is used for errors that should not happen unless +// there is a bug in the JSON decoder or something is editing +// the data slice while the decoder executes. +var errPhase = errors.New("JSON decoder out of sync - data changing underfoot?") + +func (d *decodeState) init(data []byte) *decodeState { + d.data = data + d.off = 0 + d.savedError = nil + return d +} + +// error aborts the decoding by panicking with err. +func (d *decodeState) error(err error) { + panic(err) +} + +// saveError saves the first err it is called with, +// for reporting at the end of the unmarshal. +func (d *decodeState) saveError(err error) { + if d.savedError == nil { + d.savedError = err + } +} + +// next cuts off and returns the next full JSON value in d.data[d.off:]. +// The next value is known to be an object or array, not a literal. +func (d *decodeState) next() []byte { + c := d.data[d.off] + item, rest, err := nextValue(d.data[d.off:], &d.nextscan) + if err != nil { + d.error(err) + } + d.off = len(d.data) - len(rest) + + // Our scanner has seen the opening brace/bracket + // and thinks we're still in the middle of the object. + // invent a closing brace/bracket to get it out. + if c == '{' { + d.scan.step(&d.scan, '}') + } else if c == '[' { + d.scan.step(&d.scan, ']') + } else { + // Was inside a function name. Get out of it. + d.scan.step(&d.scan, '(') + d.scan.step(&d.scan, ')') + } + + return item +} + +// scanWhile processes bytes in d.data[d.off:] until it +// receives a scan code not equal to op. +// It updates d.off and returns the new scan code. +func (d *decodeState) scanWhile(op int) int { + var newOp int + for { + if d.off >= len(d.data) { + newOp = d.scan.eof() + d.off = len(d.data) + 1 // mark processed EOF with len+1 + } else { + c := d.data[d.off] + d.off++ + newOp = d.scan.step(&d.scan, c) + } + if newOp != op { + break + } + } + return newOp +} + +// value decodes a JSON value from d.data[d.off:] into the value. +// it updates d.off to point past the decoded value. +func (d *decodeState) value(v reflect.Value) { + if !v.IsValid() { + _, rest, err := nextValue(d.data[d.off:], &d.nextscan) + if err != nil { + d.error(err) + } + d.off = len(d.data) - len(rest) + + // d.scan thinks we're still at the beginning of the item. + // Feed in an empty string - the shortest, simplest value - + // so that it knows we got to the end of the value. + if d.scan.redo { + // rewind. + d.scan.redo = false + d.scan.step = stateBeginValue + } + d.scan.step(&d.scan, '"') + d.scan.step(&d.scan, '"') + + n := len(d.scan.parseState) + if n > 0 && d.scan.parseState[n-1] == parseObjectKey { + // d.scan thinks we just read an object key; finish the object + d.scan.step(&d.scan, ':') + d.scan.step(&d.scan, '"') + d.scan.step(&d.scan, '"') + d.scan.step(&d.scan, '}') + } + + return + } + + switch op := d.scanWhile(scanSkipSpace); op { + default: + d.error(errPhase) + + case scanBeginArray: + d.array(v) + + case scanBeginObject: + d.object(v) + + case scanBeginLiteral: + d.literal(v) + + case scanBeginName: + d.name(v) + } +} + +type unquotedValue struct{} + +// valueQuoted is like value but decodes a +// quoted string literal or literal null into an interface value. +// If it finds anything other than a quoted string literal or null, +// valueQuoted returns unquotedValue{}. 
+func (d *decodeState) valueQuoted() interface{} { + switch op := d.scanWhile(scanSkipSpace); op { + default: + d.error(errPhase) + + case scanBeginArray: + d.array(reflect.Value{}) + + case scanBeginObject: + d.object(reflect.Value{}) + + case scanBeginName: + switch v := d.nameInterface().(type) { + case nil, string: + return v + } + + case scanBeginLiteral: + switch v := d.literalInterface().(type) { + case nil, string: + return v + } + } + return unquotedValue{} +} + +// indirect walks down v allocating pointers as needed, +// until it gets to a non-pointer. +// if it encounters an Unmarshaler, indirect stops and returns that. +// if decodingNull is true, indirect stops at the last pointer so it can be set to nil. +func (d *decodeState) indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnmarshaler, reflect.Value) { + // If v is a named type and is addressable, + // start with its address, so that if the type has pointer methods, + // we find them. + if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() { + v = v.Addr() + } + for { + // Load value from interface, but only if the result will be + // usefully addressable. + if v.Kind() == reflect.Interface && !v.IsNil() { + e := v.Elem() + if e.Kind() == reflect.Ptr && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Ptr) { + v = e + continue + } + } + + if v.Kind() != reflect.Ptr { + break + } + + if v.Elem().Kind() != reflect.Ptr && decodingNull && v.CanSet() { + break + } + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + if v.Type().NumMethod() > 0 { + if u, ok := v.Interface().(Unmarshaler); ok { + return u, nil, v + } + if u, ok := v.Interface().(encoding.TextUnmarshaler); ok { + return nil, u, v + } + } + v = v.Elem() + } + return nil, nil, v +} + +// array consumes an array from d.data[d.off-1:], decoding into the value v. +// the first byte of the array ('[') has been read already. +func (d *decodeState) array(v reflect.Value) { + // Check for unmarshaler. + u, ut, pv := d.indirect(v, false) + if u != nil { + d.off-- + err := u.UnmarshalJSON(d.next()) + if err != nil { + d.error(err) + } + return + } + if ut != nil { + d.saveError(&UnmarshalTypeError{"array", v.Type(), int64(d.off)}) + d.off-- + d.next() + return + } + + v = pv + + // Check type of target. + switch v.Kind() { + case reflect.Interface: + if v.NumMethod() == 0 { + // Decoding into nil interface? Switch to non-reflect code. + v.Set(reflect.ValueOf(d.arrayInterface())) + return + } + // Otherwise it's invalid. + fallthrough + default: + d.saveError(&UnmarshalTypeError{"array", v.Type(), int64(d.off)}) + d.off-- + d.next() + return + case reflect.Array: + case reflect.Slice: + break + } + + i := 0 + for { + // Look ahead for ] - can only happen on first iteration. + op := d.scanWhile(scanSkipSpace) + if op == scanEndArray { + break + } + + // Back up so d.value can have the byte we just read. + d.off-- + d.scan.undo(op) + + // Get element of array, growing if necessary. + if v.Kind() == reflect.Slice { + // Grow slice if necessary + if i >= v.Cap() { + newcap := v.Cap() + v.Cap()/2 + if newcap < 4 { + newcap = 4 + } + newv := reflect.MakeSlice(v.Type(), v.Len(), newcap) + reflect.Copy(newv, v) + v.Set(newv) + } + if i >= v.Len() { + v.SetLen(i + 1) + } + } + + if i < v.Len() { + // Decode into element. + d.value(v.Index(i)) + } else { + // Ran out of fixed array: skip. + d.value(reflect.Value{}) + } + i++ + + // Next token must be , or ]. 
+ op = d.scanWhile(scanSkipSpace) + if op == scanEndArray { + break + } + if op != scanArrayValue { + d.error(errPhase) + } + } + + if i < v.Len() { + if v.Kind() == reflect.Array { + // Array. Zero the rest. + z := reflect.Zero(v.Type().Elem()) + for ; i < v.Len(); i++ { + v.Index(i).Set(z) + } + } else { + v.SetLen(i) + } + } + if i == 0 && v.Kind() == reflect.Slice { + v.Set(reflect.MakeSlice(v.Type(), 0, 0)) + } +} + +var nullLiteral = []byte("null") +var textUnmarshalerType = reflect.TypeOf(new(encoding.TextUnmarshaler)).Elem() + +// object consumes an object from d.data[d.off-1:], decoding into the value v. +// the first byte ('{') of the object has been read already. +func (d *decodeState) object(v reflect.Value) { + // Check for unmarshaler. + u, ut, pv := d.indirect(v, false) + if d.storeKeyed(pv) { + return + } + if u != nil { + d.off-- + err := u.UnmarshalJSON(d.next()) + if err != nil { + d.error(err) + } + return + } + if ut != nil { + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + d.off-- + d.next() // skip over { } in input + return + } + v = pv + + // Decoding into nil interface? Switch to non-reflect code. + if v.Kind() == reflect.Interface && v.NumMethod() == 0 { + v.Set(reflect.ValueOf(d.objectInterface())) + return + } + + // Check type of target: + // struct or + // map[string]T or map[encoding.TextUnmarshaler]T + switch v.Kind() { + case reflect.Map: + // Map key must either have string kind or be an encoding.TextUnmarshaler. + t := v.Type() + if t.Key().Kind() != reflect.String && + !reflect.PtrTo(t.Key()).Implements(textUnmarshalerType) { + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + d.off-- + d.next() // skip over { } in input + return + } + if v.IsNil() { + v.Set(reflect.MakeMap(t)) + } + case reflect.Struct: + + default: + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + d.off-- + d.next() // skip over { } in input + return + } + + var mapElem reflect.Value + + empty := true + for { + // Read opening " of string key or closing }. + op := d.scanWhile(scanSkipSpace) + if op == scanEndObject { + if !empty && !d.ext.trailingCommas { + d.syntaxError("beginning of object key string") + } + break + } + empty = false + if op == scanBeginName { + if !d.ext.unquotedKeys { + d.syntaxError("beginning of object key string") + } + } else if op != scanBeginLiteral { + d.error(errPhase) + } + unquotedKey := op == scanBeginName + + // Read key. + start := d.off - 1 + op = d.scanWhile(scanContinue) + item := d.data[start : d.off-1] + var key []byte + if unquotedKey { + key = item + // TODO Fix code below to quote item when necessary. + } else { + var ok bool + key, ok = unquoteBytes(item) + if !ok { + d.error(errPhase) + } + } + + // Figure out field corresponding to key. 
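+// The destring flag computed just below backs the ",string" struct tag
+// option, in which the JSON value is expected to arrive wrapped in a string
+// and is unquoted before being stored. On the caller's side the tag looks
+// like this (the type and field are illustrative):
+//
+//     type Order struct {
+//         ID int64 `json:"id,string"`
+//     }
+//
+//     // {"id": "123"} decodes with ID == 123, while a bare 123 is rejected
+//     // as an invalid use of the ,string tag.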
+ var subv reflect.Value + destring := false // whether the value is wrapped in a string to be decoded first + + if v.Kind() == reflect.Map { + elemType := v.Type().Elem() + if !mapElem.IsValid() { + mapElem = reflect.New(elemType).Elem() + } else { + mapElem.Set(reflect.Zero(elemType)) + } + subv = mapElem + } else { + var f *field + fields := cachedTypeFields(v.Type()) + for i := range fields { + ff := &fields[i] + if bytes.Equal(ff.nameBytes, key) { + f = ff + break + } + if f == nil && ff.equalFold(ff.nameBytes, key) { + f = ff + } + } + if f != nil { + subv = v + destring = f.quoted + for _, i := range f.index { + if subv.Kind() == reflect.Ptr { + if subv.IsNil() { + subv.Set(reflect.New(subv.Type().Elem())) + } + subv = subv.Elem() + } + subv = subv.Field(i) + } + } + } + + // Read : before value. + if op == scanSkipSpace { + op = d.scanWhile(scanSkipSpace) + } + if op != scanObjectKey { + d.error(errPhase) + } + + // Read value. + if destring { + switch qv := d.valueQuoted().(type) { + case nil: + d.literalStore(nullLiteral, subv, false) + case string: + d.literalStore([]byte(qv), subv, true) + default: + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal unquoted value into %v", subv.Type())) + } + } else { + d.value(subv) + } + + // Write value back to map; + // if using struct, subv points into struct already. + if v.Kind() == reflect.Map { + kt := v.Type().Key() + var kv reflect.Value + switch { + case kt.Kind() == reflect.String: + kv = reflect.ValueOf(key).Convert(v.Type().Key()) + case reflect.PtrTo(kt).Implements(textUnmarshalerType): + kv = reflect.New(v.Type().Key()) + d.literalStore(item, kv, true) + kv = kv.Elem() + default: + panic("json: Unexpected key type") // should never occur + } + v.SetMapIndex(kv, subv) + } + + // Next token must be , or }. + op = d.scanWhile(scanSkipSpace) + if op == scanEndObject { + break + } + if op != scanObjectValue { + d.error(errPhase) + } + } +} + +// isNull returns whether there's a null literal at the provided offset. +func (d *decodeState) isNull(off int) bool { + if off+4 >= len(d.data) || d.data[off] != 'n' || d.data[off+1] != 'u' || d.data[off+2] != 'l' || d.data[off+3] != 'l' { + return false + } + d.nextscan.reset() + for i, c := range d.data[off:] { + if i > 4 { + return false + } + switch d.nextscan.step(&d.nextscan, c) { + case scanContinue, scanBeginName: + continue + } + break + } + return true +} + +// name consumes a const or function from d.data[d.off-1:], decoding into the value v. +// the first byte of the function name has been read already. +func (d *decodeState) name(v reflect.Value) { + if d.isNull(d.off-1) { + d.literal(v) + return + } + + // Check for unmarshaler. + u, ut, pv := d.indirect(v, false) + if d.storeKeyed(pv) { + return + } + if u != nil { + d.off-- + err := u.UnmarshalJSON(d.next()) + if err != nil { + d.error(err) + } + return + } + if ut != nil { + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + d.off-- + d.next() // skip over function in input + return + } + v = pv + + // Decoding into nil interface? Switch to non-reflect code. + if v.Kind() == reflect.Interface && v.NumMethod() == 0 { + out := d.nameInterface() + if out == nil { + v.Set(reflect.Zero(v.Type())) + } else { + v.Set(reflect.ValueOf(out)) + } + return + } + + nameStart := d.off - 1 + + op := d.scanWhile(scanContinue) + + name := d.data[nameStart : d.off-1] + if op != scanParam { + // Back up so the byte just read is consumed next. 
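+// Reaching this branch means the identifier was not followed by '(': it is a
+// bare name such as true, false, null, or a constant registered through the
+// decoder's Extension, so it is resolved via convertLiteral rather than being
+// parsed as a function call.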
+ d.off-- + d.scan.undo(op) + if l, ok := d.convertLiteral(name); ok { + d.storeValue(v, l) + return + } + d.error(&SyntaxError{fmt.Sprintf("json: unknown constant %q", name), int64(d.off)}) + } + + funcName := string(name) + funcData := d.ext.funcs[funcName] + if funcData.key == "" { + d.error(fmt.Errorf("json: unknown function %q", funcName)) + } + + // Check type of target: + // struct or + // map[string]T or map[encoding.TextUnmarshaler]T + switch v.Kind() { + case reflect.Map: + // Map key must either have string kind or be an encoding.TextUnmarshaler. + t := v.Type() + if t.Key().Kind() != reflect.String && + !reflect.PtrTo(t.Key()).Implements(textUnmarshalerType) { + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + d.off-- + d.next() // skip over { } in input + return + } + if v.IsNil() { + v.Set(reflect.MakeMap(t)) + } + case reflect.Struct: + + default: + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + d.off-- + d.next() // skip over { } in input + return + } + + // TODO Fix case of func field as map. + //topv := v + + // Figure out field corresponding to function. + key := []byte(funcData.key) + if v.Kind() == reflect.Map { + elemType := v.Type().Elem() + v = reflect.New(elemType).Elem() + } else { + var f *field + fields := cachedTypeFields(v.Type()) + for i := range fields { + ff := &fields[i] + if bytes.Equal(ff.nameBytes, key) { + f = ff + break + } + if f == nil && ff.equalFold(ff.nameBytes, key) { + f = ff + } + } + if f != nil { + for _, i := range f.index { + if v.Kind() == reflect.Ptr { + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + v = v.Elem() + } + v = v.Field(i) + } + if v.Kind() == reflect.Ptr { + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + v = v.Elem() + } + } + } + + // Check for unmarshaler on func field itself. + u, ut, pv = d.indirect(v, false) + if u != nil { + d.off = nameStart + err := u.UnmarshalJSON(d.next()) + if err != nil { + d.error(err) + } + return + } + + var mapElem reflect.Value + + // Parse function arguments. + for i := 0; ; i++ { + // closing ) - can only happen on first iteration. + op := d.scanWhile(scanSkipSpace) + if op == scanEndParams { + break + } + + // Back up so d.value can have the byte we just read. + d.off-- + d.scan.undo(op) + + if i >= len(funcData.args) { + d.error(fmt.Errorf("json: too many arguments for function %s", funcName)) + } + key := []byte(funcData.args[i]) + + // Figure out field corresponding to key. + var subv reflect.Value + destring := false // whether the value is wrapped in a string to be decoded first + + if v.Kind() == reflect.Map { + elemType := v.Type().Elem() + if !mapElem.IsValid() { + mapElem = reflect.New(elemType).Elem() + } else { + mapElem.Set(reflect.Zero(elemType)) + } + subv = mapElem + } else { + var f *field + fields := cachedTypeFields(v.Type()) + for i := range fields { + ff := &fields[i] + if bytes.Equal(ff.nameBytes, key) { + f = ff + break + } + if f == nil && ff.equalFold(ff.nameBytes, key) { + f = ff + } + } + if f != nil { + subv = v + destring = f.quoted + for _, i := range f.index { + if subv.Kind() == reflect.Ptr { + if subv.IsNil() { + subv.Set(reflect.New(subv.Type().Elem())) + } + subv = subv.Elem() + } + subv = subv.Field(i) + } + } + } + + // Read value. 
+ if destring { + switch qv := d.valueQuoted().(type) { + case nil: + d.literalStore(nullLiteral, subv, false) + case string: + d.literalStore([]byte(qv), subv, true) + default: + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal unquoted value into %v", subv.Type())) + } + } else { + d.value(subv) + } + + // Write value back to map; + // if using struct, subv points into struct already. + if v.Kind() == reflect.Map { + kt := v.Type().Key() + var kv reflect.Value + switch { + case kt.Kind() == reflect.String: + kv = reflect.ValueOf(key).Convert(v.Type().Key()) + case reflect.PtrTo(kt).Implements(textUnmarshalerType): + kv = reflect.New(v.Type().Key()) + d.literalStore(key, kv, true) + kv = kv.Elem() + default: + panic("json: Unexpected key type") // should never occur + } + v.SetMapIndex(kv, subv) + } + + // Next token must be , or ). + op = d.scanWhile(scanSkipSpace) + if op == scanEndParams { + break + } + if op != scanParam { + d.error(errPhase) + } + } +} + +// keyed attempts to decode an object or function using a keyed doc extension, +// and returns the value and true on success, or nil and false otherwise. +func (d *decodeState) keyed() (interface{}, bool) { + if len(d.ext.keyed) == 0 { + return nil, false + } + + unquote := false + + // Look-ahead first key to check for a keyed document extension. + d.nextscan.reset() + var start, end int + for i, c := range d.data[d.off-1:] { + switch op := d.nextscan.step(&d.nextscan, c); op { + case scanSkipSpace, scanContinue, scanBeginObject: + continue + case scanBeginLiteral, scanBeginName: + unquote = op == scanBeginLiteral + start = i + continue + } + end = i + break + } + + name := d.data[d.off-1+start : d.off-1+end] + + var key []byte + var ok bool + if unquote { + key, ok = unquoteBytes(name) + if !ok { + d.error(errPhase) + } + } else { + funcData, ok := d.ext.funcs[string(name)] + if !ok { + return nil, false + } + key = []byte(funcData.key) + } + + decode, ok := d.ext.keyed[string(key)] + if !ok { + return nil, false + } + + d.off-- + out, err := decode(d.next()) + if err != nil { + d.error(err) + } + return out, true +} + +func (d *decodeState) storeKeyed(v reflect.Value) bool { + keyed, ok := d.keyed() + if !ok { + return false + } + d.storeValue(v, keyed) + return true +} + +var ( + trueBytes = []byte("true") + falseBytes = []byte("false") + nullBytes = []byte("null") +) + +func (d *decodeState) storeValue(v reflect.Value, from interface{}) { + switch from { + case nil: + d.literalStore(nullBytes, v, false) + return + case true: + d.literalStore(trueBytes, v, false) + return + case false: + d.literalStore(falseBytes, v, false) + return + } + fromv := reflect.ValueOf(from) + for fromv.Kind() == reflect.Ptr && !fromv.IsNil() { + fromv = fromv.Elem() + } + fromt := fromv.Type() + for v.Kind() == reflect.Ptr && !v.IsNil() { + v = v.Elem() + } + vt := v.Type() + if fromt.AssignableTo(vt) { + v.Set(fromv) + } else if fromt.ConvertibleTo(vt) { + v.Set(fromv.Convert(vt)) + } else { + d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)}) + } +} + +func (d *decodeState) convertLiteral(name []byte) (interface{}, bool) { + if len(name) == 0 { + return nil, false + } + switch name[0] { + case 't': + if bytes.Equal(name, trueBytes) { + return true, true + } + case 'f': + if bytes.Equal(name, falseBytes) { + return false, true + } + case 'n': + if bytes.Equal(name, nullBytes) { + return nil, true + } + } + if l, ok := d.ext.consts[string(name)]; ok { + return l, true + } + return nil, false +} + 
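// Illustrative sketch (not part of the vendored file): the destring /
// valueQuoted path above handles the ",string" struct tag option, which
// unwraps a JSON string before decoding the value inside it and reports
// "invalid use of ,string struct tag" when the value is not quoted. The same
// behavior can be exercised through the standard library encoding/json,
// which this vendored package forks; the Config type and field names below
// are hypothetical.
//
//	package main
//
//	import (
//		"encoding/json"
//		"fmt"
//	)
//
//	type Config struct {
//		// Port arrives as a JSON string ("8080") but decodes into an int.
//		Port int `json:"port,string"`
//	}
//
//	func main() {
//		var c Config
//
//		// Quoted number: the ",string" option unwraps it, so Port becomes 8080.
//		if err := json.Unmarshal([]byte(`{"port": "8080"}`), &c); err != nil {
//			fmt.Println("error:", err)
//			return
//		}
//		fmt.Println(c.Port) // 8080
//
//		// Unquoted number: typically rejected with a ",string" struct tag error.
//		err := json.Unmarshal([]byte(`{"port": 8080}`), &c)
//		fmt.Println(err)
//	}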
+// literal consumes a literal from d.data[d.off-1:], decoding into the value v. +// The first byte of the literal has been read already +// (that's how the caller knows it's a literal). +func (d *decodeState) literal(v reflect.Value) { + // All bytes inside literal return scanContinue op code. + start := d.off - 1 + op := d.scanWhile(scanContinue) + + // Scan read one byte too far; back up. + d.off-- + d.scan.undo(op) + + d.literalStore(d.data[start:d.off], v, false) +} + +// convertNumber converts the number literal s to a float64 or a Number +// depending on the setting of d.useNumber. +func (d *decodeState) convertNumber(s string) (interface{}, error) { + if d.useNumber { + return Number(s), nil + } + f, err := strconv.ParseFloat(s, 64) + if err != nil { + return nil, &UnmarshalTypeError{"number " + s, reflect.TypeOf(0.0), int64(d.off)} + } + return f, nil +} + +var numberType = reflect.TypeOf(Number("")) + +// literalStore decodes a literal stored in item into v. +// +// fromQuoted indicates whether this literal came from unwrapping a +// string from the ",string" struct tag option. this is used only to +// produce more helpful error messages. +func (d *decodeState) literalStore(item []byte, v reflect.Value, fromQuoted bool) { + // Check for unmarshaler. + if len(item) == 0 { + //Empty string given + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + return + } + wantptr := item[0] == 'n' // null + u, ut, pv := d.indirect(v, wantptr) + if u != nil { + err := u.UnmarshalJSON(item) + if err != nil { + d.error(err) + } + return + } + if ut != nil { + if item[0] != '"' { + if fromQuoted { + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)}) + } + return + } + s, ok := unquoteBytes(item) + if !ok { + if fromQuoted { + d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.error(errPhase) + } + } + err := ut.UnmarshalText(s) + if err != nil { + d.error(err) + } + return + } + + v = pv + + switch c := item[0]; c { + case 'n': // null + switch v.Kind() { + case reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice: + v.Set(reflect.Zero(v.Type())) + // otherwise, ignore null for primitives/string + } + case 't', 'f': // true, false + value := c == 't' + switch v.Kind() { + default: + if fromQuoted { + d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.saveError(&UnmarshalTypeError{"bool", v.Type(), int64(d.off)}) + } + case reflect.Bool: + v.SetBool(value) + case reflect.Interface: + if v.NumMethod() == 0 { + v.Set(reflect.ValueOf(value)) + } else { + d.saveError(&UnmarshalTypeError{"bool", v.Type(), int64(d.off)}) + } + } + + case '"': // string + s, ok := unquoteBytes(item) + if !ok { + if fromQuoted { + d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.error(errPhase) + } + } + switch v.Kind() { + default: + d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)}) + case reflect.Slice: + if v.Type().Elem().Kind() != reflect.Uint8 { + d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)}) + break + } + b := make([]byte, base64.StdEncoding.DecodedLen(len(s))) + n, err := base64.StdEncoding.Decode(b, s) + if err != nil { + d.saveError(err) 
+ break + } + v.SetBytes(b[:n]) + case reflect.String: + v.SetString(string(s)) + case reflect.Interface: + if v.NumMethod() == 0 { + v.Set(reflect.ValueOf(string(s))) + } else { + d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)}) + } + } + + default: // number + if c != '-' && (c < '0' || c > '9') { + if fromQuoted { + d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.error(errPhase) + } + } + s := string(item) + switch v.Kind() { + default: + if v.Kind() == reflect.String && v.Type() == numberType { + v.SetString(s) + if !isValidNumber(s) { + d.error(fmt.Errorf("json: invalid number literal, trying to unmarshal %q into Number", item)) + } + break + } + if fromQuoted { + d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) + } else { + d.error(&UnmarshalTypeError{"number", v.Type(), int64(d.off)}) + } + case reflect.Interface: + n, err := d.convertNumber(s) + if err != nil { + d.saveError(err) + break + } + if v.NumMethod() != 0 { + d.saveError(&UnmarshalTypeError{"number", v.Type(), int64(d.off)}) + break + } + v.Set(reflect.ValueOf(n)) + + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + n, err := strconv.ParseInt(s, 10, 64) + if err != nil || v.OverflowInt(n) { + d.saveError(&UnmarshalTypeError{"number " + s, v.Type(), int64(d.off)}) + break + } + v.SetInt(n) + + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + n, err := strconv.ParseUint(s, 10, 64) + if err != nil || v.OverflowUint(n) { + d.saveError(&UnmarshalTypeError{"number " + s, v.Type(), int64(d.off)}) + break + } + v.SetUint(n) + + case reflect.Float32, reflect.Float64: + n, err := strconv.ParseFloat(s, v.Type().Bits()) + if err != nil || v.OverflowFloat(n) { + d.saveError(&UnmarshalTypeError{"number " + s, v.Type(), int64(d.off)}) + break + } + v.SetFloat(n) + } + } +} + +// The xxxInterface routines build up a value to be stored +// in an empty interface. They are not strictly necessary, +// but they avoid the weight of reflection in this common case. + +// valueInterface is like value but returns interface{} +func (d *decodeState) valueInterface() interface{} { + switch d.scanWhile(scanSkipSpace) { + default: + d.error(errPhase) + panic("unreachable") + case scanBeginArray: + return d.arrayInterface() + case scanBeginObject: + return d.objectInterface() + case scanBeginLiteral: + return d.literalInterface() + case scanBeginName: + return d.nameInterface() + } +} + +func (d *decodeState) syntaxError(expected string) { + msg := fmt.Sprintf("invalid character '%c' looking for %s", d.data[d.off-1], expected) + d.error(&SyntaxError{msg, int64(d.off)}) +} + +// arrayInterface is like array but returns []interface{}. +func (d *decodeState) arrayInterface() []interface{} { + var v = make([]interface{}, 0) + for { + // Look ahead for ] - can only happen on first iteration. + op := d.scanWhile(scanSkipSpace) + if op == scanEndArray { + if len(v) > 0 && !d.ext.trailingCommas { + d.syntaxError("beginning of value") + } + break + } + + // Back up so d.value can have the byte we just read. + d.off-- + d.scan.undo(op) + + v = append(v, d.valueInterface()) + + // Next token must be , or ]. + op = d.scanWhile(scanSkipSpace) + if op == scanEndArray { + break + } + if op != scanArrayValue { + d.error(errPhase) + } + } + return v +} + +// objectInterface is like object but returns map[string]interface{}. 
+func (d *decodeState) objectInterface() interface{} { + v, ok := d.keyed() + if ok { + return v + } + + m := make(map[string]interface{}) + for { + // Read opening " of string key or closing }. + op := d.scanWhile(scanSkipSpace) + if op == scanEndObject { + if len(m) > 0 && !d.ext.trailingCommas { + d.syntaxError("beginning of object key string") + } + break + } + if op == scanBeginName { + if !d.ext.unquotedKeys { + d.syntaxError("beginning of object key string") + } + } else if op != scanBeginLiteral { + d.error(errPhase) + } + unquotedKey := op == scanBeginName + + // Read string key. + start := d.off - 1 + op = d.scanWhile(scanContinue) + item := d.data[start : d.off-1] + var key string + if unquotedKey { + key = string(item) + } else { + var ok bool + key, ok = unquote(item) + if !ok { + d.error(errPhase) + } + } + + // Read : before value. + if op == scanSkipSpace { + op = d.scanWhile(scanSkipSpace) + } + if op != scanObjectKey { + d.error(errPhase) + } + + // Read value. + m[key] = d.valueInterface() + + // Next token must be , or }. + op = d.scanWhile(scanSkipSpace) + if op == scanEndObject { + break + } + if op != scanObjectValue { + d.error(errPhase) + } + } + return m +} + +// literalInterface is like literal but returns an interface value. +func (d *decodeState) literalInterface() interface{} { + // All bytes inside literal return scanContinue op code. + start := d.off - 1 + op := d.scanWhile(scanContinue) + + // Scan read one byte too far; back up. + d.off-- + d.scan.undo(op) + item := d.data[start:d.off] + + switch c := item[0]; c { + case 'n': // null + return nil + + case 't', 'f': // true, false + return c == 't' + + case '"': // string + s, ok := unquote(item) + if !ok { + d.error(errPhase) + } + return s + + default: // number + if c != '-' && (c < '0' || c > '9') { + d.error(errPhase) + } + n, err := d.convertNumber(string(item)) + if err != nil { + d.saveError(err) + } + return n + } +} + +// nameInterface is like function but returns map[string]interface{}. +func (d *decodeState) nameInterface() interface{} { + v, ok := d.keyed() + if ok { + return v + } + + nameStart := d.off - 1 + + op := d.scanWhile(scanContinue) + + name := d.data[nameStart : d.off-1] + if op != scanParam { + // Back up so the byte just read is consumed next. + d.off-- + d.scan.undo(op) + if l, ok := d.convertLiteral(name); ok { + return l + } + d.error(&SyntaxError{fmt.Sprintf("json: unknown constant %q", name), int64(d.off)}) + } + + funcName := string(name) + funcData := d.ext.funcs[funcName] + if funcData.key == "" { + d.error(fmt.Errorf("json: unknown function %q", funcName)) + } + + m := make(map[string]interface{}) + for i := 0; ; i++ { + // Look ahead for ) - can only happen on first iteration. + op := d.scanWhile(scanSkipSpace) + if op == scanEndParams { + break + } + + // Back up so d.value can have the byte we just read. + d.off-- + d.scan.undo(op) + + if i >= len(funcData.args) { + d.error(fmt.Errorf("json: too many arguments for function %s", funcName)) + } + m[funcData.args[i]] = d.valueInterface() + + // Next token must be , or ). + op = d.scanWhile(scanSkipSpace) + if op == scanEndParams { + break + } + if op != scanParam { + d.error(errPhase) + } + } + return map[string]interface{}{funcData.key: m} +} + +// getu4 decodes \uXXXX from the beginning of s, returning the hex value, +// or it returns -1. 
+func getu4(s []byte) rune { + if len(s) < 6 || s[0] != '\\' || s[1] != 'u' { + return -1 + } + r, err := strconv.ParseUint(string(s[2:6]), 16, 64) + if err != nil { + return -1 + } + return rune(r) +} + +// unquote converts a quoted JSON string literal s into an actual string t. +// The rules are different than for Go, so cannot use strconv.Unquote. +func unquote(s []byte) (t string, ok bool) { + s, ok = unquoteBytes(s) + t = string(s) + return +} + +func unquoteBytes(s []byte) (t []byte, ok bool) { + if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' { + return + } + s = s[1 : len(s)-1] + + // Check for unusual characters. If there are none, + // then no unquoting is needed, so return a slice of the + // original bytes. + r := 0 + for r < len(s) { + c := s[r] + if c == '\\' || c == '"' || c < ' ' { + break + } + if c < utf8.RuneSelf { + r++ + continue + } + rr, size := utf8.DecodeRune(s[r:]) + if rr == utf8.RuneError && size == 1 { + break + } + r += size + } + if r == len(s) { + return s, true + } + + b := make([]byte, len(s)+2*utf8.UTFMax) + w := copy(b, s[0:r]) + for r < len(s) { + // Out of room? Can only happen if s is full of + // malformed UTF-8 and we're replacing each + // byte with RuneError. + if w >= len(b)-2*utf8.UTFMax { + nb := make([]byte, (len(b)+utf8.UTFMax)*2) + copy(nb, b[0:w]) + b = nb + } + switch c := s[r]; { + case c == '\\': + r++ + if r >= len(s) { + return + } + switch s[r] { + default: + return + case '"', '\\', '/', '\'': + b[w] = s[r] + r++ + w++ + case 'b': + b[w] = '\b' + r++ + w++ + case 'f': + b[w] = '\f' + r++ + w++ + case 'n': + b[w] = '\n' + r++ + w++ + case 'r': + b[w] = '\r' + r++ + w++ + case 't': + b[w] = '\t' + r++ + w++ + case 'u': + r-- + rr := getu4(s[r:]) + if rr < 0 { + return + } + r += 6 + if utf16.IsSurrogate(rr) { + rr1 := getu4(s[r:]) + if dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar { + // A valid pair; consume. + r += 6 + w += utf8.EncodeRune(b[w:], dec) + break + } + // Invalid surrogate; fall back to replacement rune. + rr = unicode.ReplacementChar + } + w += utf8.EncodeRune(b[w:], rr) + } + + // Quote, control characters are invalid. + case c == '"', c < ' ': + return + + // ASCII + case c < utf8.RuneSelf: + b[w] = c + r++ + w++ + + // Coerce to well-formed UTF-8. + default: + rr, size := utf8.DecodeRune(s[r:]) + r += size + w += utf8.EncodeRune(b[w:], rr) + } + } + return b[0:w], true +} diff --git a/vendor/gopkg.in/mgo.v2/internal/json/encode.go b/vendor/gopkg.in/mgo.v2/internal/json/encode.go new file mode 100644 index 00000000..67a0f006 --- /dev/null +++ b/vendor/gopkg.in/mgo.v2/internal/json/encode.go @@ -0,0 +1,1256 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package json implements encoding and decoding of JSON as defined in +// RFC 4627. The mapping between JSON and Go values is described +// in the documentation for the Marshal and Unmarshal functions. +// +// See "JSON and Go" for an introduction to this package: +// https://golang.org/doc/articles/json_and_go.html +package json + +import ( + "bytes" + "encoding" + "encoding/base64" + "fmt" + "math" + "reflect" + "runtime" + "sort" + "strconv" + "strings" + "sync" + "unicode" + "unicode/utf8" +) + +// Marshal returns the JSON encoding of v. +// +// Marshal traverses the value v recursively. 
+// If an encountered value implements the Marshaler interface +// and is not a nil pointer, Marshal calls its MarshalJSON method +// to produce JSON. If no MarshalJSON method is present but the +// value implements encoding.TextMarshaler instead, Marshal calls +// its MarshalText method. +// The nil pointer exception is not strictly necessary +// but mimics a similar, necessary exception in the behavior of +// UnmarshalJSON. +// +// Otherwise, Marshal uses the following type-dependent default encodings: +// +// Boolean values encode as JSON booleans. +// +// Floating point, integer, and Number values encode as JSON numbers. +// +// String values encode as JSON strings coerced to valid UTF-8, +// replacing invalid bytes with the Unicode replacement rune. +// The angle brackets "<" and ">" are escaped to "\u003c" and "\u003e" +// to keep some browsers from misinterpreting JSON output as HTML. +// Ampersand "&" is also escaped to "\u0026" for the same reason. +// This escaping can be disabled using an Encoder with DisableHTMLEscaping. +// +// Array and slice values encode as JSON arrays, except that +// []byte encodes as a base64-encoded string, and a nil slice +// encodes as the null JSON value. +// +// Struct values encode as JSON objects. Each exported struct field +// becomes a member of the object unless +// - the field's tag is "-", or +// - the field is empty and its tag specifies the "omitempty" option. +// The empty values are false, 0, any +// nil pointer or interface value, and any array, slice, map, or string of +// length zero. The object's default key string is the struct field name +// but can be specified in the struct field's tag value. The "json" key in +// the struct field's tag value is the key name, followed by an optional comma +// and options. Examples: +// +// // Field is ignored by this package. +// Field int `json:"-"` +// +// // Field appears in JSON as key "myName". +// Field int `json:"myName"` +// +// // Field appears in JSON as key "myName" and +// // the field is omitted from the object if its value is empty, +// // as defined above. +// Field int `json:"myName,omitempty"` +// +// // Field appears in JSON as key "Field" (the default), but +// // the field is skipped if empty. +// // Note the leading comma. +// Field int `json:",omitempty"` +// +// The "string" option signals that a field is stored as JSON inside a +// JSON-encoded string. It applies only to fields of string, floating point, +// integer, or boolean types. This extra level of encoding is sometimes used +// when communicating with JavaScript programs: +// +// Int64String int64 `json:",string"` +// +// The key name will be used if it's a non-empty string consisting of +// only Unicode letters, digits, dollar signs, percent signs, hyphens, +// underscores and slashes. +// +// Anonymous struct fields are usually marshaled as if their inner exported fields +// were fields in the outer struct, subject to the usual Go visibility rules amended +// as described in the next paragraph. +// An anonymous struct field with a name given in its JSON tag is treated as +// having that name, rather than being anonymous. +// An anonymous struct field of interface type is treated the same as having +// that type as its name, rather than being anonymous. +// +// The Go visibility rules for struct fields are amended for JSON when +// deciding which field to marshal or unmarshal. 
If there are +// multiple fields at the same level, and that level is the least +// nested (and would therefore be the nesting level selected by the +// usual Go rules), the following extra rules apply: +// +// 1) Of those fields, if any are JSON-tagged, only tagged fields are considered, +// even if there are multiple untagged fields that would otherwise conflict. +// 2) If there is exactly one field (tagged or not according to the first rule), that is selected. +// 3) Otherwise there are multiple fields, and all are ignored; no error occurs. +// +// Handling of anonymous struct fields is new in Go 1.1. +// Prior to Go 1.1, anonymous struct fields were ignored. To force ignoring of +// an anonymous struct field in both current and earlier versions, give the field +// a JSON tag of "-". +// +// Map values encode as JSON objects. The map's key type must either be a string +// or implement encoding.TextMarshaler. The map keys are used as JSON object +// keys, subject to the UTF-8 coercion described for string values above. +// +// Pointer values encode as the value pointed to. +// A nil pointer encodes as the null JSON value. +// +// Interface values encode as the value contained in the interface. +// A nil interface value encodes as the null JSON value. +// +// Channel, complex, and function values cannot be encoded in JSON. +// Attempting to encode such a value causes Marshal to return +// an UnsupportedTypeError. +// +// JSON cannot represent cyclic data structures and Marshal does not +// handle them. Passing cyclic structures to Marshal will result in +// an infinite recursion. +// +func Marshal(v interface{}) ([]byte, error) { + e := &encodeState{} + err := e.marshal(v, encOpts{escapeHTML: true}) + if err != nil { + return nil, err + } + return e.Bytes(), nil +} + +// MarshalIndent is like Marshal but applies Indent to format the output. +func MarshalIndent(v interface{}, prefix, indent string) ([]byte, error) { + b, err := Marshal(v) + if err != nil { + return nil, err + } + var buf bytes.Buffer + err = Indent(&buf, b, prefix, indent) + if err != nil { + return nil, err + } + return buf.Bytes(), nil +} + +// HTMLEscape appends to dst the JSON-encoded src with <, >, &, U+2028 and U+2029 +// characters inside string literals changed to \u003c, \u003e, \u0026, \u2028, \u2029 +// so that the JSON will be safe to embed inside HTML